Tuesday 20 December 2016

improvement of the next operator

Just a brief one today. I tweaked the next operator so that we can now specify the size of the gap it can handle; previously it was hard-coded to 3. Recall that the next operator, given a subsequence, predicts the rest of the parent sequence.

To demonstrate it, let's use the alphabet:
a = {A.B.C.D.E.F.G.H.I.J.K.L.M.N.O.P.Q.R.S.T.U.V.W.X.Y.Z}
Now put it to use (for brevity I omit the gm2sw-v2.py conversion and the load-into-console step):
sa: next[0] |A>
incoming_sequence: ['A']
|B . C . D . E . F . G . H . I . J . K . L . M . N . O . P . Q . R . S . T . U . V . W . X . Y . Z>
|B>

sa: next[0] |A.E>
incoming_sequence: ['A', 'E']
nodes 1: 0.1|node 1: 0>
intersected nodes: |>
|>

sa: next[1] |A.E>
incoming_sequence: ['A', 'E']
nodes 1: 0.1|node 1: 0>
intersected nodes: |>
|>

sa: next[2] |A.E>
incoming_sequence: ['A', 'E']
nodes 1: 0.1|node 1: 0>
intersected nodes: |>
|>

sa: next[3] |A.E>
incoming_sequence: ['A', 'E']
nodes 1: 0.1|node 1: 0>
intersected nodes: 0.1|node 1: 4>
|F . G . H . I . J . K . L . M . N . O . P . Q . R . S . T . U . V . W . X . Y . Z>
|F>

sa: next[4] |A.E>
incoming_sequence: ['A', 'E']
nodes 1: 0.1|node 1: 0>
intersected nodes: 0.1|node 1: 4>
|F . G . H . I . J . K . L . M . N . O . P . Q . R . S . T . U . V . W . X . Y . Z>
|F>
So, what is happening there? Well, in the first example, given A, we predict the rest of the alphabet. In the next example, given A followed by E, we predict the rest of the sequence, but observe that we get the null result |> until we allow a gap of at least 3, ie the three skipped letters B, C and D. And that's it! A small improvement to our next operator.
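As a footnote, here is a tiny plain-Python sketch of what the gap parameter means. This is illustration only, not the console implementation, and the function name and list representation are mine:
# Sketch only: does symbol b occur within max_gap skipped elements after symbol a?
def follows_within_gap(sequence, a, b, max_gap):
  for i, x in enumerate(sequence):
    if x == a and b in sequence[i + 1:i + 2 + max_gap]:
      return True
  return False

alphabet = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
for k in range(5):
  print(k, follows_within_gap(alphabet, "A", "E", k))
# prints False for k = 0, 1, 2 and True for k = 3, 4,
# matching the null results above until next[3]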

Monday 19 December 2016

learning and recalling a sequence of frames

In this post we tweak our sequence learning code to learn and recall a short sequence of random frames. Previously all our sequences have been over random SDR's with 10 bits on. This post shows we can extend this to sequences of almost arbitrary SDR's; perhaps the only limitation is that these SDR's need enough on bits. Thanks to the random-column[k] operator acting on our SDR's we have an upper bound of k^n distinct contexts, where n is the number of on bits in the given SDR (somewhat fewer in practice, to allow for noise tolerance). Which means the 10 on bits and k = 10 we have been using give up to 10^10 contexts, more than enough even for the 74k sequences in the spelling dictionary example.

To learn and recall our frames, we need three new operators:
random-frame[w,h,k]
display-frame[w,h]
display-frame-sequence[w,h]
where w,h are the width and height of the frames, and k is the number of on bits. For example, here is a 15*15 frame with 10 on bits:
sa: display-frame[15,15] random-frame[15,15,10]
....#..........
.........#...#.
.......#.......
...............
...............
..............#
.......#.......
...............
...............
..........#....
...............
..#..........#.
...............
.........#.....
...............
And this is what the random-frame SDR's look like (we just store the co-ordinates of the on bits):
sa: random-frame[15,15,10]
|7: 7> + |6: 2> + |10: 8> + |4: 12> + |4: 5> + |13: 9> + |8: 8> + |12: 8> + |13: 0> + |12: 14>
And now displaying this exact frame:
sa: display-frame[15,15] (|7: 7> + |6: 2> + |10: 8> + |4: 12> + |4: 5> + |13: 9> + |8: 8> + |12: 8> + |13: 0> + |12: 14>)
.............#.
...............
......#........
...............
...............
....#..........
...............
.......#.......
........#.#.#..
.............#.
...............
...............
....#..........
...............
............#..
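As an aside, here is a rough plain-Python sketch of what random-frame and display-frame do, assuming a frame is just the collection of (x, y) co-ordinates of its on bits. The function names mirror the operators, but this is not the console code:
import random

# Sketch only: a frame as a set of (x, y) co-ordinates of the on bits.
def random_frame(w, h, k):
  cells = [(x, y) for x in range(w) for y in range(h)]
  return set(random.sample(cells, k))

def display_frame(frame, w, h):
  for y in range(h):
    print("".join("#" if (x, y) in frame else "." for x in range(w)))

display_frame(random_frame(15, 15, 10), 15, 15)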
OK. On to the main event. Let's learn these two sequences defined using gm notation:
$ cat gm-examples/frame-sequences.gm
seq-1 = {1.2.3.4.5}
seq-2 = {2.4.1.5.3}
We convert that to sw using gm2sw-v2.py, and then manually edit it to look like this:
-- frames:
frame |1> => random-frame[10,10,10] |>
frame |2> => random-frame[10,10,10] |>
frame |3> => random-frame[10,10,10] |>
frame |4> => random-frame[10,10,10] |>
frame |5> => random-frame[10,10,10] |>
frame |end of sequence> => random-frame[10,10,10] |>

-- learn low level sequences:
-- empty sequence
pattern |node 0: 0> => random-column[10] frame |end of sequence>

-- 1 . 2 . 3 . 4 . 5
pattern |node 1: 0> => random-column[10] frame |1>
then |node 1: 0> => random-column[10] frame |2>

pattern |node 1: 1> => then |node 1: 0>
then |node 1: 1> => random-column[10] frame |3>

pattern |node 1: 2> => then |node 1: 1>
then |node 1: 2> => random-column[10] frame |4>

pattern |node 1: 3> => then |node 1: 2>
then |node 1: 3> => random-column[10] frame |5>

pattern |node 1: 4> => then |node 1: 3>
then |node 1: 4> => random-column[10] frame |end of sequence>

-- 2 . 4 . 1 . 5 . 3
pattern |node 2: 0> => random-column[10] frame |2>
then |node 2: 0> => random-column[10] frame |4>

pattern |node 2: 1> => then |node 2: 0>
then |node 2: 1> => random-column[10] frame |1>

pattern |node 2: 2> => then |node 2: 1>
then |node 2: 2> => random-column[10] frame |5>

pattern |node 2: 3> => then |node 2: 2>
then |node 2: 3> => random-column[10] frame |3>

pattern |node 2: 4> => then |node 2: 3>
then |node 2: 4> => random-column[10] frame |end of sequence>


-- define our classes:
-- seq-1 = {1.2.3.4.5}
start-node |seq 1> +=> |node 1: 0>

-- seq-2 = {2.4.1.5.3}
start-node |seq 2> +=> |node 2: 0>
After loading that into the console we have:
$ ./the_semantic_db_console.py
Welcome!

sa: load frame-sequences.sw
sa: dump

----------------------------------------
|context> => |context: sw console>

frame |1> => |0: 9> + |5: 1> + |5: 5> + |8: 1> + |8: 2> + |7: 7> + |9: 1> + |3: 0> + |6: 3> + |2: 8>

frame |2> => |0: 6> + |3: 8> + |1: 2> + |2: 5> + |2: 2> + |0: 9> + |4: 3> + |6: 6> + |9: 4> + |9: 0>

frame |3> => |5: 7> + |9: 1> + |9: 3> + |0: 8> + |0: 0> + |8: 6> + |7: 9> + |5: 9> + |2: 5> + |7: 0>

frame |4> => |9: 3> + |1: 7> + |6: 9> + |0: 0> + |4: 6> + |8: 1> + |7: 4> + |4: 8> + |9: 1> + |4: 1>

frame |5> => |3: 3> + |5: 3> + |5: 9> + |5: 8> + |7: 1> + |7: 6> + |3: 1> + |4: 1> + |2: 2> + |5: 2>

frame |end of sequence> => |3: 8> + |7: 7> + |1: 3> + |2: 0> + |9: 2> + |0: 7> + |5: 7> + |3: 1> + |4: 9> + |3: 0>

pattern |node 0: 0> => |3: 8: 7> + |7: 7: 4> + |1: 3: 0> + |2: 0: 0> + |9: 2: 3> + |0: 7: 8> + |5: 7: 2> + |3: 1: 0> + |4: 9: 1> + |3: 0: 6>

pattern |node 1: 0> => |0: 9: 0> + |5: 1: 9> + |5: 5: 1> + |8: 1: 6> + |8: 2: 1> + |7: 7: 7> + |9: 1: 6> + |3: 0: 6> + |6: 3: 4> + |2: 8: 2>
then |node 1: 0> => |0: 6: 0> + |3: 8: 4> + |1: 2: 4> + |2: 5: 7> + |2: 2: 9> + |0: 9: 5> + |4: 3: 3> + |6: 6: 2> + |9: 4: 9> + |9: 0: 4>

pattern |node 1: 1> => |0: 6: 0> + |3: 8: 4> + |1: 2: 4> + |2: 5: 7> + |2: 2: 9> + |0: 9: 5> + |4: 3: 3> + |6: 6: 2> + |9: 4: 9> + |9: 0: 4>
then |node 1: 1> => |5: 7: 8> + |9: 1: 1> + |9: 3: 1> + |0: 8: 2> + |0: 0: 6> + |8: 6: 8> + |7: 9: 3> + |5: 9: 6> + |2: 5: 7> + |7: 0: 0>

pattern |node 1: 2> => |5: 7: 8> + |9: 1: 1> + |9: 3: 1> + |0: 8: 2> + |0: 0: 6> + |8: 6: 8> + |7: 9: 3> + |5: 9: 6> + |2: 5: 7> + |7: 0: 0>
then |node 1: 2> => |9: 3: 9> + |1: 7: 0> + |6: 9: 9> + |0: 0: 9> + |4: 6: 5> + |8: 1: 3> + |7: 4: 5> + |4: 8: 0> + |9: 1: 8> + |4: 1: 7>

pattern |node 1: 3> => |9: 3: 9> + |1: 7: 0> + |6: 9: 9> + |0: 0: 9> + |4: 6: 5> + |8: 1: 3> + |7: 4: 5> + |4: 8: 0> + |9: 1: 8> + |4: 1: 7>
then |node 1: 3> => |3: 3: 7> + |5: 3: 6> + |5: 9: 4> + |5: 8: 4> + |7: 1: 7> + |7: 6: 9> + |3: 1: 8> + |4: 1: 3> + |2: 2: 5> + |5: 2: 3>

pattern |node 1: 4> => |3: 3: 7> + |5: 3: 6> + |5: 9: 4> + |5: 8: 4> + |7: 1: 7> + |7: 6: 9> + |3: 1: 8> + |4: 1: 3> + |2: 2: 5> + |5: 2: 3>
then |node 1: 4> => |3: 8: 6> + |7: 7: 0> + |1: 3: 7> + |2: 0: 9> + |9: 2: 0> + |0: 7: 1> + |5: 7: 3> + |3: 1: 9> + |4: 9: 2> + |3: 0: 9>

pattern |node 2: 0> => |0: 6: 7> + |3: 8: 5> + |1: 2: 9> + |2: 5: 4> + |2: 2: 3> + |0: 9: 2> + |4: 3: 4> + |6: 6: 1> + |9: 4: 8> + |9: 0: 6>
then |node 2: 0> => |9: 3: 8> + |1: 7: 3> + |6: 9: 0> + |0: 0: 2> + |4: 6: 8> + |8: 1: 1> + |7: 4: 0> + |4: 8: 4> + |9: 1: 0> + |4: 1: 1>

pattern |node 2: 1> => |9: 3: 8> + |1: 7: 3> + |6: 9: 0> + |0: 0: 2> + |4: 6: 8> + |8: 1: 1> + |7: 4: 0> + |4: 8: 4> + |9: 1: 0> + |4: 1: 1>
then |node 2: 1> => |0: 9: 9> + |5: 1: 6> + |5: 5: 0> + |8: 1: 0> + |8: 2: 8> + |7: 7: 2> + |9: 1: 1> + |3: 0: 2> + |6: 3: 7> + |2: 8: 3>

pattern |node 2: 2> => |0: 9: 9> + |5: 1: 6> + |5: 5: 0> + |8: 1: 0> + |8: 2: 8> + |7: 7: 2> + |9: 1: 1> + |3: 0: 2> + |6: 3: 7> + |2: 8: 3>
then |node 2: 2> => |3: 3: 9> + |5: 3: 8> + |5: 9: 6> + |5: 8: 7> + |7: 1: 8> + |7: 6: 5> + |3: 1: 5> + |4: 1: 9> + |2: 2: 8> + |5: 2: 0>

pattern |node 2: 3> => |3: 3: 9> + |5: 3: 8> + |5: 9: 6> + |5: 8: 7> + |7: 1: 8> + |7: 6: 5> + |3: 1: 5> + |4: 1: 9> + |2: 2: 8> + |5: 2: 0>
then |node 2: 3> => |5: 7: 0> + |9: 1: 3> + |9: 3: 8> + |0: 8: 5> + |0: 0: 8> + |8: 6: 6> + |7: 9: 9> + |5: 9: 5> + |2: 5: 9> + |7: 0: 8>

pattern |node 2: 4> => |5: 7: 0> + |9: 1: 3> + |9: 3: 8> + |0: 8: 5> + |0: 0: 8> + |8: 6: 6> + |7: 9: 9> + |5: 9: 5> + |2: 5: 9> + |7: 0: 8>
then |node 2: 4> => |3: 8: 8> + |7: 7: 1> + |1: 3: 7> + |2: 0: 1> + |9: 2: 3> + |0: 7: 1> + |5: 7: 4> + |3: 1: 6> + |4: 9: 1> + |3: 0: 8>

start-node |seq 1> => |node 1: 0>

start-node |seq 2> => |node 2: 0>
----------------------------------------
with one interpretation being that each ket is the co-ordinate of a synapse, eg |5: 1> or |5: 9: 4>. Note that frames are 2D, and random-column[k] maps them to 3D. It is this extra dimension that allows an SDR to be used in more than one context (indeed, an upper bound of k^n distinct contexts, where n is the number of on bits), though in the current example we only have two distinct sequences. Now let's display a couple of frames:
sa: display-frame[10,10] frame |1>
...#......
.....#..##
........#.
......#...
..........
.....#....
..........
.......#..
..#.......
#.........

sa: display-frame[10,10] frame |2>
.........#
..........
.##.......
....#.....
.........#
..#.......
#.....#...
..........
...#......
#.........
Now a couple of frames in our first sequence, noting that "extract-category" is the inverse of random-column[k], and hence converts the pattern SDR back to 2D:
sa: display-frame[10,10] extract-category pattern |node 1: 0>
...#......
.....#..##
........#.
......#...
..........
.....#....
..........
.......#..
..#.......
#.........

sa: display-frame[10,10] extract-category then |node 1: 0>
.........#
..........
.##.......
....#.....
.........#
..#.......
#.....#...
..........
...#......
#.........
And finally our sequences:
sa: display-frame-sequence[10,10] start-node |seq 1>
...#......
.....#..##
........#.
......#...
..........
.....#....
..........
.......#..
..#.......
#.........

.........#
..........
.##.......
....#.....
.........#
..#.......
#.....#...
..........
...#......
#.........

#......#..
.........#
..........
.........#
..........
..#.......
........#.
.....#....
#.........
.....#.#..

#.........
....#...##
..........
.........#
.......#..
..........
....#.....
.#........
....#.....
......#...

..........
...##..#..
..#..#....
...#.#....
..........
..........
.......#..
..........
.....#....
.....#....

|end of sequence>

sa: display-frame-sequence[10,10] start-node |seq 2>
.........#
..........
.##.......
....#.....
.........#
..#.......
#.....#...
..........
...#......
#.........

#.........
....#...##
..........
.........#
.......#..
..........
....#.....
.#........
....#.....
......#...

...#......
.....#..##
........#.
......#...
..........
.....#....
..........
.......#..
..#.......
#.........

..........
...##..#..
..#..#....
...#.#....
..........
..........
.......#..
..........
.....#....
.....#....

#......#..
.........#
..........
.........#
..........
..#.......
........#.
.....#....
#.........
.....#.#..

|end of sequence>
Anyway, a nice proof of concept I suppose.

Friday 16 December 2016

predicting sequences

In today's post we are going to predict the parent sequence given a subsequence. This is a nice addition to the other tools we have for working with sequences, and one I've been thinking about implementing for a while now. The subsequence matching can either be exact, in which case it only matches a parent sequence that contains the input as a perfect subsequence, or the version we use in this post, where the subsequence can skip a couple of elements and still predict the right parent sequence. Here we just consider sequences of letters, and later words, but the back-end is general enough that it should apply to sequences of many types of objects.

Let's jump into an example. Consider these two sequences (defined using the labor-saving gm notation mentioned in my last post):
a = {A.B.C.D.E.F.G}
b = {U.V.W.B.C.D.X.Y.Z}
Then convert that to sw and load into the console:
$ ./gm2sw-v2.py gm-examples/simple-sequences.gm > sw-examples/simple-sequences.sw
$ ./the_semantic_db_console.py
Welcome!

sa: info off
sa: load simple-sequences.sw
Here is what our two sequences expand to:
full |range> => range(|1>,|2048>)
encode |end of sequence> => pick[10] full |range>

-- encode words:
encode |A> => pick[10] full |range>
encode |B> => pick[10] full |range>
encode |C> => pick[10] full |range>
encode |D> => pick[10] full |range>
encode |E> => pick[10] full |range>
encode |F> => pick[10] full |range>
encode |G> => pick[10] full |range>
encode |U> => pick[10] full |range>
encode |V> => pick[10] full |range>
encode |W> => pick[10] full |range>
encode |X> => pick[10] full |range>
encode |Y> => pick[10] full |range>
encode |Z> => pick[10] full |range>

-- encode classes:
encode |a> => pick[10] full |range>
encode |b> => pick[10] full |range>

-- encode sequence names:

-- encode low level sequences:
-- empty sequence
pattern |node 0: 0> => random-column[10] encode |end of sequence>

-- A . B . C . D . E . F . G
pattern |node 1: 0> => random-column[10] encode |A>
then |node 1: 0> => random-column[10] encode |B>

pattern |node 1: 1> => then |node 1: 0>
then |node 1: 1> => random-column[10] encode |C>

pattern |node 1: 2> => then |node 1: 1>
then |node 1: 2> => random-column[10] encode |D>

pattern |node 1: 3> => then |node 1: 2>
then |node 1: 3> => random-column[10] encode |E>

pattern |node 1: 4> => then |node 1: 3>
then |node 1: 4> => random-column[10] encode |F>

pattern |node 1: 5> => then |node 1: 4>
then |node 1: 5> => random-column[10] encode |G>

pattern |node 1: 6> => then |node 1: 5>
then |node 1: 6> => random-column[10] encode |end of sequence>

-- U . V . W . B . C . D . X . Y . Z
pattern |node 2: 0> => random-column[10] encode |U>
then |node 2: 0> => random-column[10] encode |V>

pattern |node 2: 1> => then |node 2: 0>
then |node 2: 1> => random-column[10] encode |W>

pattern |node 2: 2> => then |node 2: 1>
then |node 2: 2> => random-column[10] encode |B>

pattern |node 2: 3> => then |node 2: 2>
then |node 2: 3> => random-column[10] encode |C>

pattern |node 2: 4> => then |node 2: 3>
then |node 2: 4> => random-column[10] encode |D>

pattern |node 2: 5> => then |node 2: 4>
then |node 2: 5> => random-column[10] encode |X>

pattern |node 2: 6> => then |node 2: 5>
then |node 2: 6> => random-column[10] encode |Y>

pattern |node 2: 7> => then |node 2: 6>
then |node 2: 7> => random-column[10] encode |Z>

pattern |node 2: 8> => then |node 2: 7>
then |node 2: 8> => random-column[10] encode |end of sequence>
Now put it to use. First up, given the first element in the sequence, predict the rest of the parent sequence:
sa: next |A>
incoming_sequence: ['A']
|B . C . D . E . F . G>
|B>

sa: next |U>
incoming_sequence: ['U']
|V . W . B . C . D . X . Y . Z>
|V>
The first letters in our sequences are distinct, so our code has no trouble finding a unique parent sequence. Note that our code also returns the list of elements that are only one step ahead of the current position, in this case |B> and |V>. Now, what if we give it a non-unique subsequence?
sa: next |B.C>
incoming_sequence: ['B', 'C']
nodes 1: 0.1|node 1: 1> + 0.1|node 2: 3>
intersected nodes: 0.1|node 1: 2> + 0.1|node 2: 4>
|D . E . F . G>
|D . X . Y . Z>
2|D>
So, B is found at |node 1: 1> and |node 2: 3> in our stored sequences, and B.C at |node 1: 2> and |node 2: 4>, resulting in two matching parent sequences: |D . E . F . G> and |D . X . Y . Z>, and a one-step-ahead prediction of |D>. Next we include 'D' and see it is still ambiguous:
sa: next |B.C.D>
incoming_sequence: ['B', 'C', 'D']
nodes 1: 0.1|node 1: 1> + 0.1|node 2: 3>
intersected nodes: 0.1|node 1: 2> + 0.1|node 2: 4>
nodes 1: 0.1|node 1: 2> + 0.1|node 2: 4>
intersected nodes: 0.1|node 1: 3> + 0.1|node 2: 5>
|E . F . G>
|X . Y . Z>
|E> + |X>
And since we don't know uniquely which sequence B.C.D belongs to, the one-step-ahead prediction is for an E or an X. But if we then prepend an A or a W, we again have unique parent sequences:
sa: next |A.B.C>
incoming_sequence: ['A', 'B', 'C']
nodes 1: 0.1|node 1: 0>
intersected nodes: 0.1|node 1: 1>
nodes 1: 0.1|node 1: 1>
intersected nodes: 0.1|node 1: 2>
|D . E . F . G>
|D>

sa: next |W.B.C>
incoming_sequence: ['W', 'B', 'C']
nodes 1: 0.1|node 2: 2>
intersected nodes: 0.1|node 2: 3>
nodes 1: 0.1|node 2: 3>
intersected nodes: 0.1|node 2: 4>
|D . X . Y . Z>
|D>
Or another example:
sa: next |B.C.D.E>
incoming_sequence: ['B', 'C', 'D', 'E']
nodes 1: 0.1|node 1: 1> + 0.1|node 2: 3>
intersected nodes: 0.1|node 1: 2> + 0.1|node 2: 4>
nodes 1: 0.1|node 1: 2> + 0.1|node 2: 4>
intersected nodes: 0.1|node 1: 3> + 0.1|node 2: 5>
nodes 1: 0.1|node 1: 3> + 0.1|node 2: 5>
intersected nodes: 0.1|node 1: 4>
|F . G>
|F>

sa: next |B.C.D.X>
incoming_sequence: ['B', 'C', 'D', 'X']
nodes 1: 0.1|node 1: 1> + 0.1|node 2: 3>
intersected nodes: 0.1|node 1: 2> + 0.1|node 2: 4>
nodes 1: 0.1|node 1: 2> + 0.1|node 2: 4>
intersected nodes: 0.1|node 1: 3> + 0.1|node 2: 5>
nodes 1: 0.1|node 1: 3> + 0.1|node 2: 5>
intersected nodes: 0.1|node 2: 6>
|Y . Z>
|Y>
So it all works as desired. Here is a quick demonstration where we skip a couple of sequence elements, as might happen in a noisy room, in this case C.D, and it still works:
sa: next |B.E>
incoming_sequence: ['B', 'E']
nodes 1: 0.1|node 1: 1> + 0.1|node 2: 3>
intersected nodes: 0.1|node 1: 4>
|F . G>
|F>

sa: next |B.X>
incoming_sequence: ['B', 'X']
nodes 1: 0.1|node 1: 1> + 0.1|node 2: 3>
intersected nodes: 0.1|node 2: 6>
|Y . Z>
|Y>
That's the basics: subsequences predicting parent sequences, with tolerance for noisy omission of elements. Now let's apply it to simple sentences encoded as sequences. Consider this knowledge:
$ cat gm-examples/george.gm
A = {george.is.27.years.old}
B = {the.mother.of.george.is.jane}
C = {the.father.of.george.is.frank}
D = {the.sibling.of.george.is.liz}
E = {jane.is.47.years.old}
F = {frank.is.50.years.old}
G = {liz.is.29.years.old}
H = {the.age.of.george.is.27}
I = {the.age.of.jane.is.47}
J = {the.age.of.frank.is.50}
K = {the.age.of.liz.is.29}
L = {the.mother.of.liz.is.jane}
M = {the.father.of.liz.is.frank}
N = {the.sibling.of.liz.is.george}
Process it as usual:
$ ./gm2sw-v2.py gm-examples/george.gm > sw-examples/george-gm.sw
$ ./the_semantic_db_console.py
sa: load george-gm.sw
And first consider what sequences follow "the":
sa: next |the>
incoming_sequence: ['the']
|mother . of . george . is . jane>
|father . of . george . is . frank>
|sibling . of . george . is . liz>
|age . of . george . is . 27>
|age . of . jane . is . 47>
|age . of . frank . is . 50>
|age . of . liz . is . 29>
|mother . of . liz . is . jane>
|father . of . liz . is . frank>
|sibling . of . liz . is . george>
2|mother> + 2|father> + 2|sibling> + 4|age>
And these simple sentences enable us to ask simple questions:
sa: next |the.mother.of>
incoming_sequence: ['the', 'mother', 'of']
nodes 1: 0.1|node 2: 0> + 0.1|node 3: 0> + 0.1|node 4: 0> + 0.1|node 8: 0> + 0.1|node 9: 0> + 0.1|node 10: 0> + 0.1|node 11: 0> + 0.1|node 12: 0> + 0.1|node 13: 0> + 0.1|node 14: 0>
intersected nodes: 0.1|node 2: 1> + 0.1|node 12: 1>
nodes 1: 0.1|node 2: 1> + 0.1|node 12: 1>
intersected nodes: 0.1|node 2: 2> + 0.1|node 12: 2>
|george . is . jane>
|liz . is . jane>
|george> + |liz>

sa: next |the.age.of>
incoming_sequence: ['the', 'age', 'of']
nodes 1: 0.1|node 2: 0> + 0.1|node 3: 0> + 0.1|node 4: 0> + 0.1|node 8: 0> + 0.1|node 9: 0> + 0.1|node 10: 0> + 0.1|node 11: 0> + 0.1|node 12: 0> + 0.1|node 13: 0> + 0.1|node 14: 0>
intersected nodes: 0.1|node 8: 1> + 0.1|node 9: 1> + 0.1|node 10: 1> + 0.1|node 11: 1>
nodes 1: 0.1|node 8: 1> + 0.1|node 9: 1> + 0.1|node 10: 1> + 0.1|node 11: 1>
intersected nodes: 0.1|node 8: 2> + 0.1|node 9: 2> + 0.1|node 10: 2> + 0.1|node 11: 2>
|george . is . 27>
|jane . is . 47>
|frank . is . 50>
|liz . is . 29>
|george> + |jane> + |frank> + |liz>

sa: next |sibling.of>
incoming_sequence: ['sibling', 'of']
nodes 1: 0.1|node 4: 1> + 0.1|node 14: 1>
intersected nodes: 0.1|node 4: 2> + 0.1|node 14: 2>
|george . is . liz>
|liz . is . george>
|george> + |liz>
Or making use of sequence element skipping (in the current code up to 3 sequence elements), we can ask more compact questions:
sa: next |father.george>
incoming_sequence: ['father', 'george']
nodes 1: 0.1|node 3: 1> + 0.1|node 13: 1>
intersected nodes: 0.1|node 3: 3>
|is . frank>
|is>

sa: next |age.george>
incoming_sequence: ['age', 'george']
nodes 1: 0.1|node 8: 1> + 0.1|node 9: 1> + 0.1|node 10: 1> + 0.1|node 11: 1>
intersected nodes: 0.1|node 8: 3>
|is . 27>
|is>
Obviously the brain stores knowledge about the world using more than just rote sentences (unless you are bad at studying for exams), but I think it is not a bad first step. Who knows, maybe very young children do just store simple sequences, without "decorations"? Certainly, in adults, knowledge of music lyrics and notes feels like simple sequences. But we still don't have a good definition of what it means to understand something. To me it feels like some well constructed network, ie understanding something means it is thoroughly interlinked with related existing knowledge. But how do you code that?

Finally, an important point is that the above is only interesting in that I'm doing it in a proposed brain-like way. Using grep it is trivial to find subsequences of parent sequences. For example:
$ grep "father.*george" george.gm
C = {the.father.of.george.is.frank}

$ grep "the.*age.*of" george.gm
H = {the.age.of.george.is.27}
I = {the.age.of.jane.is.47}
J = {the.age.of.frank.is.50}
K = {the.age.of.liz.is.29}
And the way we represent our high order sequences has a lot of similarity to linked lists.

BTW, I should mention I tried the strict version of the next operator on the spelling dictionary example, using the subsequence f.r, resulting in this prediction for the next letter:
173|e> + 157|a> + 120|o> + 114|i> + 49|u> + 8|y> + 2|.> + |t>
So pretty much just vowels.
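Finally, for those who prefer plain code, here is a rough Python sketch of the kind of gap-tolerant matching the next operator is doing, using ordinary lists instead of patterns and SDR's. The names and data structures here are mine, not the back-end's:
# Sketch only: stored sequences as plain lists.
sequences = {
  "a": ["A", "B", "C", "D", "E", "F", "G"],
  "b": ["U", "V", "W", "B", "C", "D", "X", "Y", "Z"],
}

def predict(subsequence, max_gap=3):
  # positions of the first symbol in every stored sequence
  positions = {(name, i) for name, seq in sequences.items()
                         for i, x in enumerate(seq) if x == subsequence[0]}
  # for each later symbol, keep only positions reachable within the allowed gap
  for symbol in subsequence[1:]:
    next_positions = set()
    for name, i in positions:
      seq = sequences[name]
      for j in range(i + 1, min(i + 2 + max_gap, len(seq))):
        if seq[j] == symbol:
          next_positions.add((name, j))
    positions = next_positions    # roughly the "intersected nodes" step above
  return [sequences[name][i + 1:] for name, i in positions]

print(predict(["B", "C"]))    # [['D','E','F','G'], ['D','X','Y','Z']] (in some order)
print(predict(["B", "E"]))    # [['F','G']]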

Next post, learning and recalling a sequence of random frames.

Tuesday 22 November 2016

generating random grammatically correct sentences

In the last post we looked at generating a short grammatically correct sentence, in a proposed brain-like way. The central idea was to represent our sentences using only classes and sequences. It's classes and sequences all the way down (not turtles!). In this post we extend this, and introduce a clean minimalist notation to represent these sequences and classes, and a "compiler" of sorts that converts this notation back to BKO. I guess with the implication that BKO could be considered a sort of assembly language for the brain.

Now on to this new notation (which is somewhat similar to BNF). We have these foundational objects:
{}                     -- the empty sequence
A                      -- a sequence of length one
A.B.C                  -- a sequence
{A, B, C}              -- a class
A = {B, C.D.E, F, G.H} -- definition of a class of sequences
I = A.B.C.D            -- definition of a sequence of classes
And that is pretty much it! Perhaps it would help to show how these map back to BKO:
-- the empty sequence:
pattern |node 0: 0> => random-column[10] encode |end of sequence>

-- a sequence of length one:
-- boy
pattern |node 5: 0> => random-column[10] encode |boy>
then |node 5: 0> => random-column[10] encode |end of sequence>

-- a sequence of length three:
-- the . woman . saw
pattern |node 1: 0> => random-column[10] encode |the>
then |node 1: 0> => random-column[10] encode |woman>

pattern |node 1: 1> => then |node 1: 0>
then |node 1: 1> => random-column[10] encode |saw>

pattern |node 1: 2> => then |node 1: 1>
then |node 1: 2> => random-column[10] encode |end of sequence>

-- a sequence of classes:
-- L = A.K.B
pattern |node 20: 0> => random-column[10] encode |A>
then |node 20: 0> => random-column[10] encode |K>

pattern |node 20: 1> => then |node 20: 0>
then |node 20: 1> => random-column[10] encode |B>

pattern |node 20: 2> => then |node 20: 1>
then |node 20: 2> => random-column[10] encode |end of sequence>

-- a class of one sequence:
-- A = {the.woman.saw}
start-node |A: 0> => pattern |node 1: 0>

-- a class of three sequences:
-- E = {{}, old, other}
start-node |E: 0> => pattern |node 0: 0>
start-node |E: 1> => pattern |node 6: 0>
start-node |E: 2> => pattern |node 7: 0>
Now that that is in place, we can consider our first sentence:
$ cat gm-examples/first-sentence.gm
A = {the}
B = {{}, old, other}
C = {man, woman, lady}
D = {{}, young}
E = {child}
F = {youngest, eldest}
G = {child, sibling}
H = {{}, on.the.hill, also}
I = {used.a.telescope}

J = B.C
K = D.E
L = F.G

M = {J, K, L}

N = A.M.H.I
Then we compile this back to BKO using gm2sw.py:
$ ./gm2sw.py gm-examples/first-sentence.gm > sw-examples/first-sentence.sw
Load it up in the console:
$ ./the_semantic_db_console.py
Welcome!

-- switch off displaying "info" messages:
sa: info off

-- load our sentence:
sa: load first-sentence.sw

-- find available "sentences":
sa: rel-kets[sentence]
|J> + |K> + |L> + |N>

-- recall the "N" sentence:
sa: recall-sentence sentence |N>
|the>
|man>
|used>
|a>
|telescope>
|end of sequence>

-- and again:
sa: .
|the>
|young>
|child>
|also>
|used>
|a>
|telescope>
|end of sequence>

-- and again:
sa: .
|the>
|old>
|woman>
|used>
|a>
|telescope>
|end of sequence>
Now for a slightly more interesting sentence:
$ cat gm-examples/the-woman-saw.gm
A = {the.woman.saw}
B = {through.the.telescope}
C = {{}, young}
D = {girl, boy}
E = {{}, old, other}
F = {man, woman, lady}
G = E.F
H = {the}
I = H.C.D
J = H.E.F
K = {{},I,J}

L = A.K.B

M = {I,J}
N = {saw}
O = M.N.K.B

P = {through.the}
Q = {telescope, binoculars, night.vision.goggles}

R = M.N.K.P.Q
Compile and load it up:
$ ./gm2sw.py gm-examples/the-woman-saw.gm > sw-examples/the-woman-saw.sw
$ ./the_semantic_db_console.py
sa: load the-woman-saw.sw
sa: rel-kets[sentence]
|G> + |I> + |J> + |L> + |O> + |R>

sa: recall-sentence sentence |R>
|the>
|boy>
|saw>
|the>
|old>
|woman>
|through>
|the>
|telescope>
|end of sequence>

sa: .
|the>
|lady>
|saw>
|the>
|old>
|woman>
|through>
|the>
|binoculars>
|end of sequence>

sa: .
|the>
|old>
|man>
|saw>
|through>
|the>
|night>
|vision>
|goggles>
|end of sequence>

sa: .
|the>
|woman>
|saw>
|the>
|young>
|boy>
|through>
|the>
|binoculars>
|end of sequence>

sa: .
|the>
|girl>
|saw>
|through>
|the>
|telescope>
|end of sequence>
While we have this knowledge loaded, we can also do things like randomly walk individual sub-elements of our full sentences:
sa: recall-sentence pattern pick-elt rel-kets[pattern]
|the>
|man>
|end of sequence>

sa: .
|other>
|end of sequence>

sa: .
|boy>
|end of sequence>

sa: .
|girl>
|end of sequence>

sa: .
|saw>
|through>
|the>
|night>
|vision>
|goggles>
|end of sequence>

sa: .
|binoculars>
|end of sequence>

sa: .
|lady>
|end of sequence>

sa: .
|telescope>
|end of sequence>

sa: .
|saw>
|the>
|young>
|girl>
|through>
|the>
|telescope>
|end of sequence>
So at this point it might be a bit opaque how recall-sentence unpacks our stored sentences. Essentially it walks the given sentence, ie sequence, and if an element in that sequence is a class (ie, has a start-node defined), it recursively walks that sub-sequence, else it prints the element name. For example, recall this knowledge and consider the high level sequence R:
$ cat gm-examples/the-woman-saw.gm
A = {the.woman.saw}
B = {through.the.telescope}
C = {{}, young}
D = {girl, boy}
E = {{}, old, other}
F = {man, woman, lady}
G = E.F
H = {the}
I = H.C.D
J = H.E.F
K = {{},I,J}

L = A.K.B

M = {I,J}
N = {saw}
O = M.N.K.B

P = {through.the}
Q = {telescope, binoculars, night.vision.goggles}

R = M.N.K.P.Q
So if we walk the R sequence, with no recursion, we have:
sa: follow-sequence sentence |R>
|M>
|N>
|K>
|P>
|Q>
|end of sequence>
But each of these elements is itself a class. Here are the sequences in the M, N and K classes:
sa: follow-sequence start-node |M: 0>
|H>
|C>
|D>
|end of sequence>

sa: follow-sequence start-node |M: 1>
|H>
|E>
|F>
|end of sequence>

sa: follow-sequence start-node |N: 0>
|saw>
|end of sequence>

sa: follow-sequence start-node |K: 0>
|end of sequence>

sa: follow-sequence start-node |K: 1>
|H>
|C>
|D>
|end of sequence>

sa: follow-sequence start-node |K: 2>
|H>
|E>
|F>
|end of sequence>
And if a class contains more than one member, the sub-sequence to recursively walk is chosen randomly. And so on, until you have objects with no start-nodes, ie low level sequences. Heh. I don't know if that explanation helped. This is the full python that defines the recall-sentence operator:
# Usage:
# load sentence-sequence--multi-layer.sw 
# print-sentence |*> #=> recall-sentence pattern |_self>
# print-sentence |node 200: 1>
#
# one is a sp
def recall_sentence(one,context):
  if len(one) == 0:
    return one
  current_node = one
    
  def next(one):
    return one.similar_input(context,"pattern").select_range(1,1).apply_sigmoid(clean).apply_op(context,"then")

  def name(one):
    return one.apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)

  def has_start_node(one):                                            # check if one is a class
    two = ket(one.the_label() + ": ")                                 
    return len(two.apply_fn(starts_with,context).select_range(1,1).apply_op(context,"start-node")) > 0

  def get_start_node(one):
    two = ket(one.the_label() + ": ")
    return two.apply_fn(starts_with,context).pick_elt().apply_op(context,"start-node")        
   
  while name(current_node).the_label() != "end of sequence":
    if not has_start_node(name(current_node)):
      print(name(current_node))
    else:
      start_node = get_start_node(name(current_node))
      recall_sentence(start_node, context)       
    current_node = next(current_node)
  return ket("end of sequence")
Now, just for fun we can visualize our sentence structure, which is essentially a complex network, using our sw2dot code.
$ ./the_semantic_db_console.py
sa: load the-woman-saw.sw
sa: save the-woman-saw--saved.sw
sa: q

$ grep -v "^full" sw-examples/the-woman-saw--saved.sw | grep -v "^support" > sw-examples/the-woman-saw--tidy.sw

$ ./sw2dot-v2.py sw-examples/the-woman-saw--tidy.sw
Open that in graphviz, using neato, and we get a picture of the resulting network.
Now some notes:
1) Because of the recursive nature of the recall-sentence operator it should, barring a bug, handle multiple levels of sequences and classes, in contrast with the simpler example in the last post that was restricted to one level of classes and sequences. Potentially this allows for very complex structures, and certainly longer text than single sentences.
2) Even with our short-cut notation, defining sentences is still somewhat hard work. The eventual goal is for it to be learnt automatically. A hard task, but having a sentence representation is at least a useful step in that direction.
3) So far our classes and sequences have been small. I suspect classes will always remain small, as grammar has strict rules that seem to require small classes. Sequences, on the other hand, I don't know. Presumably larger structures than single sentences would need longer sequences, but the fact that the brain uses chunking hints that those sequences can't be too long. So instead of a large structure using long sequences, it would use more levels of shorter sequences, which is essentially what chunking does. Indeed, here is our chunked sequences example in our new gm notation:
$ cat gm-examples/alphabet-pi.gm
a1 = {A.B.C}
a2 = {D.E.F}
a3 = {G.H.I}
a4 = {J.K.L}
a5 = {M.N.O}
a6 = {P.Q.R}
a7 = {S.T.U}
a8 = {V.W.X}
a9 = {Y.Z}

alphabet = a1.a2.a3.a4.a5.a6.a7.a8.a9

p1 = {3.1.4}
p2 = {1.5}
p3 = {9.2}
p4 = {6.5}
p5 = {3.5}
p6 = {8.9}

pi = p1.p2.p3.p4.p5.p6
4) What other objects can we represent, other than grammatical sentences, using just classes and sequences? Something I have been thinking about for a long time now is, how would you represent the knowledge stored in a mathematician's head? My project claims to be about knowledge representation, right, so why not mathematics? I don't know, but I suspect we won't have an artificial mathematician until well after we have a full AGI.
5) The other side of that is, what can't we represent using just classes and sequences? I don't know yet. But certainly long range structure might be part of that: a random choice at the start of a sentence sometimes has an impact on what is valid later on in that sentence, and I don't think we can represent that. And that leads to my last point. Fixed classes and random choice are just the first step. In a brain, the set of available classes to compose your sentences from is dynamic, always changing, and if you want to say anything meaningful, your choices of how to unpack a sentence are the opposite of random.
6) Approximately how many neurons in our "the-woman-saw.gm" example? Well, we have:
sa: how-many rel-kets[pattern]
|number: 44>

sa: how-many starts-with |node >
|number: 44>

sa: how-many rel-kets[start-node]
|number: 23>
So roughly 67 neurons. Though that doesn't count the neurons needed to recall the sentences, ie those that correspond to our python recall-sentence operator.

Monday 21 November 2016

learning and recalling a simple sentence

In this post we are going to use HTM inspired sequences to learn a short, simple, grammatically correct sentence. This is a nice follow-on from learning to spell, and recalling chunked sequences. The key idea is that the brain stores sentences as sequences of classes, and when we recall a sentence we unpack that structure. So how do we implement this? Well, we can easily represent sequences, as seen in previous posts, and classes are simple enough. So the hard bit becomes finding an operator that can recall the sentence.

Let's start with this "sentence", or sequence of classes (dots are our short-hand notation for sequences):
A . X . B . Y . C
where we have these classes:
A = {the}
B = {man, woman, lady}
C = {used a telescope}
X = {{}, old, other}
Y = {{}, on the hill, also}
And that is enough to generate a bunch of grammatically correct sentences, by picking randomly from each class at each step in the sequence (noting that {} is the empty sequence). How many sentences? Just multiply the sizes of the classes:
|A|*|X|*|B|*|Y|*|C| = 1*3*3*3*1 = 27
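Before we get to the BKO representation, here is a tiny plain-Python sketch of that pick-randomly-from-each-class idea, just for orientation (the BKO version below is what we actually use):
import random

# Sketch only: a sentence is a sequence of classes; pick one member from each class.
A = ["the"]
X = ["", "old", "other"]
B = ["man", "woman", "lady"]
Y = ["", "on the hill", "also"]
C = ["used a telescope"]

words = [random.choice(cls) for cls in [A, X, B, Y, C]]
print(" ".join(w for w in words if w))
# eg: the old woman used a telescope  -- one of the 27 possible sentences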
Now on to the code. First up we need to encode our objects that we intend to use in our sequences. Again, our encode SDR's are just 10 random bits on out of 2048 total:
full |range> => range(|1>,|2048>)
encode |end of sequence> => pick[10] full |range>

-- encode words:
encode |old> => pick[10] full |range>
encode |other> => pick[10] full |range>
encode |on> => pick[10] full |range>
encode |the> => pick[10] full |range>
encode |hill> => pick[10] full |range>
encode |also> => pick[10] full |range>
encode |the> => pick[10] full |range>
encode |man> => pick[10] full |range>
encode |used> => pick[10] full |range>
encode |a> => pick[10] full |range>
encode |telescope> => pick[10] full |range>
encode |woman> => pick[10] full |range>
encode |lady> => pick[10] full |range>

-- encode classes:
encode |A> => pick[10] full |range>
encode |B> => pick[10] full |range>
encode |C> => pick[10] full |range>
encode |X> => pick[10] full |range>
encode |Y> => pick[10] full |range>
Next, define our low level sequences of words, though most of them are sequences of length one:
-- empty sequence
pattern |node 1: 1> => append-column[10] encode |end of sequence>

-- old
pattern |node 2: 1> => random-column[10] encode |old>
then |node 2: 1> => append-column[10] encode |end of sequence>

-- other
pattern |node 3: 1> => random-column[10] encode |other>
then |node 3: 1> => append-column[10] encode |end of sequence>

-- on, the, hill
pattern |node 4: 1> => random-column[10] encode |on>
then |node 4: 1> => random-column[10] encode |the>

pattern |node 4: 2> => then |node 4: 1>
then |node 4: 2> => random-column[10] encode |hill>

pattern |node 4: 3> => then |node 4: 2>
then |node 4: 3> => append-column[10] encode |end of sequence>

-- also
pattern |node 5: 1> => random-column[10] encode |also>
then |node 5: 1> => append-column[10] encode |end of sequence>


-- the
pattern |node 6: 1> => random-column[10] encode |the>
then |node 6: 1> => append-column[10] encode |end of sequence>

-- man
pattern |node 7: 1> => random-column[10] encode |man>
then |node 7: 1> => append-column[10] encode |end of sequence>

-- used, a, telescope
pattern |node 8: 1> => random-column[10] encode |used>
then |node 8: 1> => random-column[10] encode |a>

pattern |node 8: 2> => then |node 8: 1>
then |node 8: 2> => random-column[10] encode |telescope>

pattern |node 8: 3> => then |node 8: 2>
then |node 8: 3> => append-column[10] encode |end of sequence>

-- woman
pattern |node 9: 1> => random-column[10] encode |woman>
then |node 9: 1> => append-column[10] encode |end of sequence>

-- lady
pattern |node 10: 1> => random-column[10] encode |lady>
then |node 10: 1> => append-column[10] encode |end of sequence>
Here is the easiest bit, representing the word classes:
-- X: {{}, old, other}
start-node |X: 1> => pattern |node 1: 1>
start-node |X: 2> => pattern |node 2: 1>
start-node |X: 3> => pattern |node 3: 1>

-- Y: {{}, on the hill, also}
start-node |Y: 1> => pattern |node 1: 1>
start-node |Y: 2> => pattern |node 4: 1>
start-node |Y: 3> => pattern |node 5: 1>

-- A: {the}
start-node |A: 1> => pattern |node 6: 1>

-- B: {man,woman,lady}
start-node |B: 1> => pattern |node 7: 1>
start-node |B: 2> => pattern |node 9: 1>
start-node |B: 3> => pattern |node 10: 1>

-- C: {used a telescope}
start-node |C: 1> => pattern |node 8: 1>
Finally, we need to define our sentence "A . X . B . Y . C", ie our sequence of classes:
-- A, X, B, Y, C
pattern |node 20: 1> => random-column[10] encode |A>
then |node 20: 1> => random-column[10] encode |X>

pattern |node 20: 2> => then |node 20: 1>
then |node 20: 2> => random-column[10] encode |B>

pattern |node 20: 3> => then |node 20: 2>
then |node 20: 3> => random-column[10] encode |Y>

pattern |node 20: 4> => then |node 20: 3>
then |node 20: 4> => random-column[10] encode |C>

pattern |node 20: 5> => then |node 20: 4>
then |node 20: 5> => append-column[10] encode |end of sequence>
And that's it. We have learnt a simple sentence in a proposed brain like way, just using sequences and classes. For the recall stage we need to define an appropriate operator. With some thinking we have this python:
# one is a sp
def follow_sequence(one,context,op=None):
  if len(one) == 0:
    return one
    
  def next(one):
    return one.similar_input(context,"pattern").select_range(1,1).apply_sigmoid(clean).apply_op(context,"then")
  def name(one):
    return one.apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)    
    
  current_node = one  
  while name(current_node).the_label() != "end of sequence":
    if op == None:
      print(name(current_node))      
    else:
      name(current_node).apply_op(context,op)
    current_node = next(current_node)
  return ket("end of sequence")
And these operator definitions:
-- operators:
append-colon |*> #=> merge-labels(|_self> + |: >)
random-class-sequence |*> #=> follow-sequence start-node pick-elt starts-with append-colon |_self>
random-sequence |*> #=> follow-sequence start-node pick-elt rel-kets[start-node] |>
print-sentence |*> #=> follow-sequence[random-class-sequence] pattern |_self>
We can now recall our sentence:
$ ./the_semantic_db_console.py
Welcome!

sa: load sentence-sequence.sw
sa: info off
sa: print-sentence |node 20: 1>
|the>
|old>
|woman>
|used>
|a>
|telescope>
|end of sequence>

sa: .
|the>
|man>
|also>
|used>
|a>
|telescope>
|end of sequence>

sa: .
|the>
|old>
|man>
|on>
|the>
|hill>
|used>
|a>
|telescope>
|end of sequence>
And that's it. We now have a structure in place that we can easily copy and reuse for other sentences. The hard part is typing it up, and I have an idea how to help with that. The eventual goal would be for it to be fully automatic, but that will be difficult. For example, given this set of sentences:
"the man used a telescope"
"the woman used a telescope"
"the lady used a telescope"
"the old man also used a telescope"
"the other man on the hill used a telescope"
It feels plausible that that is enough information to learn the above classes and sequences. Some kind of sequence intersection, it seems to me. And if that were the case, it shows the power of grammatical structure. 5 sentences would be enough to generate 27 daughter sentences. For any real world example, the number of daughter sentences would be huge.

Next post a more complicated sentence, with several levels of sequences and classes.

Saturday 5 November 2016

learning and recalling chunked sequences

So, it is very common (universal?) for people to chunk difficult to recall, or long, sequences. Perhaps a password, the alphabet, or digits of pi. So I thought it would be useful to implement this idea in my notation, as a sort of extension to learning sequences in my last post. The idea is simple enough: instead of learning a single long sequence, break the sequence into chunks, and then learn their respective sub-sequences. Here is how my brain chunks the alphabet and pi, though other people will have different chunking sizes: (ABC)(DEF)(GHI)... and (3.14)(15)(92)(65)(35)... Giving this collection of sequences:
alphabet: ABC, DEF, GHI, ...
ABC: A, B, C
DEF: D, E, F
GHI: G, H, I
...

pi: 3.14, 15, 92, 65, 35, ...
3.14: 3, ., 1, 4,
15: 1, 5
92: 9, 2
65: 6, 5
35: 3, 5
...
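As a side note, the chunking step itself is trivial. Here is a minimal Python sketch using a constant chunk size of 3, matching the learning code mentioned below; the helper name is mine:
# Sketch only: break a sequence into fixed-size chunks.
def chunk(sequence, size=3):
  return [sequence[i:i + size] for i in range(0, len(sequence), size)]

print(chunk(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")))
# [['A','B','C'], ['D','E','F'], ..., ['Y','Z']]
print(chunk(list("3.14159265358979323846")))
# [['3','.','1'], ['4','1','5'], ['9','2','6'], ['5','3','5'], ...]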
Given we already know how to learn sequences, this is easy to learn. The code uses a constant chunk size of 3, and we have the knowledge both before and after learning. I guess I should show a little of what that looks like. First the encode stage, using random encodings (though in other uses it would be preferable to use a more semantically similar encoding):
full |range> => range(|1>,|2048>)
encode |end of sequence> => pick[10] full |range>
encode |A> => pick[10] full |range>
encode |B> => pick[10] full |range>
encode |C> => pick[10] full |range>
encode |D> => pick[10] full |range>
encode |E> => pick[10] full |range>
encode |F> => pick[10] full |range>
encode |G> => pick[10] full |range>
encode |H> => pick[10] full |range>
encode |I> => pick[10] full |range>
encode |J> => pick[10] full |range>
encode |K> => pick[10] full |range>
encode |L> => pick[10] full |range>
encode |M> => pick[10] full |range>
encode |N> => pick[10] full |range>
encode |O> => pick[10] full |range>
encode |P> => pick[10] full |range>
encode |Q> => pick[10] full |range>
encode |R> => pick[10] full |range>
encode |S> => pick[10] full |range>
encode |T> => pick[10] full |range>
encode |U> => pick[10] full |range>
encode |V> => pick[10] full |range>
encode |W> => pick[10] full |range>
encode |X> => pick[10] full |range>
encode |Y> => pick[10] full |range>
encode |Z> => pick[10] full |range>
encode |A B C> => pick[10] full |range>
encode |D E F> => pick[10] full |range>
encode |G H I> => pick[10] full |range>
encode |J K L> => pick[10] full |range>
encode |M N O> => pick[10] full |range>
encode |P Q R> => pick[10] full |range>
encode |S T U> => pick[10] full |range>
encode |V W X> => pick[10] full |range>
encode |Y Z> => pick[10] full |range>
encode |3> => pick[10] full |range>
encode |.> => pick[10] full |range>
encode |1> => pick[10] full |range>
encode |4> => pick[10] full |range>
encode |5> => pick[10] full |range>
encode |9> => pick[10] full |range>
encode |2> => pick[10] full |range>
encode |6> => pick[10] full |range>
encode |8> => pick[10] full |range>
encode |7> => pick[10] full |range>
encode |3 . 1> => pick[10] full |range>
encode |4 1 5> => pick[10] full |range>
encode |9 2 6> => pick[10] full |range>
encode |5 3 5> => pick[10] full |range>
encode |8 9 7> => pick[10] full |range>
encode |9 3 2> => pick[10] full |range>
encode |3 8 4> => pick[10] full |range>
The main thing to note here is that we are not just learning encodings for single symbols, eg |A> or |3>, but also for chunks of symbols, eg |A B C> and |3 . 1>. And in general, we can do similar encodings for anything we want to stuff into a ket. Once we have encodings for our objects we can learn their sequences. Here are a couple of them:
-- alphabet
-- A B C, D E F, G H I, J K L, M N O, P Q R, S T U, V W X, Y Z
start-node |alphabet> => random-column[10] encode |A B C>
pattern |node 0: 0> => start-node |alphabet>
then |node 0: 0> => random-column[10] encode |D E F>

pattern |node 0: 1> => then |node 0: 0>
then |node 0: 1> => random-column[10] encode |G H I>

pattern |node 0: 2> => then |node 0: 1>
then |node 0: 2> => random-column[10] encode |J K L>

pattern |node 0: 3> => then |node 0: 2>
then |node 0: 3> => random-column[10] encode |M N O>

pattern |node 0: 4> => then |node 0: 3>
then |node 0: 4> => random-column[10] encode |P Q R>

pattern |node 0: 5> => then |node 0: 4>
then |node 0: 5> => random-column[10] encode |S T U>

pattern |node 0: 6> => then |node 0: 5>
then |node 0: 6> => random-column[10] encode |V W X>

pattern |node 0: 7> => then |node 0: 6>
then |node 0: 7> => random-column[10] encode |Y Z>

pattern |node 0: 8> => then |node 0: 7>
then |node 0: 8> => append-column[10] encode |end of sequence>


-- A B C
-- A, B, C
start-node |A B C> => random-column[10] encode |A>
pattern |node 1: 0> => start-node |A B C>
then |node 1: 0> => random-column[10] encode |B>

pattern |node 1: 1> => then |node 1: 0>
then |node 1: 1> => random-column[10] encode |C>

pattern |node 1: 2> => then |node 1: 1>
then |node 1: 2> => append-column[10] encode |end of sequence>


-- D E F
-- D, E, F
start-node |D E F> => random-column[10] encode |D>
pattern |node 2: 0> => start-node |D E F>
then |node 2: 0> => random-column[10] encode |E>

pattern |node 2: 1> => then |node 2: 0>
then |node 2: 1> => random-column[10] encode |F>

pattern |node 2: 2> => then |node 2: 1>
then |node 2: 2> => append-column[10] encode |end of sequence>

...
where we see both the high level sequence of the alphabet chunks (ABC)(DEF)..., and the lower level sequences of single letters A, B, C and D, E, F. The pi sequence has identical structure, so I'll omit that. For the curious, see the pre-learning sw file.

That's the learn stage taken care of. Now the bit that took a little more work: code that recalls sequences, no matter how many layers deep (though so far I've only tested it on a two-layer system). Here is the pseudo code:
  next (*) #=> then clean select[1,1] similar-input[pattern] |_self>
  name (*) #=> clean select[1,1] similar-input[encode] extract-category |_self>

  print-sequence |*> #=>
    if not do-you-know start-node |_self>:
      return |_self>
    if name start-node |_self> == |_self>:                    -- prevent infinite loop when an object is its own sequence
      print |_self>
      return |>
    |node> => new-GUID |>
    current "" |node> => start-node |_self>
    while name current "" |node> != |end of sequence>:
      if not do-you-know start-node name current "" |node>:
        print name current "" |node>
      else:
        print-sequence name current "" |node>
      current "" |node> => next current "" |node>
    return |end of sequence>
And the corresponding python:
def new_print_sequence(one,context,start_node=None):
  if start_node is None:                                          # so we can change the operator name that links to the first element in the sequence.
    start_node = "start-node"
  if len(one.apply_op(context,start_node)) == 0:                  # if we don't know the start-node, return the input ket
    return one
  print("print sequence:",one)

  def next(one):
    return one.similar_input(context,"pattern").select_range(1,1).apply_sigmoid(clean).apply_op(context,"then")
  def name(one):
    return one.apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)
    
  if name(one.apply_op(context,start_node)).the_label() == one.the_label():
    print(one)                                                               # prevent infinite loop when an object is its own sequence. Maybe should have handled at learn stage, not recall?
    return ket("")  
  current_node = one.apply_op(context,start_node)  
  while name(current_node).the_label() != "end of sequence":
    if len(name(current_node).apply_op(context,start_node)) == 0:
      print(name(current_node))      
    else:
      new_print_sequence(name(current_node),context,start_node)
    current_node = next(current_node)
  return ket("end of sequence")
And finally, put it to use:
$ ./the_semantic_db_console.py
Welcome!

sa: load chunked-alphabet-pi.sw
sa: new-print-sequence |alphabet>
print sequence: |alphabet>
print sequence: |A B C>
|A>
|B>
|C>
print sequence: |D E F>
|D>
|E>
|F>
print sequence: |G H I>
|G>
|H>
|I>
print sequence: |J K L>
|J>
|K>
|L>
print sequence: |M N O>
|M>
|N>
|O>
print sequence: |P Q R>
|P>
|Q>
|R>
print sequence: |S T U>
|S>
|T>
|U>
print sequence: |V W X>
|V>
|W>
|X>
print sequence: |Y Z>
|Y>
|Z>
|end of sequence>

sa: new-print-sequence |pi>
print sequence: |pi>
print sequence: |3 . 1>
|3>
|.>
|1>
print sequence: |4 1 5>
|4>
|1>
|5>
print sequence: |9 2 6>
|9>
|2>
print sequence: |6>
|6>
print sequence: |5 3 5>
|5>
|3>
|5>
print sequence: |8 9 7>
|8>
|9>
|7>
print sequence: |9 3 2>
|9>
|3>
|2>
print sequence: |3 8 4>
|3>
|8>
|4>
print sequence: |6>
|6>
|end of sequence>
And we can print individual sub-sequences:
sa: new-print-sequence |D E F>
print sequence: |D E F>
|D>
|E>
|F>
|end of sequence>

sa: new-print-sequence |Y Z>
print sequence: |Y Z>
|Y>
|Z>
|end of sequence>

sa: new-print-sequence |8 9 7>
print sequence: |8 9 7>
|8>
|9>
|7>
|end of sequence>
Some notes:
1) There are of course other ways to implement learning and recalling chunked sequences. In my implementation above, when a subsequence hits an "end of sequence" it escapes from the while loop, and the high level sequence resumes. But an alternative would be for the end of say the |8 9 7> subsequence to link back to the parent pi sequence, and then resume that sequence. In which case we would have this:
sa: new-print-sequence |8 9 7>
print sequence: |8 9 7>
|8>
|9>
|7>
print sequence: |9 3 2>
|9>
|3>
|2>
print sequence: |3 8 4>
|3>
|8>
|4>
print sequence: |6>
|6>
|end of sequence>
So, does |8 9 7> live as an independent sequence with no link to the parent sequence, or does the final |7> link back to the pi sequence? I don't know for sure, but I suspect it is independent, because consider the case where |8 9 7> is in multiple high level sequences. The |7> wouldn't know where to link back to.
2) I have had for a long time my similarity metric called simm, which returns the similarity of superpositions (1 for exact match, 0 for disjoint, values in between otherwise). But I have so far failed to implement a decent simm for sequences (aside from mapping strings to ngrams, and then running simm on that). I now suspect/hope chunking of sequences might be a key part. A small sketch of one candidate simm is given after these notes.
3) Presumably the chunking of sequences structure is used by the brain for more than just difficult passwords, eg perhaps grammar. Seems likely to me that if a structure is used somewhere by the brain, then it is used in many other places too. ie, if a structure is good, then reuse it.
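As an aside to note 2, here is one simple candidate for a superposition similarity measure with the stated properties (1 for identical, 0 for disjoint, in between otherwise). I'm not claiming this is exactly the project's simm, just a plain-Python sketch of the kind of thing it is:
# Sketch only: a superposition as a dict of ket label -> coefficient.
def simm(f, g):
  if not f or not g:
    return 0
  overlap = sum(min(f.get(k, 0), g.get(k, 0)) for k in set(f) | set(g))
  return overlap / max(sum(f.values()), sum(g.values()))

a = {"48197": 1, "53532": 1, "62671": 1}
b = {"48197": 1, "53532": 1, "11111": 1}
print(simm(a, a))          # 1.0
print(simm(a, b))          # 0.666...
print(simm(a, {"9": 1}))   # 0.0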

Tuesday 25 October 2016

learning how to spell

In this post we are going to learn how to spell using HTM style high order sequences. This may look trivial, eg compared to how you would do it in python, but it is a nice proof of concept of how the brain might do it, or at least a mathematical abstraction of that. There are two stages: the learning stage and the recall stage. And there are two components to the learning stage: encoding all the symbols we will use, and then learning the sequences of those symbols. In our case 69 symbols, 74,550 words, and hence 74,550 sequences. The words are from the Moby project. I guess the key point of this post is that without the concept of mini-columns (and our random-column[k] operator), we could not represent distinct sequences of our symbols. Another point is that this is just a proof of concept. In practice we should be able to carry the idea over to other types of sequences, not just individual letters. I'll probably try that later, eg maybe sequences of words in text.

Here is what the encode stage looks like, where we map symbols to random SDR's with 10 bits on, out of a possible 65536 bits. I chose 65536 since it works best if our encode SDR's do not have any overlap; eg, in my first attempt I used only 2048 total bits, but that had issues. Thanks to our sparse representation, this change was essentially free.
full |range> => range(|1>,|65536>)
encode |-> => pick[10] full |range>
encode |a> => pick[10] full |range>
encode |b> => pick[10] full |range>
encode |l> => pick[10] full |range>
encode |e> => pick[10] full |range>
encode |c> => pick[10] full |range>
encode |o> => pick[10] full |range>
encode |u> => pick[10] full |range>
encode |s> => pick[10] full |range>
encode |d> => pick[10] full |range>
encode |m> => pick[10] full |range>
encode |i> => pick[10] full |range>
encode |g> => pick[10] full |range>
encode |y> => pick[10] full |range>
encode |n> => pick[10] full |range>
...
And note we have single symbols inside our kets, but they could be anything.
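For a concrete picture of what pick[10] full |range> is doing, here is a small plain-Python sketch of the encode step, with random SDR's represented as sets of on-bit indices (the names here are mine, not the console's):
import random

TOTAL_BITS = 65536   # size of the encode space
ON_BITS = 10         # on bits per symbol

# Sketch only: each symbol gets a random, almost certainly non-overlapping, SDR.
encode = {}
for symbol in "-abcdefghijklmnopqrstuvwxyz":
  encode[symbol] = set(random.sample(range(1, TOTAL_BITS + 1), ON_BITS))

print(sorted(encode["f"]))             # eg [13432, 14968, 15225, ...]
print(len(encode["f"] & encode["r"]))  # almost always 0, since 10 out of 65536 is so sparse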

Next we have the learn sequence stage, eg "frog":
-- frog
-- f, r, o, g
first-letter |frog> => random-column[10] encode |f>
parent-word |node 35839: *> => |frog>
pattern |node 35839: 0> => first-letter |frog>
then |node 35839: 0> => random-column[10] encode |r>

pattern |node 35839: 1> => then |node 35839: 0>
then |node 35839: 1> => random-column[10] encode |o>

pattern |node 35839: 2> => then |node 35839: 1>
then |node 35839: 2> => random-column[10] encode |g>

pattern |node 35839: 3> => then |node 35839: 2>
then |node 35839: 3> #=> append-column[10] encode |end of sequence>
This is what that looks like after learning, first in the standard superposition representation:
sa: dump |frog>
first-letter |frog> => |48197: 6> + |53532: 0> + |62671: 2> + |14968: 2> + |62260: 8> + |16180: 1> + |15225: 0> + |19418: 4> + |24524: 7> + |13432: 6>

sa: dump starts-with |node 35839: >
parent-word |node 35839: *> => |frog>

pattern |node 35839: 0> => |48197: 6> + |53532: 0> + |62671: 2> + |14968: 2> + |62260: 8> + |16180: 1> + |15225: 0> + |19418: 4> + |24524: 7> + |13432: 6>
then |node 35839: 0> => |56997: 6> + |38159: 3> + |55020: 5> + |10359: 6> + |29215: 7> + |56571: 6> + |55139: 9> + |27229: 5> + |57329: 7> + |56577: 4>

pattern |node 35839: 1> => |56997: 6> + |38159: 3> + |55020: 5> + |10359: 6> + |29215: 7> + |56571: 6> + |55139: 9> + |27229: 5> + |57329: 7> + |56577: 4>
then |node 35839: 1> => |41179: 2> + |12201: 9> + |63912: 7> + |33066: 1> + |47072: 1> + |17108: 4> + |48988: 0> + |9205: 2> + |34935: 2> + |513: 2>

pattern |node 35839: 2> => |41179: 2> + |12201: 9> + |63912: 7> + |33066: 1> + |47072: 1> + |17108: 4> + |48988: 0> + |9205: 2> + |34935: 2> + |513: 2>
then |node 35839: 2> => |55496: 8> + |57594: 7> + |60795: 5> + |54740: 4> + |40157: 2> + |2940: 7> + |51329: 1> + |24597: 7> + |15515: 9> + |47272: 8>

pattern |node 35839: 3> => |55496: 8> + |57594: 7> + |60795: 5> + |54740: 4> + |40157: 2> + |2940: 7> + |51329: 1> + |24597: 7> + |15515: 9> + |47272: 8>
then |node 35839: 3> #=> append-column[10] encode |end of sequence>
And now in the display representation:
sa: display |frog>
  frog
  supported-ops: op: first-letter
   first-letter: 48197: 6, 53532: 0, 62671: 2, 14968: 2, 62260: 8, 16180: 1, 15225: 0, 19418: 4, 24524: 7, 13432: 6

sa: display starts-with |node 35839: >
  node 35839: *
  supported-ops: op: parent-word
    parent-word: frog

  node 35839: 0
  supported-ops: op: pattern, op: then
        pattern: 48197: 6, 53532: 0, 62671: 2, 14968: 2, 62260: 8, 16180: 1, 15225: 0, 19418: 4, 24524: 7, 13432: 6
           then: 56997: 6, 38159: 3, 55020: 5, 10359: 6, 29215: 7, 56571: 6, 55139: 9, 27229: 5, 57329: 7, 56577: 4

  node 35839: 1
  supported-ops: op: pattern, op: then
        pattern: 56997: 6, 38159: 3, 55020: 5, 10359: 6, 29215: 7, 56571: 6, 55139: 9, 27229: 5, 57329: 7, 56577: 4
           then: 41179: 2, 12201: 9, 63912: 7, 33066: 1, 47072: 1, 17108: 4, 48988: 0, 9205: 2, 34935: 2, 513: 2

  node 35839: 2
  supported-ops: op: pattern, op: then
        pattern: 41179: 2, 12201: 9, 63912: 7, 33066: 1, 47072: 1, 17108: 4, 48988: 0, 9205: 2, 34935: 2, 513: 2
           then: 55496: 8, 57594: 7, 60795: 5, 54740: 4, 40157: 2, 2940: 7, 51329: 1, 24597: 7, 15515: 9, 47272: 8

  node 35839: 3
  supported-ops: op: pattern, op: then
        pattern: 55496: 8, 57594: 7, 60795: 5, 54740: 4, 40157: 2, 2940: 7, 51329: 1, 24597: 7, 15515: 9, 47272: 8
           then: # append-column[10] encode |end of sequence>
So, what on Earth does this all mean? Let's try to unwrap it by first considering the first-letter of our sample sequence "frog", the letter f. Here is what the encoded symbol "f" looks like, followed by the mini-column version that is specific to the "frog" sequence, and then the "fish" sequence:
sa: encode |f>
|48197> + |53532> + |62671> + |14968> + |62260> + |16180> + |15225> + |19418> + |24524> + |13432>

sa: first-letter |frog>
|48197: 6> + |53532: 0> + |62671: 2> + |14968: 2> + |62260: 8> + |16180: 1> + |15225: 0> + |19418: 4> + |24524: 7> + |13432: 6>

sa: first-letter |fish>
|48197: 4> + |53532: 0> + |62671: 7> + |14968: 3> + |62260: 4> + |16180: 2> + |15225: 5> + |19418: 3> + |24524: 3> + |13432: 3>
Perhaps one way to understand the |x: y> kets is as co-ordinates of synapses. The encode step provides the x co-ordinate, and the mini-column cell provides the y co-ordinate, where the x co-ords are the same for all instances of "f", but the y co-ords are specific to particular sequences. It is this property that allows us to encode an entire dictionary's worth of words, composed of just a handful of symbols.
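To make that concrete, here is a rough plain-Python sketch of the encode and random-column[k] idea. This is not the project's implementation; the names encode_symbol and random_column, and the 65536-point address space, are just assumptions for illustration:
import random

TOTAL_BITS = 65536   # assumed size of the address space for the x co-ordinates
ON_BITS = 10         # number of on bits per symbol
COLUMN_SIZE = 10     # k in random-column[k]

def encode_symbol(symbol, memo={}):
    # map a symbol to a fixed random set of x co-ordinates (its SDR)
    if symbol not in memo:
        memo[symbol] = random.sample(range(TOTAL_BITS), ON_BITS)
    return memo[symbol]

def random_column(sdr, k=COLUMN_SIZE):
    # attach a random y co-ordinate in [0, k) to each x co-ordinate
    return [(x, random.randrange(k)) for x in sdr]

f = encode_symbol("f")
first_letter_frog = random_column(f)   # mini-column version specific to one sequence
first_letter_fish = random_column(f)   # same x co-ords, different y co-ords
The x co-ords come out identical in both cases, which is what lets us recognise the symbol no matter which sequence it came from.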

Once we have the start superposition (otherwise known as an SDR, since the coefficients of our kets are all equal to 1) for our sequence, we then use if-then machines to define the rest of the sequence.

The "f" superposition is followed by the "r" superposition:
pattern |node 35839: 0> => |48197: 6> + |53532: 0> + |62671: 2> + |14968: 2> + |62260: 8> + |16180: 1> + |15225: 0> + |19418: 4> + |24524: 7> + |13432: 6>
then |node 35839: 0> => |56997: 6> + |38159: 3> + |55020: 5> + |10359: 6> + |29215: 7> + |56571: 6> + |55139: 9> + |27229: 5> + |57329: 7> + |56577: 4>
The "r" superposition followed by the "o" superposition:
pattern |node 35839: 1> => |56997: 6> + |38159: 3> + |55020: 5> + |10359: 6> + |29215: 7> + |56571: 6> + |55139: 9> + |27229: 5> + |57329: 7> + |56577: 4>
then |node 35839: 1> => |41179: 2> + |12201: 9> + |63912: 7> + |33066: 1> + |47072: 1> + |17108: 4> + |48988: 0> + |9205: 2> + |34935: 2> + |513: 2>
And so on. The next thing to note is that we can invert our superpositions back to their original symbols using this operator:
name-pattern |*> #=> clean select[1,1] similar-input[encode] extract-category pattern |_self>
where the "pattern" operator maps from node space to pattern space, "extract-category" is the inverse of our "random-column[k]" operator, "similar-input[encode]" is essentially the inverse of the "encode" operator, "select[1,1]" selects the first element in the superposition, and "clean" sets the coefficient of all kets to 1. Let's unwrap it:
sa: pattern |node 35839: 0>
|48197: 6> + |53532: 0> + |62671: 2> + |14968: 2> + |62260: 8> + |16180: 1> + |15225: 0> + |19418: 4> + |24524: 7> + |13432: 6>

sa: extract-category pattern |node 35839: 0>
|48197> + |53532> + |62671> + |14968> + |62260> + |16180> + |15225> + |19418> + |24524> + |13432>

sa: similar-input[encode] extract-category pattern |node 35839: 0>
1.0|f>

sa: select[1,1] similar-input[encode] extract-category pattern |node 35839: 0>
1.0|f>

sa: clean select[1,1] similar-input[encode] extract-category pattern |node 35839: 0>
|f>
And here are some examples:
sa: name-pattern |node 35839: 0>
|f>

sa: name-pattern |node 35839: 1>
|r>

sa: name-pattern |node 35839: 2>
|o>

sa: name-pattern |node 35839: 3>
|g>
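To show the idea behind name-pattern in a more familiar form, here is a small plain-Python sketch: drop the y co-ordinates (the extract-category step), then pick the symbol whose encoding overlaps the pattern the most (a stand-in for similar-input[encode]). The tiny encode_map below just reuses the x co-ordinates of the "f" and "r" superpositions from the dump above; the function names are made up for illustration:
encode_map = {
    "f": {48197, 53532, 62671, 14968, 62260, 16180, 15225, 19418, 24524, 13432},
    "r": {56997, 38159, 55020, 10359, 29215, 56571, 55139, 27229, 57329, 56577},
}

def extract_category(pattern):
    # keep only the x co-ordinate of each |x: y> ket
    return {x for (x, y) in pattern}

def name_pattern(pattern):
    # return the symbol whose encoding overlaps the given pattern the most
    xs = extract_category(pattern)
    return max(encode_map, key=lambda s: len(xs & encode_map[s]))

# the stored pattern |node 35839: 0> from the dump above, as (x, y) pairs:
pattern_0 = [(48197, 6), (53532, 0), (62671, 2), (14968, 2), (62260, 8),
             (16180, 1), (15225, 0), (19418, 4), (24524, 7), (13432, 6)]
print(name_pattern(pattern_0))   # -> 'f'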
Indeed, the name operator is a key piece we need to define our spell operator. The other piece is the next operator, which, given the current pattern, returns the next pattern:
next (*) #=> then clean select[1,1] similar-input[pattern] |_self>
Though due to an incomplete parser (this project is still very much a work in progress!) we can currently only implement this version of it:
next-pattern |*> #=> then clean select[1,1] similar-input[pattern] pattern |_self>
where "pattern" maps from node space to pattern space, "similar-input[pattern]" is approximately the inverse of "pattern", "select[1,1]" and "clean" tidy up our results, and then the "then" operator maps to the next pattern. Let's unwrap it:
sa: pattern |node 35839: 0>
|48197: 6> + |53532: 0> + |62671: 2> + |14968: 2> + |62260: 8> + |16180: 1> + |15225: 0> + |19418: 4> + |24524: 7> + |13432: 6>

sa: similar-input[pattern] pattern |node 35839: 0>
1.0|node 35839: 0> + 0.6|node 35370: 0> + 0.5|node 11806: 3> + 0.5|node 18883: 7> + 0.5|node 20401: 8> + 0.5|node 20844: 5> + 0.5|node 26112: 8> + 0.5|node 29209: 4> + 0.5|node 33566: 0> + 0.5|node 33931: 0> + 0.5|node 35463: 0> + ...

sa: select[1,1] similar-input[pattern] pattern |node 35839: 0>
1.0|node 35839: 0>

sa: clean select[1,1] similar-input[pattern] pattern |node 35839: 0>
|node 35839: 0>

sa: then clean select[1,1] similar-input[pattern] pattern |node 35839: 0>
|56997: 6> + |38159: 3> + |55020: 5> + |10359: 6> + |29215: 7> + |56571: 6> + |55139: 9> + |27229: 5> + |57329: 7> + |56577: 4>
And here are some examples:
sa: next-pattern |node 35839: 0>
|56997: 6> + |38159: 3> + |55020: 5> + |10359: 6> + |29215: 7> + |56571: 6> + |55139: 9> + |27229: 5> + |57329: 7> + |56577: 4>

sa: next-pattern |node 35839: 1>
|41179: 2> + |12201: 9> + |63912: 7> + |33066: 1> + |47072: 1> + |17108: 4> + |48988: 0> + |9205: 2> + |34935: 2> + |513: 2>

sa: next-pattern |node 35839: 2>
|55496: 8> + |57594: 7> + |60795: 5> + |54740: 4> + |40157: 2> + |2940: 7> + |51329: 1> + |24597: 7> + |15515: 9> + |47272: 8>

sa: next-pattern |node 35839: 3>
|31379: 0> + |31379: 1> + |31379: 2> + |31379: 3> + |31379: 4> + |31379: 5> + |31379: 6> + |31379: 7> + |31379: 8> + |31379: 9> + |46188: 0> + |46188: 1> + |46188: 2> + |46188: 3> + |46188: 4> + |46188: 5> + |46188: 6> + |46188: 7> + |46188: 8> + |46188: 9> + |9864: 0> + |9864: 1> + |9864: 2> + |9864: 3> + |9864: 4> + |9864: 5> + |9864: 6> + |9864: 7> + |9864: 8> + |9864: 9> + |49649: 0> + |49649: 1> + |49649: 2> + |49649: 3> + |49649: 4> + |49649: 5> + |49649: 6> + |49649: 7> + |49649: 8> + |49649: 9> + |43145: 0> + |43145: 1> + |43145: 2> + |43145: 3> + |43145: 4> + |43145: 5> + |43145: 6> + |43145: 7> + |43145: 8> + |43145: 9> + |45289: 0> + |45289: 1> + |45289: 2> + |45289: 3> + |45289: 4> + |45289: 5> + |45289: 6> + |45289: 7> + |45289: 8> + |45289: 9> + |38722: 0> + |38722: 1> + |38722: 2> + |38722: 3> + |38722: 4> + |38722: 5> + |38722: 6> + |38722: 7> + |38722: 8> + |38722: 9> + |43012: 0> + |43012: 1> + |43012: 2> + |43012: 3> + |43012: 4> + |43012: 5> + |43012: 6> + |43012: 7> + |43012: 8> + |43012: 9> + |1949: 0> + |1949: 1> + |1949: 2> + |1949: 3> + |1949: 4> + |1949: 5> + |1949: 6> + |1949: 7> + |1949: 8> + |1949: 9> + |31083: 0> + |31083: 1> + |31083: 2> + |31083: 3> + |31083: 4> + |31083: 5> + |31083: 6> + |31083: 7> + |31083: 8> + |31083: 9>
Note that the final pattern is the end-of-sequence pattern "append-column[10] encode |end of sequence>", used to signify to our code the end of a sequence. I don't know the biological equivalent, but it seems plausible to me that there is one. But even if not, no big drama; I'm already abstracted away from the underlying biology. Next up, here is the code for our spell operator, though it is largely pseudo-code at the moment:
  next (*) #=> then clean select[1,1] similar-input[pattern] |_self>
  name (*) #=> clean select[1,1] similar-input[encode] extract-category |_self>

  not |yes> => |no>
  not |no> => |yes>

  spell (*) #=>
    if not do-you-know first-letter |_self>:
      return |_self>
    current |node> => first-letter |_self>
    while name current |node> /= |end of sequence>:
      print name current |node>
      current |node> => next current |node>
    return |end of sequence>
And here is that translated to the underlying python:
# one is a ket
def spell(one,context):
  start = one.apply_op(context,"first-letter")
  if len(start) == 0:                  # we don't know the first letter, so return the input ket
    return one
  print("spell word:",one)
  # current |node> => first-letter |_self>
  context.learn("current","node",start)
  # name current |node>
  name = context.recall("current","node",True).apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)
  while name.the_label() != "end of sequence":
    print(name)
    # current |node> => next current |node>
    context.learn("current","node",ket("node").apply_op(context,"current").similar_input(context,"pattern").select_range(1,1).apply_sigmoid(clean).apply_op(context,"then"))
    # name current |node>
    name = context.recall("current","node",True).apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)
  return name
And finally, let's actually use this code!
sa: spell |frog>
spell word: |frog>
|f>
|r>
|o>
|g>
|end of sequence>

sa: spell |fish>
spell word: |fish>
|f>
|i>
|s>
|h>
|end of sequence>

sa: spell |rabbit>
spell word: |rabbit>
|r>
|a>
|b>
|b>
|i>
|t>
|end of sequence>
Next up, let's see what we can do with this data. I warn you, the answer is quite a lot. First, a basic look at the number of learn rules in our data:
-- the number of encode learn rules, ie, the number of symbols:
sa: how-many rel-kets[encode]
|number: 69>

-- the number of "first-letter" operators, ie, the number of words:
sa: how-many rel-kets[first-letter]
|number: 74550>

-- the number of "pattern" operators, ie, the number of well, patterns:
sa: how-many rel-kets[pattern]
|number: 656132>

-- the number of nodes, ie the number of if-then machines, ie, roughly the number of neurons in our system:
sa: how-many starts-with |node >
|number: 730682>
Next, let's produce a bar-chart of the lengths of our sequences/words:
sa: bar-chart[50] plus[1] ket-sort extract-value clean similar-input[then] append-column[10] encode |end of sequence>
----------
1  :
2  : |
3  : ||||||
4  : ||||||||||||||||||||
5  : ||||||||||||||||||||||||||||||
6  : ||||||||||||||||||||||||||||||||||||||||||
7  : ||||||||||||||||||||||||||||||||||||||||||||||||
8  : ||||||||||||||||||||||||||||||||||||||||||||||||||
9  : ||||||||||||||||||||||||||||||||||||||||||||||||
10 : ||||||||||||||||||||||||||||||||||||||||
11 : ||||||||||||||||||||||||||||||
12 : ||||||||||||||||||||||
13 : |||||||||||||||
14 : ||||||||||
15 : |||||||
16 : |||||
17 : |||
18 : ||
19 : |
20 : |
21 :
22 :
23 :
24 :
25 :
26 :
27 :
28 :
29 :
30 :
31 :
32 :
33 :
34 :
37 :
39 :
42 :
45 :
53 :
----------
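As a sanity check, the same statistic is easy to reproduce in plain Python directly from a word list, one word per line. The /usr/share/dict/words path is just an assumption here, not part of the project:
from collections import Counter

with open("/usr/share/dict/words") as f:
    words = [line.strip() for line in f if line.strip()]

lengths = Counter(len(w) for w in words)
longest_bar = max(lengths.values())
for length in sorted(lengths):
    bar = "|" * round(50 * lengths[length] / longest_bar)
    print(f"{length:<3}: {bar}")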
Next, given a symbol predict what comes next:
next-symbol-after |*> #=> bar-chart[50] ket-sort similar-input[encode] extract-category then drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>

-- what usually follows "A":
sa: next-symbol-after |A>
----------
1               :
2               :
                :
-               :
.               : |||||||||
/               :
a               : |
A               :
b               : ||||
B               :
c               : ||||
C               :
d               : ||||||
D               :
e               : |||
E               :
end of sequence : ||||||||||||||||||||||||||||||||||||||||||||||||||
f               : ||||
F               :
g               : |||
G               :
h               : |
H               :
i               : ||
I               :
j               :
k               : |
l               : |||||||||||||||||||||||
L               :
m               : |||||||||||
M               : |
n               : |||||||||||||||||||||
N               :
O               :
o               :
p               : ||||
P               :
q               :
Q               :
r               : ||||||||||||||||||||||
R               :
s               : |||||||||
S               :
t               : ||||||
T               : |
u               : ||||||||||
v               : |||
x               :
y               : |
z               : |
----------

-- what usually follows "a":
sa: next-symbol-after |a>
----------
                :
'               :
-               :
.               :
a               :
b               : ||
c               : |||||
d               : |||
e               :
end of sequence : ||||||||||||||||||||||||||||||||||||||||||||||||||
f               :
g               : ||
h               :
i               : ||
I               :
j               :
k               : |
l               : ||||||||||
m               : |||
n               : ||||||||||||||
o               :
p               : ||
q               :
r               : |||||||||||
R               :
s               : |||||
t               : |||||||||||
u               : |
v               : |
w               :
x               :
y               : |
z               :
----------


-- what usually follows "k":
sa: next-symbol-after |k>
----------
                : |
'               :
-               :
.               :
a               : |
b               :
c               :
d               :
e               : ||||
end of sequence : ||||||||||||||||||||||||||||||||||||||||||||||||||
f               :
g               :
h               :
H               :
i               : ||
I               :
j               :
k               :
l               :
m               :
n               :
o               :
p               :
r               :
R               :
s               :
t               :
u               :
v               :
V               :
w               :
W               :
y               :
----------
And so on. Though the graphs are somewhat pretty, the result is actually a bit boring. These correspond to a standard Markov model: given a character, predict the next character. Much more interesting would be: given a sequence of characters, predict what comes next. I tried to do this, but it was slow and didn't work quite right, so I'll try again sometime in the future. But even that is somewhat boring, since firefox and google already do it, and I suspect it could be done with only a few lines of python. But I suppose that is missing the point. The point is to learn a large number of sequences in a proposed brain-like way, as a proof of concept, and hopefully have it be useful sometime in the future.
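For what it's worth, here is roughly what that few-lines-of-python version looks like, again assuming a one-word-per-line word list (the path and names are assumptions, and this is a plain Markov count, not the brain-like version above):
from collections import Counter, defaultdict

next_counts = defaultdict(Counter)
with open("/usr/share/dict/words") as f:
    for word in (line.strip() for line in f):
        for current, nxt in zip(word, word[1:] + "\0"):   # "\0" marks end of sequence
            next_counts[current][nxt] += 1

# what usually follows "a":
for symbol, count in next_counts["a"].most_common(5):
    label = "end of sequence" if symbol == "\0" else symbol
    print(label, count)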

Next up, what are the usual positions of a symbol in a word:
sa: symbol-positions-for |*> #=> bar-chart[50] ket-sort extract-value clean drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>

-- the bar chart of the positions for "B":
sa: symbol-positions-for |B>
----------
0  : ||||||||||||||||||||||||||||||||||||||||||||||||||
1  :
2  : |
3  :
4  : |
5  :
6  : |
7  :
8  : |
9  :
10 :
11 :
12 :
13 :
14 :
15 :
17 :
----------

-- the bar chart of the positions for "b":
sa: symbol-positions-for |b>
----------
0  : ||||||||||||||||||||||||||||||||||||||||||||||||||
1  : ||||||
2  : |||||||||||||||||||||||
3  : ||||||||||||||||||||
4  : ||||||||||||||
5  : |||||||||||||
6  : ||||||||
7  : |||||||
8  : |||||
9  : |||
10 : |||
11 : |
12 :
13 :
14 :
15 :
16 :
17 :
18 :
19 :
20 :
22 :
23 :
28 :
----------

sa: symbol-positions-for |k>
----------
0  : ||||||||||||||||||
1  : |||||
2  : ||||||||||||
3  : ||||||||||||||||||||||||||||||||||||||||||||||||||
4  : ||||||||||||||||||||||||||||||||
5  : |||||||||
6  : ||||||||||||
7  : ||||||||||||
8  : |||||||||||||
9  : ||||||||
10 : |||||
11 : ||||
12 : |||
13 : |
14 : |
15 :
16 :
17 :
18 :
19 :
20 :
21 :
24 :
----------

sa: symbol-positions-for |x>
----------
0  : ||||
1  : ||||||||||||||||||||||||||||||||||||||||||||||||||
2  : |||||||||||||||||||||||||||||||||||
3  : |||||||||||||
4  : ||||||||
5  : |||||||||||
6  : ||||||||||
7  : |||||||
8  : |||||
9  : ||||
10 : |||
11 : ||
12 : ||
13 : |
14 : |
15 :
16 :
17 :
18 :
19 :
20 :
25 :
----------
And so on for other symbols. Next we introduce a stripped-down follow-sequence operator. This one will follow a sequence from any starting point, not just the first letter, though in spirit it is identical to the above spell operator. Indeed, it would have been cleaner for me to have defined spell in terms of follow-sequence:
  next (*) #=> then clean select[1,1] similar-input[pattern] |_self>
  name (*) #=> clean select[1,1] similar-input[encode] extract-category |_self>

  follow-sequence (*) #=>
    current |node> => |_self>
    while name current |node> /= |end of sequence>:
      print name current |node>
      current |node> => next current |node>
    return |end of sequence>

  spell |*> #=> follow-sequence first-letter |_self>
In our first example, just jump into any random sequence and follow it:
sa: follow-a-random-sequence |*> #=> follow-sequence pattern pick-elt rel-kets[pattern] |>
sa: follow-a-random-sequence
|e>
|n>
|c>
|h>
| >
|k>
|n>
|i>
|f>
|e>
|end of sequence>

-- find the parent word:
sa: parent-word |node 69596: 2>
|trench knife>

-- another example:
sa: follow-a-random-sequence
|e>
|i>
|g>
|h>
|end of sequence>

sa: parent-word |node 43118: 3>
|inveigh>
Next, spell a random word:
sa: spell-a-random-word |*> #=> follow-sequence first-letter pick-elt rel-kets[first-letter] |>
sa: spell-a-random-word
|Z>
|o>
|r>
|o>
|a>
|s>
|t>
|r>
|i>
|a>
|n>
|i>
|s>
|m>
0|end of sequence>
Jump into a random sequence and start at the given symbol:
sa: follow-sequence-starting-at |*> #=> follow-sequence pattern pick-elt drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>
sa: follow-sequence-starting-at |c>
|c>
|a>
|r>
|d>
|end of sequence>

sa: parent-word |node 42039: 6>
|index card>

sa: follow-sequence-starting-at |c>
|c>
|u>
|m>
|b>
|e>
|n>
|t>
|end of sequence>

sa: parent-word |node 27871: 2>
|decumbent>
Next, spell a random word that starts with a given symbol:
sa: spell-a-random-word-that-starts-with |*> #=> follow-sequence first-letter pick-elt drop-below[0.09] similar-input[first-letter] append-column[10] encode |_self>
sa: spell-a-random-word-that-starts-with |X>
|X>
|e>
|n>
|o>
|p>
|h>
|a>
|n>
|e>
|s>
|end of sequence>

sa: spell-a-random-word-that-starts-with |f>
|f>
|o>
|r>
|g>
|a>
|v>
|e>
|end of sequence>

sa: spell-a-random-word-that-starts-with |f>
|f>
|a>
|r>
|i>
|n>
|a>
|end of sequence>
Next, spell a random word that contains the given symbol:
sa: spell-a-random-word-that-contains |*> #=> follow-sequence pattern merge-labels (extract-category pick-elt drop-below[0.09] similar-input[pattern] append-column[10] encode |_self> + |: 0>)
sa: spell-a-random-word-that-contains |x>
|b>
|a>
|u>
|x>
|i>
|t>
|e>
|end of sequence>

sa: spell-a-random-word-that-contains |z>
|L>
|e>
|i>
|b>
|n>
|i>
|z>
|end of sequence>
Now, I think it might be instructive to see all our operator definitions at once:
  name-pattern |*> #=> clean select[1,1] similar-input[encode] extract-category pattern |_self>
  next-pattern |*> #=> then clean select[1,1] similar-input[pattern] pattern |_self>

  name (*) #=> clean select[1,1] similar-input[encode] extract-category |_self>
  next (*) #=> then clean select[1,1] similar-input[pattern] |_self>

  not |yes> => |no>
  not |no> => |yes>

  spell (*) #=>
    if not do-you-know first-letter |_self>:
      return |_self>
    current |node> => first-letter |_self>
    while name current |node> /= |end of sequence>:
      print name current |node>
      current |node> => next current |node>
    return |end of sequence>

  sequence-lengths |*> #=> bar-chart[50] plus[1] ket-sort extract-value clean similar-input[then] append-column[10] encode |end of sequence>
  next-symbol-after |*> #=> bar-chart[50] ket-sort similar-input[encode] extract-category then drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>
  symbol-positions-for |*> #=> bar-chart[50] ket-sort extract-value clean drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>

  follow-sequence (*) #=>
    current |node> => |_self>
    while name current |node> /= |end of sequence>:
      print name current |node>
      current |node> => next current |node>
    return |end of sequence>

  spell |*> #=> follow-sequence first-letter |_self>

  spell-a-random-word |*> #=> follow-sequence first-letter pick-elt rel-kets[first-letter] |>
  follow-a-random-sequence |*> #=> follow-sequence pattern pick-elt rel-kets[pattern] |>

  spell-a-random-word-that-starts-with |*> #=> follow-sequence first-letter pick-elt drop-below[0.09] similar-input[first-letter] append-column[10] encode |_self>
  follow-sequence-starting-at |*> #=> follow-sequence pattern pick-elt drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>
  spell-a-random-word-that-contains |*> #=> follow-sequence pattern merge-labels (extract-category pick-elt drop-below[0.09] similar-input[pattern] append-column[10] encode |_self> + |: 0>)
So there we have it. We successfully learned and recalled a whole dictionary of words using some HTM-inspired ideas. In the process this became the largest and most complex use of my language/notation yet, though I'm still waiting to find an application of my notation to something really interesting. For example, I'm hoping that with ideas from if-then machines, sequences, and chunked sequences we might be able to encode grammatical structures. That is a long way off yet, but it might just be possible. Another goal is to implement something similar to word2vec and cortical.io that maps words to superpositions, with the property that semantically similar words have similar superpositions.

In the next post I plan to extend the above to learning and recalling chunked sequences. In particular, some digits of pi and the alphabet.

Update: we can also count letter frequencies. I guess not super interesting, but may as well add it. Here are the needed operators:
  count-first-letter-frequency |*> #=> pop-float rewrite( how-many drop-below[0.09] similar-input[first-letter] append-column[10] encode |_self>, |number>, |_self> )
  count-letter-frequency |*> #=> pop-float rewrite( how-many drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>, |number>, |_self> )
And now apply them:
-- first letter frequency for the upper case alphabet:
sa: bar-chart[50] count-first-letter-frequency split |A B C D E F G H I J K L M N O P Q R S T U V W X Y Z>
----------
A : ||||||||||||||||||||||||||||||||||||||||||||||
B : |||||||||||||||||||||||||||||||||||||||||||||
C : ||||||||||||||||||||||||||||||||||||||||||||||||||
D : ||||||||||||||||||||||||
E : |||||||||||||||||||
F : ||||||||||||||||||
G : ||||||||||||||||||||||||||||
H : ||||||||||||||||||||||||||||
I : ||||||||||||||
J : |||||||||||||
K : ||||||||||||||||||
L : |||||||||||||||||||||||||||||
M : |||||||||||||||||||||||||||||||||||||||
N : ||||||||||||||||||
O : |||||||||||
P : ||||||||||||||||||||||||||||||||
Q : ||
R : ||||||||||||||||||
S : |||||||||||||||||||||||||||||||||||||||||||
T : ||||||||||||||||||||||||
U : |||||
V : ||||||||||
W : ||||||||||||
X :
Y : |||
Z : |||
----------

-- first letter frequency for the lower case alphabet:
sa: bar-chart[50] count-first-letter-frequency split |a b c d e f g h i j k l m n o p q r s t u v w x y z>
----------
a : |||||||||||||||||||||||||||||
b : ||||||||||||||||||||||||||
c : ||||||||||||||||||||||||||||||||||||||||||||||
d : |||||||||||||||||||||||||
e : ||||||||||||||||||
f : |||||||||||||||||||||
g : ||||||||||||||||
h : |||||||||||||||||||
i : |||||||||||||||||
j : |||
k : ||||
l : ||||||||||||||||
m : ||||||||||||||||||||||
n : ||||||||
o : ||||||||||
p : ||||||||||||||||||||||||||||||||||||
q : ||
r : ||||||||||||||||||
s : ||||||||||||||||||||||||||||||||||||||||||||||||||
t : |||||||||||||||||||||||
u : ||||||||
v : |||||||
w : |||||||||||
x :
y : |
z : |
----------

-- letter frequency for the uppercase alphabet:
sa: bar-chart[50] count-letter-frequency split |A B C D E F G H I J K L M N O P Q R S T U V W X Y Z>
----------
A : |||||||||||||||||||||||||||||||||||||||||||||
B : ||||||||||||||||||||||||||||||||||||||||
C : ||||||||||||||||||||||||||||||||||||||||||||||||||
D : ||||||||||||||||||||||||
E : ||||||||||||||||||
F : ||||||||||||||||||
G : ||||||||||||||||||||||||||
H : |||||||||||||||||||||||||
I : ||||||||||||||||||||||||||
J : ||||||||||||
K : |||||||||||||||
L : ||||||||||||||||||||||||||
M : ||||||||||||||||||||||||||||||||||||
N : ||||||||||||||||
O : |||||||||||
P : ||||||||||||||||||||||||||||||||
Q : ||
R : ||||||||||||||||||||
S : ||||||||||||||||||||||||||||||||||||||||||||
T : ||||||||||||||||||||||
U : ||||
V : ||||||||||||
W : ||||||||||||
X : |
Y : |||
Z : ||
----------

-- letter frequency for the lowercase alphabet:
sa: bar-chart[50] count-letter-frequency split |a b c d e f g h i j k l m n o p q r s t u v w x y z>
----------
a : |||||||||||||||||||||||||||||||||||||||||
b : ||||||||
c : ||||||||||||||||||||
d : ||||||||||||||
e : ||||||||||||||||||||||||||||||||||||||||||||||||||
f : ||||||
g : ||||||||||
h : |||||||||||||
i : ||||||||||||||||||||||||||||||||||||
j :
k : ||||
l : |||||||||||||||||||||||||
m : |||||||||||||
n : |||||||||||||||||||||||||||||||
o : |||||||||||||||||||||||||||||||||
p : |||||||||||||
q :
r : ||||||||||||||||||||||||||||||||||
s : ||||||||||||||||||||||||||
t : |||||||||||||||||||||||||||||||
u : ||||||||||||||||
v : ||||
w : ||||
x : |
y : ||||||||
z : |
----------
And let's finish with the code all at once:
  count-first-letter-frequency |*> #=> pop-float rewrite( how-many drop-below[0.09] similar-input[first-letter] append-column[10] encode |_self>, |number>, |_self> )
  count-letter-frequency |*> #=> pop-float rewrite( how-many drop-below[0.09] similar-input[pattern] append-column[10] encode |_self>, |number>, |_self> )

  bar-chart[50] count-first-letter-frequency split |A B C D E F G H I J K L M N O P Q R S T U V W X Y Z>
  bar-chart[50] count-first-letter-frequency split |a b c d e f g h i j k l m n o p q r s t u v w x y z>

  bar-chart[50] count-letter-frequency split |A B C D E F G H I J K L M N O P Q R S T U V W X Y Z>
  bar-chart[50] count-letter-frequency split |a b c d e f g h i j k l m n o p q r s t u v w x y z>
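And as before, the same counts are easy to cross-check in plain Python from a word list (the path is an assumption, and this is just an illustration, not the project's code):
from collections import Counter

with open("/usr/share/dict/words") as f:
    words = [line.strip() for line in f if line.strip()]

first_letter_freq = Counter(w[0] for w in words)          # analogue of count-first-letter-frequency
letter_freq = Counter(c for w in words for c in w)        # analogue of count-letter-frequency

print(first_letter_freq["s"], letter_freq["s"])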
That's it for this update. Chunked sequences are coming up soon.

Thursday 29 September 2016

smoothed spike wave similarity

OK. Continuing on from the last post about spike wave similarity, let's look at smoothed spike wave similarity. This version is much more tolerant of the location of the spikes, allowing them to fall within a Gaussian around each integer rather than exactly on it. It takes a little bit of work to reproduce this behaviour though.

Let's jump in, and start with defining our spike waves:
  spikes |wave-1> => range(|0>,|1000>,|1>)
  spikes |wave-2> => range(|0>,|1000>,|2>)
  spikes |wave-3> => range(|0>,|1000>,|3>)
  spikes |wave-4> => range(|0>,|1000>,|4>)
  spikes |wave-5> => range(|0>,|1000>,|5>)
  spikes |wave-6> => range(|0>,|1000>,|6>)
  spikes |wave-7> => range(|0>,|1000>,|7>)
  spikes |wave-8> => range(|0>,|1000>,|8>)
  spikes |wave-9> => range(|0>,|1000>,|9>)
  spikes |wave-10> => range(|0>,|1000>,|10>)
  spikes |wave-11> => range(|0>,|1000>,|11>)
  spikes |wave-12> => range(|0>,|1000>,|12>)
  spikes |wave-13> => range(|0>,|1000>,|13>)
  spikes |wave-14> => range(|0>,|1000>,|14>)
  spikes |wave-15> => range(|0>,|1000>,|15>)
  spikes |wave-16> => range(|0>,|1000>,|16>)
  spikes |wave-17> => range(|0>,|1000>,|17>)
  spikes |wave-18> => range(|0>,|1000>,|18>)
  spikes |wave-19> => range(|0>,|1000>,|19>)
  spikes |wave-20> => range(|0>,|1000>,|20>)
  spikes |empty> => 0 spikes |wave-1>
Next, we need to make use of the smooth[dx] operator. Essentially it maps:
f(x) -> f(x - dx)/4 + f(x)/2 + f(x + dx)/4
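As a rough plain-Python sketch of that map, treating a superposition as a dict of value-to-coefficient pairs (the representation and names here are just assumptions for illustration, not the project's code):
from collections import defaultdict

def smooth(sp, dx):
    # f(x) -> f(x - dx)/4 + f(x)/2 + f(x + dx)/4
    out = defaultdict(float)
    for x, coeff in sp.items():
        out[round(x - dx, 10)] += coeff / 4
        out[round(x, 10)]      += coeff / 2
        out[round(x + dx, 10)] += coeff / 4
    return dict(out)

spike = {10.0: 1.0}
once = smooth(spike, 0.25)     # 0.25|9.75> + 0.5|10.0> + 0.25|10.25>
five = spike
for _ in range(5):             # repeated application, as in smooth[0.25]^5
    five = smooth(five, 0.25)
print(once)
print(sorted(five.items()))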
Here are a couple of examples to try and show how it works in practice:
sa: smooth[0.25] |10>
0.25|9.75> + 0.5|10.0> + 0.25|10.25>

sa: bar-chart[50] smooth[0.25] |10>
----------
9.75  : |||||||||||||||||||||||||
10.0  : ||||||||||||||||||||||||||||||||||||||||||||||||||
10.25 : |||||||||||||||||||||||||
----------
|bar chart>

sa: bar-chart[50] smooth[0.25]^5 |10>
----------
8.75  :
9.0   : |
9.25  : ||||||||
9.5   : |||||||||||||||||||||||
9.75  : |||||||||||||||||||||||||||||||||||||||||
10.0  : ||||||||||||||||||||||||||||||||||||||||||||||||||
10.25 : |||||||||||||||||||||||||||||||||||||||||
10.5  : |||||||||||||||||||||||
10.75 : ||||||||
11.0  : |
11.25 :
----------
|bar chart>
Basically, it maps spikes to Gaussian-smoothed spikes, which is exactly what we need for this post. Now, let's generate our smoothed spike waves:
  smooth-spike-op |*> #=> smooth[0.25]^5 spikes |_self>
  map[smooth-spike-op,smoothed-spikes] rel-kets[spikes]
Now, define a couple of operators:
-- the float-value operator, to define our sort order:
  float-value |*> #=> pop-float clean |_self>

-- the show-smoothed-spike-wave operator:
  show-smoothed-spike-wave |*> #=> bar-chart[50] select[1,45] sort-by[float-value] smoothed-spikes (|empty> + |_self>)
Now, look at the waves, and note they are no longer spikes, but Gaussian-smoothed spikes:
sa: show-smoothed-spike-wave |wave-1>
----------
-1.25 :
-1.0  : |
-0.75 : ||||||||
-0.5  : ||||||||||||||||||||||
-0.25 : ||||||||||||||||||||||||||||||||||||||
0.0   : ||||||||||||||||||||||||||||||||||||||||||||||||
0.25  : ||||||||||||||||||||||||||||||||||||||||||||||
0.5   : ||||||||||||||||||||||||||||||||||||||||||||
0.75  : |||||||||||||||||||||||||||||||||||||||||||||||
1.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
1.25  : |||||||||||||||||||||||||||||||||||||||||||||||
1.5   : ||||||||||||||||||||||||||||||||||||||||||||
1.75  : |||||||||||||||||||||||||||||||||||||||||||||||
2.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
2.25  : |||||||||||||||||||||||||||||||||||||||||||||||
2.5   : ||||||||||||||||||||||||||||||||||||||||||||
2.75  : |||||||||||||||||||||||||||||||||||||||||||||||
3.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
3.25  : |||||||||||||||||||||||||||||||||||||||||||||||
3.5   : ||||||||||||||||||||||||||||||||||||||||||||
3.75  : |||||||||||||||||||||||||||||||||||||||||||||||
4.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
4.25  : |||||||||||||||||||||||||||||||||||||||||||||||
4.5   : ||||||||||||||||||||||||||||||||||||||||||||
4.75  : |||||||||||||||||||||||||||||||||||||||||||||||
5.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
5.25  : |||||||||||||||||||||||||||||||||||||||||||||||
5.5   : ||||||||||||||||||||||||||||||||||||||||||||
5.75  : |||||||||||||||||||||||||||||||||||||||||||||||
6.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
6.25  : |||||||||||||||||||||||||||||||||||||||||||||||
6.5   : ||||||||||||||||||||||||||||||||||||||||||||
6.75  : |||||||||||||||||||||||||||||||||||||||||||||||
7.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
7.25  : |||||||||||||||||||||||||||||||||||||||||||||||
7.5   : ||||||||||||||||||||||||||||||||||||||||||||
7.75  : |||||||||||||||||||||||||||||||||||||||||||||||
8.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
8.25  : |||||||||||||||||||||||||||||||||||||||||||||||
8.5   : ||||||||||||||||||||||||||||||||||||||||||||
8.75  : |||||||||||||||||||||||||||||||||||||||||||||||
9.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
9.25  : |||||||||||||||||||||||||||||||||||||||||||||||
9.5   : ||||||||||||||||||||||||||||||||||||||||||||
9.75  : |||||||||||||||||||||||||||||||||||||||||||||||
----------

sa: show-smoothed-spike-wave |wave-2>
----------
-1.25 :
-1.0  : |
-0.75 : ||||||||
-0.5  : |||||||||||||||||||||||
-0.25 : |||||||||||||||||||||||||||||||||||||||||
0.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
0.25  : |||||||||||||||||||||||||||||||||||||||||
0.5   : |||||||||||||||||||||||
0.75  : |||||||||
1.0   : |||
1.25  : |||||||||
1.5   : |||||||||||||||||||||||
1.75  : |||||||||||||||||||||||||||||||||||||||||
2.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
2.25  : |||||||||||||||||||||||||||||||||||||||||
2.5   : |||||||||||||||||||||||
2.75  : |||||||||
3.0   : |||
3.25  : |||||||||
3.5   : |||||||||||||||||||||||
3.75  : |||||||||||||||||||||||||||||||||||||||||
4.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
4.25  : |||||||||||||||||||||||||||||||||||||||||
4.5   : |||||||||||||||||||||||
4.75  : |||||||||
5.0   : |||
5.25  : |||||||||
5.5   : |||||||||||||||||||||||
5.75  : |||||||||||||||||||||||||||||||||||||||||
6.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
6.25  : |||||||||||||||||||||||||||||||||||||||||
6.5   : |||||||||||||||||||||||
6.75  : |||||||||
7.0   : |||
7.25  : |||||||||
7.5   : |||||||||||||||||||||||
7.75  : |||||||||||||||||||||||||||||||||||||||||
8.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
8.25  : |||||||||||||||||||||||||||||||||||||||||
8.5   : |||||||||||||||||||||||
8.75  : |||||||||
9.0   : |||
9.25  : |||||||||
9.5   : |||||||||||||||||||||||
9.75  : |||||||||||||||||||||||||||||||||||||||||
----------

sa: show-smoothed-spike-wave |wave-3>
----------
-1.25 :
-1.0  : |
-0.75 : ||||||||
-0.5  : |||||||||||||||||||||||
-0.25 : |||||||||||||||||||||||||||||||||||||||||
0.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
0.25  : |||||||||||||||||||||||||||||||||||||||||
0.5   : |||||||||||||||||||||||
0.75  : ||||||||
1.0   : |
1.25  :
1.5   :
1.75  :
2.0   : |
2.25  : ||||||||
2.5   : |||||||||||||||||||||||
2.75  : |||||||||||||||||||||||||||||||||||||||||
3.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
3.25  : |||||||||||||||||||||||||||||||||||||||||
3.5   : |||||||||||||||||||||||
3.75  : ||||||||
4.0   : |
4.25  :
4.5   :
4.75  :
5.0   : |
5.25  : ||||||||
5.5   : |||||||||||||||||||||||
5.75  : |||||||||||||||||||||||||||||||||||||||||
6.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
6.25  : |||||||||||||||||||||||||||||||||||||||||
6.5   : |||||||||||||||||||||||
6.75  : ||||||||
7.0   : |
7.25  :
7.5   :
7.75  :
8.0   : |
8.25  : ||||||||
8.5   : |||||||||||||||||||||||
8.75  : |||||||||||||||||||||||||||||||||||||||||
9.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
9.25  : |||||||||||||||||||||||||||||||||||||||||
9.5   : |||||||||||||||||||||||
9.75  : ||||||||
----------

sa: show-smoothed-spike-wave |wave-7>
----------
-1.25 :
-1.0  : |
-0.75 : ||||||||
-0.5  : |||||||||||||||||||||||
-0.25 : |||||||||||||||||||||||||||||||||||||||||
0.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
0.25  : |||||||||||||||||||||||||||||||||||||||||
0.5   : |||||||||||||||||||||||
0.75  : ||||||||
1.0   : |
1.25  :
1.5   :
1.75  :
2.0   :
2.25  :
2.5   :
2.75  :
3.0   :
3.25  :
3.5   :
3.75  :
4.0   :
4.25  :
4.5   :
4.75  :
5.0   :
5.25  :
5.5   :
5.75  :
6.0   : |
6.25  : ||||||||
6.5   : |||||||||||||||||||||||
6.75  : |||||||||||||||||||||||||||||||||||||||||
7.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
7.25  : |||||||||||||||||||||||||||||||||||||||||
7.5   : |||||||||||||||||||||||
7.75  : ||||||||
8.0   : |
8.25  :
8.5   :
8.75  :
9.0   :
9.25  :
9.5   :
9.75  :
----------
OK. All nice and pretty. Now we want to look at their similarities. Presumably, for integer spacings they should give very similar results to the single-spike versions; only for off-integer spikes should there be a big difference. Let's see what the data says:
-- define our show similarity operator:
  show-smoothed-similarity |*> #=> bar-chart[50] ket-sort similar-input[smoothed-spikes] smoothed-spikes |_self>

-- now put it to use:
sa: show-smoothed-similarity |wave-1>
----------
wave-1  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-2  : ||||||||||||||||||||||||||||||||||||
wave-3  : ||||||||||||||||||||||||||
wave-4  : ||||||||||||||||||||
wave-5  : |||||||||||||||||
wave-6  : |||||||||||||||
wave-7  : |||||||||||||
wave-8  : |||||||||||
wave-9  : ||||||||||
wave-10 : |||||||||
wave-11 : ||||||||
wave-12 : ||||||||
wave-13 : |||||||
wave-14 : |||||||
wave-15 : ||||||
wave-16 : ||||||
wave-17 : ||||||
wave-18 : |||||
wave-19 : |||||
wave-20 : |||||
----------

sa: show-smoothed-similarity |wave-2>
----------
wave-1  : ||||||||||||||||||||||||||||||||||||
wave-2  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-3  : |||||||||||||||||||||||||
wave-4  : |||||||||||||||||||||||||
wave-5  : ||||||||||||||||
wave-6  : |||||||||||||||||
wave-7  : ||||||||||||
wave-8  : ||||||||||||
wave-9  : ||||||||||
wave-10 : ||||||||||
wave-11 : ||||||||
wave-12 : ||||||||
wave-13 : |||||||
wave-14 : |||||||
wave-15 : ||||||
wave-16 : ||||||
wave-17 : |||||
wave-18 : |||||
wave-19 : |||||
wave-20 : |||||
----------

sa: show-smoothed-similarity |wave-3>
----------
wave-1  : ||||||||||||||||||||||||||
wave-2  : |||||||||||||||||||||||||
wave-3  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-4  : ||||||||||||||||||
wave-5  : |||||||||||||||
wave-6  : |||||||||||||||||||||||||
wave-7  : |||||||||||
wave-8  : ||||||||||
wave-9  : ||||||||||||||||
wave-10 : ||||||||
wave-11 : ||||||||
wave-12 : ||||||||||||
wave-13 : |||||||
wave-14 : ||||||
wave-15 : ||||||||||
wave-16 : |||||
wave-17 : |||||
wave-18 : ||||||||
wave-19 : |||||
wave-20 : ||||
----------

sa: show-smoothed-similarity |wave-4>
----------
wave-1  : ||||||||||||||||||||
wave-2  : |||||||||||||||||||||||||
wave-3  : ||||||||||||||||||
wave-4  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-5  : |||||||||||||||
wave-6  : |||||||||||||||||
wave-7  : |||||||||||
wave-8  : |||||||||||||||||||||||||
wave-9  : |||||||||
wave-10 : ||||||||||
wave-11 : |||||||
wave-12 : ||||||||||||||||
wave-13 : ||||||
wave-14 : |||||||
wave-15 : |||||
wave-16 : ||||||||||||
wave-17 : |||||
wave-18 : |||||
wave-19 : |||||
wave-20 : ||||||||||
----------

sa: show-smoothed-similarity |wave-5>
----------
wave-1  : |||||||||||||||||
wave-2  : ||||||||||||||||
wave-3  : |||||||||||||||
wave-4  : |||||||||||||||
wave-5  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-6  : ||||||||||||
wave-7  : ||||||||||
wave-8  : |||||||||
wave-9  : ||||||||
wave-10 : |||||||||||||||||||||||||
wave-11 : |||||||
wave-12 : ||||||
wave-13 : ||||||
wave-14 : ||||||
wave-15 : ||||||||||||||||
wave-16 : |||||
wave-17 : |||||
wave-18 : |||||
wave-19 : ||||
wave-20 : ||||||||||||
----------

sa: show-smoothed-similarity |wave-6>
----------
wave-1  : |||||||||||||||
wave-2  : |||||||||||||||||
wave-3  : |||||||||||||||||||||||||
wave-4  : |||||||||||||||||
wave-5  : ||||||||||||
wave-6  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-7  : ||||||||||
wave-8  : ||||||||||||
wave-9  : ||||||||||||||||
wave-10 : ||||||||||
wave-11 : |||||||
wave-12 : |||||||||||||||||||||||||
wave-13 : ||||||
wave-14 : |||||||
wave-15 : ||||||||||
wave-16 : ||||||
wave-17 : ||||
wave-18 : ||||||||||||||||
wave-19 : ||||
wave-20 : |||||
----------

sa: show-smoothed-similarity |wave-7>
----------
wave-1  : |||||||||||||
wave-2  : ||||||||||||
wave-3  : |||||||||||
wave-4  : |||||||||||
wave-5  : ||||||||||
wave-6  : ||||||||||
wave-7  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-8  : |||||||||
wave-9  : ||||||||
wave-10 : |||||||
wave-11 : |||||||
wave-12 : ||||||
wave-13 : ||||||
wave-14 : |||||||||||||||||||||||||
wave-15 : |||||
wave-16 : |||||
wave-17 : |||||
wave-18 : ||||
wave-19 : ||||
wave-20 : ||||
----------

sa: show-smoothed-similarity |wave-8>
----------
wave-1  : |||||||||||
wave-2  : ||||||||||||
wave-3  : ||||||||||
wave-4  : |||||||||||||||||||||||||
wave-5  : |||||||||
wave-6  : ||||||||||||
wave-7  : |||||||||
wave-8  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-9  : ||||||||
wave-10 : ||||||||||
wave-11 : |||||||
wave-12 : ||||||||||||||||
wave-13 : ||||||
wave-14 : |||||||
wave-15 : |||||
wave-16 : |||||||||||||||||||||||||
wave-17 : ||||
wave-18 : |||||
wave-19 : ||||
wave-20 : ||||||||||
----------

sa: show-smoothed-similarity |wave-9>
----------
wave-1  : ||||||||||
wave-2  : ||||||||||
wave-3  : ||||||||||||||||
wave-4  : |||||||||
wave-5  : ||||||||
wave-6  : ||||||||||||||||
wave-7  : ||||||||
wave-8  : ||||||||
wave-9  : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-10 : |||||||
wave-11 : |||||||
wave-12 : ||||||||||||
wave-13 : ||||||
wave-14 : |||||
wave-15 : ||||||||||
wave-16 : ||||
wave-17 : ||||
wave-18 : |||||||||||||||||||||||||
wave-19 : ||||
wave-20 : ||||
----------

sa: show-smoothed-similarity |wave-10>
----------
wave-1  : |||||||||
wave-2  : ||||||||||
wave-3  : ||||||||
wave-4  : ||||||||||
wave-5  : |||||||||||||||||||||||||
wave-6  : ||||||||||
wave-7  : |||||||
wave-8  : ||||||||||
wave-9  : |||||||
wave-10 : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-11 : |||||||
wave-12 : ||||||||
wave-13 : |||||
wave-14 : |||||||
wave-15 : ||||||||||||||||
wave-16 : ||||||
wave-17 : ||||
wave-18 : ||||||
wave-19 : ||||
wave-20 : |||||||||||||||||||||||||
----------

sa: show-smoothed-similarity |wave-11>
----------
wave-1  : ||||||||
wave-2  : ||||||||
wave-3  : ||||||||
wave-4  : |||||||
wave-5  : |||||||
wave-6  : |||||||
wave-7  : |||||||
wave-8  : |||||||
wave-9  : |||||||
wave-10 : |||||||
wave-11 : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-12 : ||||||
wave-13 : |||||
wave-14 : |||||
wave-15 : |||||
wave-16 : ||||
wave-17 : ||||
wave-18 : ||||
wave-19 : ||||
wave-20 : ||||
----------

sa: show-smoothed-similarity |wave-12>
----------
wave-1  : ||||||||
wave-2  : ||||||||
wave-3  : ||||||||||||
wave-4  : ||||||||||||||||
wave-5  : ||||||
wave-6  : |||||||||||||||||||||||||
wave-7  : ||||||
wave-8  : ||||||||||||||||
wave-9  : ||||||||||||
wave-10 : ||||||||
wave-11 : ||||||
wave-12 : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-13 : ||||||
wave-14 : |||||||
wave-15 : ||||||||||
wave-16 : ||||||||||||
wave-17 : ||||
wave-18 : ||||||||||||||||
wave-19 : ||||
wave-20 : ||||||||||
----------
Which is roughly the same as the spike version, though wave-2 looks distinctly different, and wave-k for prime k have slightly higher similarity than in the spike version. But the thing we really want to test is non-integer waves. Here is |wave-9.75>:
sa: show-smoothed-spike-wave |wave-9.75>
----------
-1.25 :
-1.0  : |
-0.75 : ||||||||
-0.5  : |||||||||||||||||||||||
-0.25 : |||||||||||||||||||||||||||||||||||||||||
0.0   : ||||||||||||||||||||||||||||||||||||||||||||||||||
0.25  : |||||||||||||||||||||||||||||||||||||||||
0.5   : |||||||||||||||||||||||
0.75  : ||||||||
1.0   : |
1.25  :
1.5   :
1.75  :
2.0   :
2.25  :
2.5   :
2.75  :
3.0   :
3.25  :
3.5   :
3.75  :
4.0   :
4.25  :
4.5   :
4.75  :
5.0   :
5.25  :
5.5   :
5.75  :
6.0   :
6.25  :
6.5   :
6.75  :
7.0   :
7.25  :
7.5   :
7.75  :
8.0   :
8.25  :
8.5   :
8.75  : |
9.0   : ||||||||
9.25  : |||||||||||||||||||||||
9.5   : |||||||||||||||||||||||||||||||||||||||||
9.75  : ||||||||||||||||||||||||||||||||||||||||||||||||||
----------

sa: show-smoothed-similarity |wave-9.75>
----------
wave-1    : ||||||||||
wave-2    : |||||||||
wave-3    : ||||||||
wave-4    : ||||||||
wave-5    : ||||||||
wave-6    : |||||||
wave-7    : |||||||
wave-8    : |||||||
wave-9    : |||||||
wave-9.75 : ||||||||||||||||||||||||||||||||||||||||||||||||||
wave-10   : ||||||
wave-11   : ||||||
wave-12   : ||||||
wave-13   : ||||||||||||
wave-14   : |||||
wave-15   : |||||
wave-16   : ||||
wave-17   : ||||
wave-18   : ||||
wave-19   : ||||
wave-20   : ||||
----------
vs the spike similarity:
sa: show-similarity |wave-9.75>
----------
wave-9.75 : ||||||||||||||||||||||||||||||||||||||||||||||||||
----------
Which is pretty much my point. The smoothed spike transform is more tolerant of the spikes not being at exact integer spacings, and for that reason it should be more useful in practice, because brains are not expected to be ultra precise.
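For a rough numeric check of that claim, here is a plain-Python sketch that compares an integer spike wave with the same wave shifted a quarter step off the integers, with and without smoothing. It uses a simple rescaled-overlap measure as a stand-in for similar-input, so the numbers are only indicative and none of this is the project's actual code:
from collections import defaultdict

def smooth(sp, dx):
    # f(x) -> f(x - dx)/4 + f(x)/2 + f(x + dx)/4
    out = defaultdict(float)
    for x, c in sp.items():
        out[round(x - dx, 10)] += c / 4
        out[round(x, 10)]      += c / 2
        out[round(x + dx, 10)] += c / 4
    return dict(out)

def similarity(f, g):
    # rescaled overlap: sum of pointwise minimums, divided by the larger total
    overlap = sum(min(f.get(x, 0.0), g.get(x, 0.0)) for x in set(f) | set(g))
    return overlap / max(sum(f.values()), sum(g.values()))

def spike_wave(step, offset=0.0, end=100):
    return {round(offset + step * i, 10): 1.0 for i in range(int((end - offset) / step) + 1)}

def smooth_n(wave, dx=0.25, n=5):
    for _ in range(n):
        wave = smooth(wave, dx)
    return wave

w1       = spike_wave(1)                 # spikes exactly on the integers
w1_shift = spike_wave(1, offset=0.25)    # the same wave, shifted off the integers

print(similarity(w1, w1_shift))                       # plain spikes: 0.0, no positions in common
print(similarity(smooth_n(w1), smooth_n(w1_shift)))   # smoothed spikes: clearly non-zero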

BTW, we have two parameters we can tweak. Recall the smooth spike operator:
  smooth-spike-op |*> #=> smooth[0.25]^5 spikes |_self>
In particular, note the smooth[dx]^k term. We can tweak both dx and k, with the result that we can tweak the shape/width of our Gaussian spikes. For example, keeping track of music would require narrower spikes than other contexts.