Tuesday 22 November 2016

generating random grammatically correct sentences

In the last post we looked at generating a short grammatically correct sentence, in a proposed brain-like way. The central idea was to represent our sentences using only classes and sequences. It's classes and sequences all the way down (not turtles!). In this post we extend this, and introduce a clean minimalist notation to represent these sequences and classes, and a "compiler" of sorts that converts this notation back to BKO. I guess with the implication that BKO could be considered a sort of assembly language for the brain.

Now on to this new notation (which is somewhat similar to BNF). We have these foundational objects:
{}                     -- the empty sequence
A                      -- a sequence of length one
A.B.C                  -- a sequence
{A, B, C}              -- a class
A = {B, C.D.E, F, G.H} -- definition of a class of sequences
I = A.B.C.D            -- definition of a sequence of classes
And that is pretty much it! Perhaps it would help to show how these map back to BKO:
-- the empty sequence:
pattern |node 0: 0> => random-column[10] encode |end of sequence>

-- a sequence of length one:
-- boy
pattern |node 5: 0> => random-column[10] encode |boy>
then |node 5: 0> => random-column[10] encode |end of sequence>

-- a sequence of length three:
-- the . woman . saw
pattern |node 1: 0> => random-column[10] encode |the>
then |node 1: 0> => random-column[10] encode |woman>

pattern |node 1: 1> => then |node 1: 0>
then |node 1: 1> => random-column[10] encode |saw>

pattern |node 1: 2> => then |node 1: 1>
then |node 1: 2> => random-column[10] encode |end of sequence>

-- a sequence of classes:
-- L = A.K.B
pattern |node 20: 0> => random-column[10] encode |A>
then |node 20: 0> => random-column[10] encode |K>

pattern |node 20: 1> => then |node 20: 0>
then |node 20: 1> => random-column[10] encode |B>

pattern |node 20: 2> => then |node 20: 1>
then |node 20: 2> => random-column[10] encode |end of sequence>

-- a class of one sequence:
-- A = {the.woman.saw}
start-node |A: 0> => pattern |node 1: 0>

-- a class of three sequences:
-- E = {{}, old, other}
start-node |E: 0> => pattern |node 0: 0>
start-node |E: 1> => pattern |node 6: 0>
start-node |E: 2> => pattern |node 7: 0>
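To make this mapping concrete, here is a rough python sketch (not the actual gm2sw.py; the function name and node numbering are just for illustration) of how a single dot-separated sequence could be turned into the pattern/then learn rules shown above:
def emit_sequence_rules(node_id, sequence):
  # sequence is a dot-separated string, eg "the.woman.saw"
  # append the end-of-sequence marker, then emit one pattern/then pair per step
  symbols = [s for s in sequence.split('.') if s] + ['end of sequence']
  lines = []
  for i, (current, next_symbol) in enumerate(zip(symbols, symbols[1:])):
    if i == 0:
      lines.append("pattern |node %s: 0> => random-column[10] encode |%s>" % (node_id, current))
    else:
      lines.append("pattern |node %s: %s> => then |node %s: %s>" % (node_id, i, node_id, i - 1))
    lines.append("then |node %s: %s> => random-column[10] encode |%s>" % (node_id, i, next_symbol))
    lines.append("")
  return "\n".join(lines)

print(emit_sequence_rules(1, "the.woman.saw"))
Running that reproduces the "the . woman . saw" rules above.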
With that in place, we can consider our first sentence:
$ cat gm-examples/first-sentence.gm
A = {the}
B = {{}, old, other}
C = {man, woman, lady}
D = {{}, young}
E = {child}
F = {youngest, eldest}
G = {child, sibling}
H = {{}, on.the.hill, also}
I = {used.a.telescope}

J = B.C
K = D.E
L = F.G

M = {J, K, L}

N = A.M.H.I
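As an aside, by my count (multiplying class sizes along each sequence, and summing over the alternatives in class M), this little grammar already covers a fair number of sentences:
|N| = |A| * (|J| + |K| + |L|) * |H| * |I|
    = 1 * (3*3 + 2*1 + 2*2) * 3 * 1
    = 45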
Then we compile this back to BKO using gm2sw.py:
$ ./gm2sw.py gm-examples/first-sentence.gm > sw-examples/first-sentence.sw
Load it up in the console:
$ ./the_semantic_db_console.py
Welcome!

-- switch off displaying "info" messages:
sa: info off

-- load our sentence:
sa: load first-sentence.sw

-- find available "sentences":
sa: rel-kets[sentence]
|J> + |K> + |L> + |N>

-- recall the "N" sentence:
sa: recall-sentence sentence |N>
|the>
|man>
|used>
|a>
|telescope>
|end of sequence>

-- and again:
sa: .
|the>
|young>
|child>
|also>
|used>
|a>
|telescope>
|end of sequence>

-- and again:
sa: .
|the>
|old>
|woman>
|used>
|a>
|telescope>
|end of sequence>
Now for a slightly more interesting sentence:
$ cat gm-examples/the-woman-saw.gm
A = {the.woman.saw}
B = {through.the.telescope}
C = {{}, young}
D = {girl, boy}
E = {{}, old, other}
F = {man, woman, lady}
G = E.F
H = {the}
I = H.C.D
J = H.E.F
K = {{},I,J}

L = A.K.B

M = {I,J}
N = {saw}
O = M.N.K.B

P = {through.the}
Q = {telescope, binoculars, night.vision.goggles}

R = M.N.K.P.Q
Compile and load it up:
$ ./gm2sw.py gm-examples/the-woman-saw.gm > sw-examples/the-woman-saw.sw
$ ./the_semantic_db_console.py
sa: load the-woman-saw.sw
sa: rel-kets[sentence]
|G> + |I> + |J> + |L> + |O> + |R>

sa: recall-sentence sentence |R>
|the>
|boy>
|saw>
|the>
|old>
|woman>
|through>
|the>
|telescope>
|end of sequence>

sa: .
|the>
|lady>
|saw>
|the>
|old>
|woman>
|through>
|the>
|binoculars>
|end of sequence>

sa: .
|the>
|old>
|man>
|saw>
|through>
|the>
|night>
|vision>
|goggles>
|end of sequence>

sa: .
|the>
|woman>
|saw>
|the>
|young>
|boy>
|through>
|the>
|binoculars>
|end of sequence>

sa: .
|the>
|girl>
|saw>
|through>
|the>
|telescope>
|end of sequence>
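Note that some of these recalls have no object at all, eg "the girl saw through the telescope", which is the empty-sequence member of class K being chosen. And by my count, using the same multiply-and-sum rule as before, this grammar covers:
|R| = (|I| + |J|) * |N| * (1 + |I| + |J|) * |P| * |Q|
    = (4 + 9) * 1 * (1 + 4 + 9) * 1 * 3
    = 546
distinct sentences, from not many more lines of gm notation.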
While we have this knowledge loaded, we can also do things like randomly walk individual sub-elements of our full sentences, ie pick a random stored pattern and recall the sequence from that point:
sa: recall-sentence pattern pick-elt rel-kets[pattern]
|the>
|man>
|end of sequence>

sa: .
|other>
|end of sequence>

sa: .
|boy>
|end of sequence>

sa: .
|girl>
|end of sequence>

sa: .
|saw>
|through>
|the>
|night>
|vision>
|goggles>
|end of sequence>

sa: .
|binoculars>
|end of sequence>

sa: .
|lady>
|end of sequence>

sa: .
|telescope>
|end of sequence>

sa: .
|saw>
|the>
|young>
|girl>
|through>
|the>
|telescope>
|end of sequence>
So at this point it might be a bit opaque how recall-sentence unpacks our stored sentences. Essentially it walks the given sentence, ie sequence, and if an element in that sequence is a class (ie, has a start-node defined), it recursively walks that sub-sequence, otherwise it prints the element name. For example, recall this knowledge and consider the high level sequence R:
$ cat gm-examples/the-woman-saw.gm
A = {the.woman.saw}
B = {through.the.telescope}
C = {{}, young}
D = {girl, boy}
E = {{}, old, other}
F = {man, woman, lady}
G = E.F
H = {the}
I = H.C.D
J = H.E.F
K = {{},I,J}

L = A.K.B

M = {I,J}
N = {saw}
O = M.N.K.B

P = {through.the}
Q = {telescope, binoculars, night.vision.goggles}

R = M.N.K.P.Q
So if we walk the R sequence, with no recursion, we have:
sa: follow-sequence sentence |R>
|M>
|N>
|K>
|P>
|Q>
|end of sequence>
But each of these elements is itself a class. Here are the sequences in the M, N and K classes:
sa: follow-sequence start-node |M: 0>
|H>
|C>
|D>
|end of sequence>

sa: follow-sequence start-node |M: 1>
|H>
|E>
|F>
|end of sequence>

sa: follow-sequence start-node |N: 0>
|saw>
|end of sequence>

sa: follow-sequence start-node |K: 0>
|end of sequence>

sa: follow-sequence start-node |K: 1>
|H>
|C>
|D>
|end of sequence>

sa: follow-sequence start-node |K: 2>
|H>
|E>
|F>
|end of sequence>
And if a class contains more than one member, the sub-sequence to recursively walk is chosen randomly. And so on, until you have objects with no start-nodes, ie low level sequences. Heh. I don't know if that explanation helped. This is the full python that defines the recall-sentence operator:
# Usage:
# load sentence-sequence--multi-layer.sw 
# print-sentence |*> #=> recall-sentence pattern |_self>
# print-sentence |node 200: 1>
#
# one is a sp
def recall_sentence(one,context):
  if len(one) == 0:
    return one
  current_node = one
    
  def next(one):                                                      # step from the current SDR to the SDR of the next element in the sequence
    return one.similar_input(context,"pattern").select_range(1,1).apply_sigmoid(clean).apply_op(context,"then")

  def name(one):                                                      # map the current SDR back to the symbol it encodes
    return one.apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)

  def has_start_node(one):                                            # check if one is a class
    two = ket(one.the_label() + ": ")                                 
    return len(two.apply_fn(starts_with,context).select_range(1,1).apply_op(context,"start-node")) > 0

  def get_start_node(one):
    two = ket(one.the_label() + ": ")
    return two.apply_fn(starts_with,context).pick_elt().apply_op(context,"start-node")        
   
  while name(current_node).the_label() != "end of sequence":
    if not has_start_node(name(current_node)):
      print(name(current_node))
    else:
      start_node = get_start_node(name(current_node))
      recall_sentence(start_node, context)       
    current_node = next(current_node)
  return ket("end of sequence")
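If it helps, here is a self-contained plain python analogue of the same idea (just a sketch, independent of the BKO engine, using a trimmed-down version of first-sentence.gm): walk a sequence, and whenever an element names a class, pick one of its member sequences at random and recurse, otherwise print the element.
import random

# each class maps to a list of member sequences (a trimmed first-sentence.gm)
classes = {
  'A': [['the']],
  'B': [[], ['old'], ['other']],
  'C': [['man'], ['woman'], ['lady']],
  'H': [[], ['on', 'the', 'hill'], ['also']],
  'I': [['used', 'a', 'telescope']],
  'J': [['B', 'C']],
  'N': [['A', 'J', 'H', 'I']],
}

def recall(symbol):
  if symbol not in classes:                       # a low level symbol: just print it
    print(symbol)
    return
  for element in random.choice(classes[symbol]):  # pick a member sequence at random, then recurse
    recall(element)

recall('N')   # eg prints: the, old, woman, also, used, a, telescope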
Now, just for fun we can visualize our sentence structure, which is essentially a complex network, using our sw2dot code.
$ ./the_semantic_db_console.py
sa: load the-woman-saw.sw
sa: save the-woman-saw--saved.sw
sa: q

$ grep -v "^full" sw-examples/the-woman-saw--saved.sw | grep -v "^support" > sw-examples/the-woman-saw--tidy.sw

$ ./sw2dot-v2.py sw-examples/the-woman-saw--tidy.sw
Open that in graphviz, using neato, to see the resulting network.
Now some notes:
1) Because of the recursive nature of the recall-sentence operator it should, barring a bug, handle multiple levels of sequences and classes, in contrast with the simpler example in the last post that was restricted to one level of classes and sequences. This potentially allows for very complex structures, and certainly longer text than single sentences.
2) Even with our short-cut notation, defining sentences is still somewhat hard work. The eventual goal is for them to be learnt automatically. That is a hard task, but having a sentence representation is at least a useful step in that direction.
3) So far our classes and sequences have been small. I suspect classes will always remain small, as grammar has strict rules that seem to require small classes. Sequences, on the other hand, I don't know. Presumably larger structures than single sentences would need longer sequences, but the fact that the brain uses chunking hints that those sequences can't be too long. So instead of a large structure using long sequences, it would use more levels of shorter sequences. Which is essentially what chunking does. Indeed, here is our chunked sequences example in our new gm notation:
$ cat gm-examples/alphabet-pi.gm
a1 = {A.B.C}
a2 = {D.E.F}
a3 = {G.H.I}
a4 = {J.K.L}
a5 = {M.N.O}
a6 = {P.Q.R}
a7 = {S.T.U}
a8 = {V.W.X}
a9 = {Y.Z}

alphabet = a1.a2.a3.a4.a5.a6.a7.a8.a9

p1 = {3.1.4}
p2 = {1.5}
p3 = {9.2}
p4 = {6.5}
p5 = {3.5}
p6 = {8.9}

pi = p1.p2.p3.p4.p5.p6
4) What other objects can we represent, other than grammatical sentences, using just classes and sequences? Something I have been thinking about for a long time now is, how would you represent the knowledge stored in a mathematician's head? My project is claiming to be about knowledge representation, right, so why not mathematics? I don't know, but I suspect we won't have an artificial mathematician until well after we have a full AGI.
5) The other side of that is, what can't we represent using just classes and sequences? I don't know yet. But certainly long range structure might be part of that: a random choice at the start of a sentence sometimes has an impact on what is valid later in that sentence, and I don't think we can represent that. And that leads to my last point. Fixed classes and random choice are just the first step. In a brain, the set of available classes to compose your sentences from is dynamic, always changing, and if you want to say anything meaningful, your choices of how to unpack a sentence are the opposite of random.
6) Approximately how many neurons in our "the-woman-saw.gm" example? Well, we have:
sa: how-many rel-kets[pattern]
|number: 44>

sa: how-many starts-with |node >
|number: 44>

sa: how-many rel-kets[start-node]
|number: 23>
So roughly 67 neurons. Though that doesn't count the neurons needed to recall the sentences, ie the work done by our python recall-sentence operator.

Monday 21 November 2016

learning and recalling a simple sentence

In this post we are going to use HTM-inspired sequences to learn a short and simple, grammatically correct, sentence. This is a nice follow on from learning to spell, and recalling chunked sequences. The key idea is that the brain stores sentences as sequences of classes, and when we recall a sentence we unpack that structure. So how do we implement this? Well, we can easily represent sequences, as seen in previous posts, and classes are simple enough. So the hard bit becomes finding an operator that can recall the sentence.

Let's start with this "sentence", or sequence of classes (dots are our short-hand notation for sequences):
A . X . B . Y . C
where we have these classes:
A = {the}
B = {man, woman, lady}
C = {used a telescope}
X = {{}, old, other}
Y = {{}, on the hill, also}
And that is enough to generate a bunch of grammatically correct sentences, by picking randomly from each class at each step in the sequence, noting that {} is the empty sequence. How many sentences? Just multiply the sizes of the classes:
|A|*|X|*|B|*|Y|*|C| = 1*3*3*3*1 = 27
Now on to the code. First up, we need to encode the objects we intend to use in our sequences. Again, our encode SDRs are just 10 random bits on out of 2048 total:
full |range> => range(|1>,|2048>)
encode |end of sequence> => pick[10] full |range>

-- encode words:
encode |old> => pick[10] full |range>
encode |other> => pick[10] full |range>
encode |on> => pick[10] full |range>
encode |the> => pick[10] full |range>
encode |hill> => pick[10] full |range>
encode |also> => pick[10] full |range>
encode |the> => pick[10] full |range>
encode |man> => pick[10] full |range>
encode |used> => pick[10] full |range>
encode |a> => pick[10] full |range>
encode |telescope> => pick[10] full |range>
encode |woman> => pick[10] full |range>
encode |lady> => pick[10] full |range>

-- encode classes:
encode |A> => pick[10] full |range>
encode |B> => pick[10] full |range>
encode |C> => pick[10] full |range>
encode |X> => pick[10] full |range>
encode |Y> => pick[10] full |range>
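The encode step is really just assigning each symbol a fixed, sparse, random bit pattern. A minimal python sketch of that idea (just an illustration, not part of the project code):
import random

TOTAL_BITS = 2048   # size of each SDR
ON_BITS = 10        # number of randomly chosen on bits per symbol

encodings = {}

def encode(symbol):
  # each symbol gets, and keeps, a fixed random set of 10 on bits out of 2048
  if symbol not in encodings:
    encodings[symbol] = frozenset(random.sample(range(TOTAL_BITS), ON_BITS))
  return encodings[symbol]

# two such random encodings almost never share a bit, so distinct symbols
# get nearly orthogonal representations, while repeats match exactly
print(len(encode('man') & encode('woman')))   # almost always 0
print(len(encode('man') & encode('man')))     # always 10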
Next, define our low level sequences of words, though most of them are sequences of length one:
-- empty sequence
pattern |node 1: 1> => append-column[10] encode |end of sequence>

-- old
pattern |node 2: 1> => random-column[10] encode |old>
then |node 2: 1> => append-column[10] encode |end of sequence>

-- other
pattern |node 3: 1> => random-column[10] encode |other>
then |node 3: 1> => append-column[10] encode |end of sequence>

-- on, the, hill
pattern |node 4: 1> => random-column[10] encode |on>
then |node 4: 1> => random-column[10] encode |the>

pattern |node 4: 2> => then |node 4: 1>
then |node 4: 2> => random-column[10] encode |hill>

pattern |node 4: 3> => then |node 4: 2>
then |node 4: 3> => append-column[10] encode |end of sequence>

-- also
pattern |node 5: 1> => random-column[10] encode |also>
then |node 5: 1> => append-column[10] encode |end of sequence>


-- the
pattern |node 6: 1> => random-column[10] encode |the>
then |node 6: 1> => append-column[10] encode |end of sequence>

-- man
pattern |node 7: 1> => random-column[10] encode |man>
then |node 7: 1> => append-column[10] encode |end of sequence>

-- used, a, telescope
pattern |node 8: 1> => random-column[10] encode |used>
then |node 8: 1> => random-column[10] encode |a>

pattern |node 8: 2> => then |node 8: 1>
then |node 8: 2> => random-column[10] encode |telescope>

pattern |node 8: 3> => then |node 8: 2>
then |node 8: 3> => append-column[10] encode |end of sequence>

-- woman
pattern |node 9: 1> => random-column[10] encode |woman>
then |node 9: 1> => append-column[10] encode |end of sequence>

-- lady
pattern |node 10: 1> => random-column[10] encode |lady>
then |node 10: 1> => append-column[10] encode |end of sequence>
Here is the easiest bit, representing the word classes:
-- X: {{}, old, other}
start-node |X: 1> => pattern |node 1: 1>
start-node |X: 2> => pattern |node 2: 1>
start-node |X: 3> => pattern |node 3: 1>

-- Y: {{}, on the hill, also}
start-node |Y: 1> => pattern |node 1: 1>
start-node |Y: 2> => pattern |node 4: 1>
start-node |Y: 3> => pattern |node 5: 1>

-- A: {the}
start-node |A: 1> => pattern |node 6: 1>

-- B: {man,woman,lady}
start-node |B: 1> => pattern |node 7: 1>
start-node |B: 2> => pattern |node 9: 1>
start-node |B: 3> => pattern |node 10: 1>

-- C: {used a telescope}
start-node |C: 1> => pattern |node 8: 1>
Finally, we need to define our sentence "A . X . B . Y . C", ie our sequence of classes:
-- A, X, B, Y, C
pattern |node 20: 1> => random-column[10] encode |A>
then |node 20: 1> => random-column[10] encode |X>

pattern |node 20: 2> => then |node 20: 1>
then |node 20: 2> => random-column[10] encode |B>

pattern |node 20: 3> => then |node 20: 2>
then |node 20: 3> => random-column[10] encode |Y>

pattern |node 20: 4> => then |node 20: 3>
then |node 20: 4> => random-column[10] encode |C>

pattern |node 20: 5> => then |node 20: 4>
then |node 20: 5> => append-column[10] encode |end of sequence>
And that's it. We have learnt a simple sentence in a proposed brain-like way, just using sequences and classes. For the recall stage we need to define an appropriate operator. With some thinking we have this python:
# one is a sp
def follow_sequence(one,context,op=None):
  if len(one) == 0:
    return one
    
  def next(one):
    return one.similar_input(context,"pattern").select_range(1,1).apply_sigmoid(clean).apply_op(context,"then")
  def name(one):
    return one.apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)    
    
  current_node = one  
  while name(current_node).the_label() != "end of sequence":
    if op is None:
      print(name(current_node))      
    else:
      name(current_node).apply_op(context,op)
    current_node = next(current_node)
  return ket("end of sequence")
And these operator definitions:
-- operators:
append-colon |*> #=> merge-labels(|_self> + |: >)
random-class-sequence |*> #=> follow-sequence start-node pick-elt starts-with append-colon |_self>
random-sequence |*> #=> follow-sequence start-node pick-elt rel-kets[start-node] |>
print-sentence |*> #=> follow-sequence[random-class-sequence] pattern |_self>
We can now recall our sentence:
$ ./the_semantic_db_console.py
Welcome!

sa: load sentence-sequence.sw
sa: info off
sa: print-sentence |node 20: 1>
|the>
|old>
|woman>
|used>
|a>
|telescope>
|end of sequence>

sa: .
|the>
|man>
|also>
|used>
|a>
|telescope>
|end of sequence>

sa: .
|the>
|old>
|man>
|on>
|the>
|hill>
|used>
|a>
|telescope>
|end of sequence>
And that's it. We now have a structure in place that we can easily copy and reuse for other sentences. The hard part is typing it up, and I have an idea how to help with that. The eventual goal would be for it to be fully automatic, but that will be difficult. For example, given this set of sentences:
"the man used a telescope"
"the woman used a telescope"
"the lady used a telescope"
"the old man also used a telescope"
"the other man on the hill used a telescope"
It feels plausible that that is enough information to learn the above classes and sequences. Some kind of sequence intersection, it seems to me. And if that were the case, it shows the power of grammatical structure: 5 sentences would be enough to generate 27 daughter sentences. For any real world example, the number of daughter sentences would be huge.

Next post, a more complicated sentence, with several levels of sequences and classes.

Saturday 5 November 2016

learning and recalling chunked sequences

So, it is very common (universal?) for people to chunk difficult-to-recall, or long, sequences. Perhaps a password, the alphabet, or digits of pi. So I thought it would be useful to implement this idea in my notation, and as a sort of extension to learning sequences in my last post. The idea is simple enough: instead of learning a single long sequence, break the sequence into chunks, and then learn their respective sub-sequences. Here is how my brain chunks the alphabet and pi, though other people will have different chunking sizes: (ABC)(DEF)(GHI)... and (3.14)(15)(92)(65)(35)..., giving this collection of sequences:
alphabet: ABC, DEF, GHI, ...
ABC: A, B, C
DEF: D, E, F
GHI: G, H, I
...

pi: 3.14, 15, 92, 65, 35, ...
3.14: 3, ., 1, 4,
15: 1, 5
92: 9, 2
65: 6, 5
35: 3, 5
...
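In the code below the chunking is done with a constant chunk size of 3, and a trivial python helper (just an illustration, not the actual generation code) shows what that looks like:
def chunk(sequence, size=3):
  # break a sequence into fixed-size chunks
  return [sequence[i:i + size] for i in range(0, len(sequence), size)]

print(chunk("ABCDEFGHIJKLMNOPQRSTUVWXYZ"))
# ['ABC', 'DEF', 'GHI', 'JKL', 'MNO', 'PQR', 'STU', 'VWX', 'YZ']

print(chunk("3.14159265358979323846"))
# ['3.1', '415', '926', '535', '897', '932', '384', '6']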
Given we already know how to learn sequences, this is easy to learn. Here is the code (using a constant chunk size of 3), here is the knowledge before learning, and after. I guess I should show a little of what that looks like. First, the random encode stage (though in other uses it would be preferable to use a more semantic encoding, where similar objects have similar encodings):
full |range> => range(|1>,|2048>)
encode |end of sequence> => pick[10] full |range>
encode |A> => pick[10] full |range>
encode |B> => pick[10] full |range>
encode |C> => pick[10] full |range>
encode |D> => pick[10] full |range>
encode |E> => pick[10] full |range>
encode |F> => pick[10] full |range>
encode |G> => pick[10] full |range>
encode |H> => pick[10] full |range>
encode |I> => pick[10] full |range>
encode |J> => pick[10] full |range>
encode |K> => pick[10] full |range>
encode |L> => pick[10] full |range>
encode |M> => pick[10] full |range>
encode |N> => pick[10] full |range>
encode |O> => pick[10] full |range>
encode |P> => pick[10] full |range>
encode |Q> => pick[10] full |range>
encode |R> => pick[10] full |range>
encode |S> => pick[10] full |range>
encode |T> => pick[10] full |range>
encode |U> => pick[10] full |range>
encode |V> => pick[10] full |range>
encode |W> => pick[10] full |range>
encode |X> => pick[10] full |range>
encode |Y> => pick[10] full |range>
encode |Z> => pick[10] full |range>
encode |A B C> => pick[10] full |range>
encode |D E F> => pick[10] full |range>
encode |G H I> => pick[10] full |range>
encode |J K L> => pick[10] full |range>
encode |M N O> => pick[10] full |range>
encode |P Q R> => pick[10] full |range>
encode |S T U> => pick[10] full |range>
encode |V W X> => pick[10] full |range>
encode |Y Z> => pick[10] full |range>
encode |3> => pick[10] full |range>
encode |.> => pick[10] full |range>
encode |1> => pick[10] full |range>
encode |4> => pick[10] full |range>
encode |5> => pick[10] full |range>
encode |9> => pick[10] full |range>
encode |2> => pick[10] full |range>
encode |6> => pick[10] full |range>
encode |8> => pick[10] full |range>
encode |7> => pick[10] full |range>
encode |3 . 1> => pick[10] full |range>
encode |4 1 5> => pick[10] full |range>
encode |9 2 6> => pick[10] full |range>
encode |5 3 5> => pick[10] full |range>
encode |8 9 7> => pick[10] full |range>
encode |9 3 2> => pick[10] full |range>
encode |3 8 4> => pick[10] full |range>
The main thing to note here is that we are not just learning encodings for single symbols, eg |A> or |3>, but also for chunks of symbols, eg |A B C> and |3 . 1>. And in general, we can do similar encodings for anything we want to stuff into a ket. Once we have encodings for our objects we can learn their sequences. Here are a couple of them:
-- alphabet
-- A B C, D E F, G H I, J K L, M N O, P Q R, S T U, V W X, Y Z
start-node |alphabet> => random-column[10] encode |A B C>
pattern |node 0: 0> => start-node |alphabet>
then |node 0: 0> => random-column[10] encode |D E F>

pattern |node 0: 1> => then |node 0: 0>
then |node 0: 1> => random-column[10] encode |G H I>

pattern |node 0: 2> => then |node 0: 1>
then |node 0: 2> => random-column[10] encode |J K L>

pattern |node 0: 3> => then |node 0: 2>
then |node 0: 3> => random-column[10] encode |M N O>

pattern |node 0: 4> => then |node 0: 3>
then |node 0: 4> => random-column[10] encode |P Q R>

pattern |node 0: 5> => then |node 0: 4>
then |node 0: 5> => random-column[10] encode |S T U>

pattern |node 0: 6> => then |node 0: 5>
then |node 0: 6> => random-column[10] encode |V W X>

pattern |node 0: 7> => then |node 0: 6>
then |node 0: 7> => random-column[10] encode |Y Z>

pattern |node 0: 8> => then |node 0: 7>
then |node 0: 8> => append-column[10] encode |end of sequence>


-- A B C
-- A, B, C
start-node |A B C> => random-column[10] encode |A>
pattern |node 1: 0> => start-node |A B C>
then |node 1: 0> => random-column[10] encode |B>

pattern |node 1: 1> => then |node 1: 0>
then |node 1: 1> => random-column[10] encode |C>

pattern |node 1: 2> => then |node 1: 1>
then |node 1: 2> => append-column[10] encode |end of sequence>


-- D E F
-- D, E, F
start-node |D E F> => random-column[10] encode |D>
pattern |node 2: 0> => start-node |D E F>
then |node 2: 0> => random-column[10] encode |E>

pattern |node 2: 1> => then |node 2: 0>
then |node 2: 1> => random-column[10] encode |F>

pattern |node 2: 2> => then |node 2: 1>
then |node 2: 2> => append-column[10] encode |end of sequence>

...
where we see both the high level sequence of the alphabet chunks (ABC)(DEF)..., and the lower level sequences of single letters A, B, C and D, E, F. The pi sequence has identical structure, so I'll omit it. For the curious, see the pre-learning sw file.

That's the learn stage taken care of. Now for the bit that took a little more work: code that recalls sequences, no matter how many layers deep. Though so far I've only tested it on a two-layer system. Here is the pseudo code:
  next (*) #=> then clean select[1,1] similar-input[pattern] |_self>
  name (*) #=> clean select[1,1] similar-input[encode] extract-category |_self>

  print-sequence |*> #=>
    if not do-you-know start-node |_self>:
      return |_self>
    if name start-node |_self> == |_self>:                    -- prevent infinite loop when an object is its own sequence
      print |_self>
      return |>
    |node> => new-GUID |>
    current "" |node> => start-node |_self>
    while name current "" |node> != |end of sequence>:
      if not do-you-know start-node name current "" |node>:
        print name current "" |node>
      else:
        print-sequence name current "" |node>
      current "" |node> => next current "" |node>
    return |end of sequence>
And the corresponding python:
def new_print_sequence(one,context,start_node=None):
  if start_node is None:                                          # so we can change the operator name that links to the first element in the sequence.
    start_node = "start-node"
  if len(one.apply_op(context,start_node)) == 0:                  # if we don't know the start-node, return the input ket
    return one
  print("print sequence:",one)

  def next(one):
    return one.similar_input(context,"pattern").select_range(1,1).apply_sigmoid(clean).apply_op(context,"then")
  def name(one):
    return one.apply_fn(extract_category).similar_input(context,"encode").select_range(1,1).apply_sigmoid(clean)
    
  if name(one.apply_op(context,start_node)).the_label() == one.the_label():
    print(one)                                                               # prevent infinite loop when an object is its own sequence. Maybe should have handled at learn stage, not recall?
    return ket("")  
  current_node = one.apply_op(context,start_node)  
  while name(current_node).the_label() != "end of sequence":
    if len(name(current_node).apply_op(context,start_node)) == 0:
      print(name(current_node))      
    else:
      new_print_sequence(name(current_node),context,start_node)
    current_node = next(current_node)
  return ket("end of sequence")
And finally, put it to use:
$ ./the_semantic_db_console.py
Welcome!

sa: load chunked-alphabet-pi.sw
sa: new-print-sequence |alphabet>
print sequence: |alphabet>
print sequence: |A B C>
|A>
|B>
|C>
print sequence: |D E F>
|D>
|E>
|F>
print sequence: |G H I>
|G>
|H>
|I>
print sequence: |J K L>
|J>
|K>
|L>
print sequence: |M N O>
|M>
|N>
|O>
print sequence: |P Q R>
|P>
|Q>
|R>
print sequence: |S T U>
|S>
|T>
|U>
print sequence: |V W X>
|V>
|W>
|X>
print sequence: |Y Z>
|Y>
|Z>
|end of sequence>

sa: new-print-sequence |pi>
print sequence: |pi>
print sequence: |3 . 1>
|3>
|.>
|1>
print sequence: |4 1 5>
|4>
|1>
|5>
print sequence: |9 2 6>
|9>
|2>
print sequence: |6>
|6>
print sequence: |5 3 5>
|5>
|3>
|5>
print sequence: |8 9 7>
|8>
|9>
|7>
print sequence: |9 3 2>
|9>
|3>
|2>
print sequence: |3 8 4>
|3>
|8>
|4>
print sequence: |6>
|6>
|end of sequence>
And we can print individual sub-sequences:
sa: new-print-sequence |D E F>
print sequence: |D E F>
|D>
|E>
|F>
|end of sequence>

sa: new-print-sequence |Y Z>
print sequence: |Y Z>
|Y>
|Z>
|end of sequence>

sa: new-print-sequence |8 9 7>
print sequence: |8 9 7>
|8>
|9>
|7>
|end of sequence>
Some notes:
1) There are of course other ways to implement learning and recalling chunked sequences. In my implementation above, when a subsequence hits an "end of sequence" it escapes from the while loop, and the high level sequence resumes. But an alternative would be for the end of, say, the |8 9 7> subsequence to link back to the parent pi sequence, and then resume that sequence. In which case we would have this:
sa: new-print-sequence |8 9 7>
print sequence: |8 9 7>
|8>
|9>
|7>
print sequence: |9 3 2>
|9>
|3>
|2>
print sequence: |3 8 4>
|3>
|8>
|4>
print sequence: |6>
|6>
|end of sequence>
So, does |8 9 7> live as an independent sequence with no link to the parent sequence, or does the final |7> link back to the pi sequence? I don't know for sure, but I suspect it is independent, because consider the case where |8 9 7> is in multiple high level sequences. The |7> wouldn't know where to link back to.
2) For a long time now I have had a similarity metric called simm, which returns the similarity of superpositions (1 for exact match, 0 for disjoint, values in between otherwise). But I have so far failed to implement a decent simm for sequences (aside from mapping strings to ngrams, and then running simm on that). I now suspect/hope chunking of sequences might be a key part.
3) Presumably the chunking-of-sequences structure is used by the brain for more than just difficult passwords, eg perhaps grammar. It seems likely to me that if a structure is used somewhere by the brain, then it is used in many other places too. Ie, if a structure is good, then reuse it.