Tuesday 16 February 2016

towards a definition of intelligence

So, have we done enough work that we can now make a reasonable guess at a definition of intelligence? Let's see. In my travels I have seen one definition along these lines: an intelligent agent, given its current situation, will manipulate things so as to maximize the number of potential future states. So if such an agent is stuck in a valley, it will climb to the top of the hill to maximize its potential pathways.

Mathematically, and roughly (simplified to 1 dimension):
  F = dV(x)/dx
where V(x) is the height of the landscape at position x, and F is the direction you want to head (ie, uphill, if you are maximizing your options).
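In toy Python (the details here are mine, purely for illustration), that definition looks something like gradient ascent: numerically estimate dV/dx and keep stepping uphill:

  # a minimal sketch: climb the landscape V(x) by following F = dV/dx
  def hill_climb(V, x, dx=0.01, steps=1000):
      for _ in range(steps):
          F = (V(x + dx) - V(x - dx)) / (2 * dx)   # numerical dV/dx
          x += 0.1 * F                             # step in the uphill direction
      return x

  # example: a single hill with its top at x = 3
  print(hill_climb(lambda x: -(x - 3)**2, x=0.0))  # -> approximately 3.0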

I have an alternate definition:
1) given the agent's current state (as represented by some superposition), find a pathway (as represented by some operator sequence) to its desired state (again, represented by some superposition). The quicker the agent can do this, and the shorter the pathway, the more intelligence points we give that agent. Noting that for sufficiently hard problems, most agents won't be able to find a pathway at all. (See the toy sketch just after this list.)
2) given an object the agent wishes to understand, how well constructed is the agent's internal representation of that object? At one extreme we have rote learning: say, recalling an object's definition word for word, with essentially no understanding. At the other we have a very dense network linking the object with the rest of the knowledge in the agent's memory store. The denser the network, the more intelligence points we give that agent. And I suppose we should give some points for speed as well.
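To make definition (1) a little more concrete, here is a toy Python sketch, with all the details invented for illustration. States are plain values rather than superpositions, and operators are plain functions, but the idea is the same: a breadth-first search for the shortest operator sequence from the current state to the desired state, returning nothing at all if the problem is too hard (ie, beyond max_depth):

  from collections import deque

  def find_pathway(current, desired, operators, max_depth=10):
      queue = deque([(current, [])])
      seen = {current}
      while queue:
          state, path = queue.popleft()
          if state == desired:
              return path                    # the operator sequence
          if len(path) >= max_depth:
              continue
          for name, op in operators.items():
              nxt = op(state)
              if nxt not in seen:
                  seen.add(nxt)
                  queue.append((nxt, path + [name]))
      return None                            # no pathway found

  # example: walk the number 3 to the number 10, using increment and double
  print(find_pathway(3, 10, {'inc': lambda x: x + 1, 'dbl': lambda x: x * 2}))
  # -> ['inc', 'inc', 'dbl'], one of the shortest pathways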

Comments:
1) the above is somewhat dependent on the agent already having a large body of knowledge. This isn't perfect, since young children do not have as much knowledge as adults, but in some regards are far more intelligent than adults. Frankly, it is hard work to boot-strap from nothing to a thorough understanding of the world.
2) if you ever watch Richard Feynman talk, it is obvious he had a very dense network representation of physics in his head. Everything was linked to everything. This gives him lots of (2) points in my scheme, but then he was a physics genius!
3) OK. So how do we build an intelligent agent? Heh. No one knows!! My guess is that it requires at least three components: 1) a processing center (eg the neocortex), 2) a memory system (eg the hippocampus), and 3) an attention system (eg the thalamus). I personally think the attention system is the most important of the three. We need some system to filter and only attend to what is currently important, and to dynamically change attention as needed. Indeed, this sounds an awful lot like a von Neumann architecture computer, with CPU, RAM and instruction pointer (as the attention system). But in detail they are quite different. Especially the attention system: what I have in mind is a lot more involved than an instruction pointer.
4) superpositions and operator sequences should be sufficient to represent any current state, or pathway between states, of interest. That is the main thesis of this project! Is there anything that can't be represented this way? I don't know. But the implication would be that a human brain couldn't represent it either.

Sunday 14 February 2016

new operators: guess-ket and guess-operator

I decided it might be useful to have a couple of operators that guess the ket or the operator, even if you don't know their name exactly. I don't have a strong use-case yet, but it seems to be something humans do, so it should presumably be useful eventually.

There are three variations of each:
guess-ket |ket>
guess-ket[k] |ket>
guess-ket[*] |ket>
The first one just gives you the best matching ket. The second returns the top k matches. The third gives all matches with similarity > 0.

Likewise, we have:
guess-operator[op]
guess-operator[op,k]
guess-operator[op,*]
where the first one gives you the best matching operator. The second gives the top k matches. The third gives all of them with simm > 0.

Now, for a little bit of detail on the background. We basically use the similarity metric on the superpositions created by the process_string() function, against all known kets "context.relevant_kets("*")" or known supported operators "context.supported_operators()":
  def process_string(s):
    one = ket(s.lower())                        # lower-case the incoming name
    return make_ngrams(one,'1,2,3','letter')    # map it to 1, 2 and 3 letter-ngrams
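Here is a rough standalone Python version of guess-ket, for those who want to see the mechanics without digging through the project code. I'm assuming the rescaled similarity metric here (normalize each superposition to sum 1, then sum the minimums), which reproduces the coeffs in the examples below:

  from collections import Counter

  def process_string(s):                       # toy stand-in for the real one
      s = s.lower()
      return Counter(s[i:i + n] for n in (1, 2, 3)
                     for i in range(len(s) - n + 1))

  def simm(f, g):                              # rescaled similarity metric
      sf, sg = sum(f.values()), sum(g.values())
      return sum(min(f[k] / sf, g[k] / sg) for k in f if k in g)

  def guess_ket(s, known_kets):                # return the best matching ket
      target = process_string(s)
      return max((simm(target, process_string(k)), k) for k in known_kets)

  print(guess_ket('freddie', ['Fred', 'Frank', 'Robert', 'Rob']))
  # -> (0.611..., 'Fred'), matching the console example below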
Now, a couple of examples:
-- learn a little knowledge:
the-age-of |Fred> => |age: 27>
the-age-of |Frank> => |age: 33>
the-age-of |Robert> => |age: 29>
the-age-of |Rob> => |age: 31>

-- return only the best match to "freddie":
sa: guess-ket |freddie>
0.611|Fred>

-- see all matches to "freddie":
sa: guess-ket[*] |freddie>
0.611|Fred> + 0.167|Frank> + 0.122|Robert> + 0.056|Rob>

-- now try the guess-operator[]:
sa: guess-operator[age]
0.259|op: the-age-of>

-- who is "roberto"?
sa: guess-ket |roberto>
0.844|Robert>

-- one potential use case. Guess the operator and the ket:
sa: apply(guess-operator[age] |>,guess-ket |roberto>)
0.219|age: 29>
NB: in this last example we used "guess-operator[age] |>". Note especially the |> tacked on the end. We need this so it is parsed as a (compound) superposition. In the console though, it is not mandatory, and I often get lazy and leave it out. A similar thing applies to rel-kets[op] and probably some other function operators. If something doesn't work as expected, put |> in, and that should fix it. Indeed, best practice is to always include it!

Though I have to wonder, now that we have if-then machines, whether guess-ket and guess-operator are redundant? Don't know. Time will tell!

If interested, the code for these is at the bottom of the functions code, with the names "guess_ket" and "guess_operator". Just CTRL-F to find them.

That's it for this post.

Friday 12 February 2016

learning simple images using if-then machines

Today, let's play with simple images from ages ago. BTW, I call them "simple images" because we don't need to translate, rotate, magnify or otherwise align them (which we would with more general images), and we restrict pixel values to 0 or 1. This is to make things easier. We will of course eventually try for more general or typical images sometime in the future, but they are distinctly harder! And they require many layers of if-then machines. eg, the brain has roughly 20 layers in the visual cortex.

Here are our images:
|letter: H>
#   #
#   #
#   #
#####
#   #
#   #
#   #

|noisy: H>
    #
#   #
#   #
### #
#    
#   #
#   #

|noisy: H2>
#   #
#    
# ###
#####
##  #
#   #
### #

|letter: I>
#####
  #
  #
  #
  #
  #
#####

|noisy: I>
####
  #
  
  
  #
  #
# ###

|noisy: I2>
##  #
 ###
  #
  #
  ###
####
#####

|letter: O>
######
#    #
#    #
#    #
#    #
#    #
######
Now, let's define our 3 if-then machines:
load H-I-pat-rec.sw

image |node: 1: 1> => pixels |letter: H>
image |node: 1: 2> => pixels |noisy: H>
image |node: 1: 3> => pixels |noisy: H2>
then |node: 1: *> => |letter H>

image |node: 2: 1> => pixels |letter: I>
image |node: 2: 2> => pixels |noisy: I>
image |node: 2: 3> => pixels |noisy: I2>
then |node: 2: *> => |letter I>

image |node: 3: 1> => pixels |letter: O>
then |node: 3: *> => |letter O>

the |list of images> => |node: 1: 1> + |node: 1: 2> + |node: 1: 3> + |node: 2: 1> + |node: 2: 2> + |node: 2: 3> + |node: 3: 1>
which-image |*> #=> then select[1,1] similar-input[image] image |_self>
Note that today I used "select[1,1]" instead of "drop-below[]". This just means: select the first element of the superposition, noting that similar-input[op] sorts its results, so the first element is the best match.
Now, put "which-image" to use:
sa: which-image |node: 2: 3>
1.0|letter I>

sa: which-image |node: 1: 2>
1.0|letter H>

-- now, choose images randomly, and see what we get:
-- noting we are leaving in the INFO: lines that I normally chomp out. This is so we can see which ket pick-elt has chosen.
sa: which-image pick-elt the |list of images>
INFO: ket: list of images
INFO: ket: node: 1: 2
INFO: ket: node: 1: 2
INFO: ket: node: 1: 2
1.0|letter H>

sa: which-image pick-elt the |list of images>
INFO: ket: list of images
INFO: ket: node: 3: 1
INFO: ket: node: 3: 1
INFO: ket: node: 3: 1
1.0|letter O>

sa: which-image pick-elt the |list of images>
INFO: ket: list of images
INFO: ket: node: 1: 3
INFO: ket: node: 1: 3
INFO: ket: node: 1: 3
1.0|letter H>

sa: which-image pick-elt the |list of images>
INFO: ket: list of images
INFO: ket: node: 2: 2
INFO: ket: node: 2: 2
INFO: ket: node: 2: 2
1.0|letter I>

sa: which-image pick-elt the |list of images>
INFO: ket: list of images
INFO: ket: node: 2: 3
INFO: ket: node: 2: 3
INFO: ket: node: 2: 3
1.0|letter I>

-- and so on!
Now for a couple of comments:

1) if you look under the hood, the above is quite boring! We are not making much use of similar-input[op] at all, in that we are feeding in, and detecting, exact patterns. The only interesting bit is that it is pooling the different image types. Hrmm... let's try some noisy examples:
sa: then select[1,1] similar-input[image] absolute-noise[1] image |node: 1: 1>
0.919|letter H>

sa: then select[1,1] similar-input[image] absolute-noise[1] image |node: 2: 3>
0.907|letter I>

sa: then select[1,1] similar-input[image] absolute-noise[30] image |node: 1: 2>
0.761|letter H>

sa: then select[1,1] similar-input[image] absolute-noise[30] image |node: 3: 1>
0.738|letter O>
Heh. Even at absolute-noise[30] we are still matching at over 70%. And now we are clearly using the similarity metric, and "fuzzy matching". (There is a toy Python re-run of this experiment just after these comments.)
2) support vector machines require the patterns they classify to be linearly separable. Well, in the world of superpositions, "linearly separable" doesn't really even make sense. And similar-input[op] doesn't care either way, and works on any type of superposition.
3) "which-image" is linear, which we can see with this:
sa: which-image the |list of images>
3|letter H> + 3|letter I> + 1.0|letter O>
4) finally, here is what we now know:
sa: dump
----------------------------------------
|context> => |context: H I pat rec>

pixels |letter: H> => |pixel: 1: 1> + |pixel: 1: 5> + |pixel: 2: 1> + |pixel: 2: 5> + |pixel: 3: 1> + |pixel: 3: 5> + |pixel: 4: 1> + |pixel: 4: 2> + |pixel: 4: 3> + |pixel: 4: 4> + |pixel: 4: 5> + |pixel: 5: 1> + |pixel: 5: 5> + |pixel: 6: 1> + |pixel: 6: 5> + |pixel: 7: 1> + |pixel: 7: 5>
dim-1 |letter: H> => |dimension: 5>
dim-2 |letter: H> => |dimension: 7>

pixels |noisy: H> => |pixel: 1: 5> + |pixel: 2: 1> + |pixel: 2: 5> + |pixel: 3: 1> + |pixel: 3: 5> + |pixel: 4: 1> + |pixel: 4: 2> + |pixel: 4: 3> + |pixel: 4: 5> + |pixel: 5: 1> + |pixel: 6: 1> + |pixel: 6: 5> + |pixel: 7: 1> + |pixel: 7: 5>
dim-1 |noisy: H> => |dimension: 5>
dim-2 |noisy: H> => |dimension: 7>

pixels |noisy: H2> => |pixel: 1: 1> + |pixel: 1: 5> + |pixel: 2: 1> + |pixel: 3: 1> + |pixel: 3: 3> + |pixel: 3: 4> + |pixel: 3: 5> + |pixel: 4: 1> + |pixel: 4: 2> + |pixel: 4: 3> + |pixel: 4: 4> + |pixel: 4: 5> + |pixel: 5: 1> + |pixel: 5: 2> + |pixel: 5: 5> + |pixel: 6: 1> + |pixel: 6: 5> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 5>
dim-1 |noisy: H2> => |dimension: 5>
dim-2 |noisy: H2> => |dimension: 7>

pixels |letter: I> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 3> + |pixel: 1: 4> + |pixel: 1: 5> + |pixel: 2: 3> + |pixel: 3: 3> + |pixel: 4: 3> + |pixel: 5: 3> + |pixel: 6: 3> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5>
dim-1 |letter: I> => |dimension: 5>
dim-2 |letter: I> => |dimension: 7>

pixels |noisy: I> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 3> + |pixel: 1: 4> + |pixel: 2: 3> + |pixel: 5: 3> + |pixel: 6: 3> + |pixel: 7: 1> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5>
dim-1 |noisy: I> => |dimension: 5>
dim-2 |noisy: I> => |dimension: 7>

pixels |noisy: I2> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 5> + |pixel: 2: 2> + |pixel: 2: 3> + |pixel: 2: 4> + |pixel: 3: 3> + |pixel: 4: 3> + |pixel: 5: 3> + |pixel: 5: 4> + |pixel: 5: 5> + |pixel: 6: 1> + |pixel: 6: 2> + |pixel: 6: 3> + |pixel: 6: 4> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5>
dim-1 |noisy: I2> => |dimension: 5>
dim-2 |noisy: I2> => |dimension: 7>

pixels |letter: O> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 3> + |pixel: 1: 4> + |pixel: 1: 5> + |pixel: 1: 6> + |pixel: 2: 1> + |pixel: 2: 6> + |pixel: 3: 1> + |pixel: 3: 6> + |pixel: 4: 1> + |pixel: 4: 6> + |pixel: 5: 1> + |pixel: 5: 6> + |pixel: 6: 1> + |pixel: 6: 6> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5> + |pixel: 7: 6>
dim-1 |letter: O> => |dimension: 6>
dim-2 |letter: O> => |dimension: 7>

image |node: 1: 1> => |pixel: 1: 1> + |pixel: 1: 5> + |pixel: 2: 1> + |pixel: 2: 5> + |pixel: 3: 1> + |pixel: 3: 5> + |pixel: 4: 1> + |pixel: 4: 2> + |pixel: 4: 3> + |pixel: 4: 4> + |pixel: 4: 5> + |pixel: 5: 1> + |pixel: 5: 5> + |pixel: 6: 1> + |pixel: 6: 5> + |pixel: 7: 1> + |pixel: 7: 5>

image |node: 1: 2> => |pixel: 1: 5> + |pixel: 2: 1> + |pixel: 2: 5> + |pixel: 3: 1> + |pixel: 3: 5> + |pixel: 4: 1> + |pixel: 4: 2> + |pixel: 4: 3> + |pixel: 4: 5> + |pixel: 5: 1> + |pixel: 6: 1> + |pixel: 6: 5> + |pixel: 7: 1> + |pixel: 7: 5>

image |node: 1: 3> => |pixel: 1: 1> + |pixel: 1: 5> + |pixel: 2: 1> + |pixel: 3: 1> + |pixel: 3: 3> + |pixel: 3: 4> + |pixel: 3: 5> + |pixel: 4: 1> + |pixel: 4: 2> + |pixel: 4: 3> + |pixel: 4: 4> + |pixel: 4: 5> + |pixel: 5: 1> + |pixel: 5: 2> + |pixel: 5: 5> + |pixel: 6: 1> + |pixel: 6: 5> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 5>

then |node: 1: *> => |letter H>

image |node: 2: 1> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 3> + |pixel: 1: 4> + |pixel: 1: 5> + |pixel: 2: 3> + |pixel: 3: 3> + |pixel: 4: 3> + |pixel: 5: 3> + |pixel: 6: 3> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5>

image |node: 2: 2> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 3> + |pixel: 1: 4> + |pixel: 2: 3> + |pixel: 5: 3> + |pixel: 6: 3> + |pixel: 7: 1> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5>

image |node: 2: 3> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 5> + |pixel: 2: 2> + |pixel: 2: 3> + |pixel: 2: 4> + |pixel: 3: 3> + |pixel: 4: 3> + |pixel: 5: 3> + |pixel: 5: 4> + |pixel: 5: 5> + |pixel: 6: 1> + |pixel: 6: 2> + |pixel: 6: 3> + |pixel: 6: 4> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5>

then |node: 2: *> => |letter I>

image |node: 3: 1> => |pixel: 1: 1> + |pixel: 1: 2> + |pixel: 1: 3> + |pixel: 1: 4> + |pixel: 1: 5> + |pixel: 1: 6> + |pixel: 2: 1> + |pixel: 2: 6> + |pixel: 3: 1> + |pixel: 3: 6> + |pixel: 4: 1> + |pixel: 4: 6> + |pixel: 5: 1> + |pixel: 5: 6> + |pixel: 6: 1> + |pixel: 6: 6> + |pixel: 7: 1> + |pixel: 7: 2> + |pixel: 7: 3> + |pixel: 7: 4> + |pixel: 7: 5> + |pixel: 7: 6>

then |node: 3: *> => |letter O>

the |list of images> => |node: 1: 1> + |node: 1: 2> + |node: 1: 3> + |node: 2: 1> + |node: 2: 2> + |node: 2: 3> + |node: 3: 1>

which-image |*> #=> then select[1,1] similar-input[image] image |_self>
----------------------------------------
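Since comment (1) is really about the similarity metric doing the work, here is a toy Python re-run of the noisy-image idea, using the same rescaled simm as in earlier sketches, and assuming absolute-noise[t] simply adds uniform noise in [0, t] to every coefficient (the patterns here are tiny made-up stand-ins, not the full letters):

  import random
  from collections import Counter

  def simm(f, g):
      sf, sg = sum(f.values()), sum(g.values())
      return sum(min(f[k] / sf, g[k] / sg) for k in f if k in g)

  def absolute_noise(sp, t):
      return Counter({k: v + random.uniform(0, t) for k, v in sp.items()})

  # "on" pixels with coeff 1, one pattern per node:
  patterns = {'node: 1: 1': Counter({'1: 1': 1, '1: 5': 1, '4: 3': 1}),
              'node: 2: 1': Counter({'1: 3': 1, '4: 3': 1, '7: 3': 1})}
  then = {'node: 1: 1': 'letter H', 'node: 2: 1': 'letter I'}

  def which_image(sp):
      scores = sorted(((simm(sp, p), node) for node, p in patterns.items()),
                      reverse=True)
      score, node = scores[0]            # select[1,1]: keep only the top match
      return score, then[node]

  print(which_image(absolute_noise(patterns['node: 1: 1'], 1)))
  # -> something like (0.9, 'letter H'): still the right answer, fuzzily matched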
And that's it for this post. Now I need thinking time to find more interesting if-then machine examples.

Thursday 11 February 2016

learning days of the week using if-then machines

Today, an example of learning the days of the week using 7 if-then machines. Note that if-then machines are probably overkill if you spell your days correctly. In this post we make use of string similarity via letter-ngrams.

Here is the code:
  context weekday if-then machines
  ngrams |*> #=> letter-ngrams[1,2,3] lower-case |_self>

  day |node: 1: 1> => ngrams |Monday>
  day |node: 1: 2> => ngrams |mon>
  day |node: 1: 3> => ngrams |Mo>
  previous |node: 1: *> => |Sunday>
  id |node: 1: *> => |Monday>
  next |node: 1: *> => |Tuesday>
 
  day |node: 2: 1> => ngrams |Tuesday>
  day |node: 2: 2> => ngrams |tue>
  day |node: 2: 3> => ngrams |Tu>
  previous |node: 2: *> => |Monday>
  id |node: 2: *> => |Tuesday>
  next |node: 2: *> => |Wednesday>

  day |node: 3: 1> => ngrams |Wednesday>
  day |node: 3: 2> => ngrams |wed>
  day |node: 3: 3> => ngrams |We>
  previous |node: 3: *> => |Tuesday>
  id |node: 3: *> => |Wednesday>
  next |node: 3: *> => |Thursday>

  day |node: 4: 1> => ngrams |Thursday>
  day |node: 4: 2> => ngrams |thurs>
  day |node: 4: 3> => ngrams |Th>
  previous |node: 4: *> => |Wednesday>
  id |node: 4: *> => |Thursday>
  next |node: 4: *> => |Friday>

  day |node: 5: 1> => ngrams |Friday>
  day |node: 5: 2> => ngrams |fri>
  day |node: 5: 3> => ngrams |Fr>
  previous |node: 5: *> => |Thursday>
  id |node: 5: *> => |Friday>
  next |node: 5: *> => |Saturday>

  day |node: 6: 1> => ngrams |Saturday>
  day |node: 6: 2> => ngrams |sat>
  day |node: 6: 3> => ngrams |Sa>
  previous |node: 6: *> => |Friday>
  id |node: 6: *> => |Saturday>
  next |node: 6: *> => |Sunday>

  day |node: 7: 1> => ngrams |Sunday>
  day |node: 7: 2> => ngrams |sun>
  day |node: 7: 3> => ngrams |Su>
  previous |node: 7: *> => |Saturday>
  id |node: 7: *> => |Sunday>
  next |node: 7: *> => |Monday>

  yesterday |*> #=> previous drop-below[0.65] similar-input[day] ngrams |_self>
  today |*> #=> id drop-below[0.65] similar-input[day] ngrams |_self>
  tomorrow |*> #=> next drop-below[0.65] similar-input[day] ngrams |_self>
Now, some example usages in the console:
-- correct spelling means coeff = 1:
sa: tomorrow |sun>
1.0|Monday>

-- spelling is not perfect, but close enough (with respect to strings mapped to letter-ngrams) that we can guess what was meant:
sa: tomorrow |tues>
0.667|Wednesday>

-- making use of operator exponentiation. In this case equivalent to "tomorrow tomorrow tomorrow"
-- also note the coeff propagates. If we shoved a "clean" sigmoid in the "yesterday, today and tomorrow" operators, we could change that behaviour.
-- eg: yesterday |*> #=> previous clean drop-below[0.65] similar-input[day] ngrams |_self>
sa: tomorrow^3 |tues>
0.667|Friday>

-- "tomorrow" and "yesterday" are perfect inverses of each other:
sa: tomorrow yesterday |fri>
1.0|Friday>

sa: yesterday tomorrow |fri>
1.0|Friday>

-- mapping abbreviation to the full word:
sa: today |Sa>
|Saturday>

sa: yesterday |thurs>
|Wednesday>

-- typo, "thrusday" instead of "thursday", but the code guessed what we meant.
-- this is one of the main benefits of if-then machines: you usually don't have to get the input exactly right (depending on how you set the drop-below threshold t).
sa: yesterday |thrusday>
0.667|Wednesday>

-- this is an example of over-counting, I suppose you could call it.
-- since "thursd" matched both:
-- day |node: 4: 1> => ngrams |Thursday>
-- day |node: 4: 2> => ngrams |thurs>
-- we briefly mentioned this possibility in my first if-then machine post.
sa: yesterday |thursd>
1.514|Wednesday>

-- Next, we have a couple of function operators that return the current time and date:
sa: current-time
|time: 20:33:16>

sa: current-date
|date: 2016-02-11>

-- and we have another function operator that converts dates to days of the week:
-- what day of the week is New Year:
sa: day-of-the-week |date: 2016-1-1>
|day: Friday>

-- what day of the week is today?:
sa: day-of-the-week current-date
|day: Thursday>

-- what day was it three days ago?
-- NB: not a 100% match because of the "day: " prefix.
sa: yesterday^3 day-of-the-week current-date
0.702|Monday>

-- if you care about that, one fix is to remove the category prefix and extract the value:
-- another fix is to add more patterns to our if-then machines
-- eg:
-- day |node: 2: 4> => ngrams |day: Tuesday>
-- day |node: 2: 5> => ngrams |day: tue>
-- day |node: 2: 6> => ngrams |day: Tu>
-- there are other possible fixes too.
-- eg:
-- ngrams |*> #=> letter-ngrams[1,2,3] lower-case extract-value |_self>
sa: extract-value day-of-the-week current-date
|Thursday>

-- what day was it three days ago?
sa: yesterday^3 extract-value day-of-the-week current-date
1.0|Monday>

-- what day is it five days from now?
sa: tomorrow^5 extract-value day-of-the-week current-date
1.0|Tuesday>

-- now, our "tomorrow, yesterday and today" operators are linear (since they are defined with a |*> rule).
-- so a quick demonstration of that:
sa: tomorrow^3 (|Monday> + |Tuesday> + |Saturday>)
1.0|Thursday> + 1.0|Friday> + |Tuesday>
-- and similarly for the other two operators.

-- finally, weekdays are mod 7:
sa: tomorrow^7 |thurs>
1.0|Thursday>

sa: yesterday^21 |thurs>
|Thursday>
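To make the mechanics concrete, here is a rough standalone Python version of these weekday machines, again assuming the rescaled simm. It even reproduces the 1.514 over-counting example from above:

  from collections import Counter

  DAYS = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
          'Friday', 'Saturday', 'Sunday']
  ABBREV = [['mon', 'Mo'], ['tue', 'Tu'], ['wed', 'We'], ['thurs', 'Th'],
            ['fri', 'Fr'], ['sat', 'Sa'], ['sun', 'Su']]

  def ngrams(s):
      s = s.lower()
      return Counter(s[i:i + n] for n in (1, 2, 3)
                     for i in range(len(s) - n + 1))

  def simm(f, g):
      sf, sg = sum(f.values()), sum(g.values())
      return sum(min(f[k] / sf, g[k] / sg) for k in f if k in g)

  # day patterns: each day node pools the full name plus two abbreviations
  day = [(i, ngrams(p)) for i, d in enumerate(DAYS) for p in [d] + ABBREV[i]]

  def apply_machine(shift, s, t=0.65):
      # similar-input[day], then drop-below[t], then previous/id/next via shift
      out = Counter()
      for i, pattern in day:
          score = simm(ngrams(s), pattern)
          if score >= t:
              out[DAYS[(i + shift) % 7]] += score    # linear: coeffs add up
      return out

  tomorrow = lambda s: apply_machine(+1, s)
  yesterday = lambda s: apply_machine(-1, s)
  print(tomorrow('sun'))       # -> {'Monday': 1.0} (up to float rounding)
  print(yesterday('thursd'))   # -> {'Wednesday': 1.514...}, the over-counting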
I guess that is about it. A fairly simple, somewhat useful, 7 if-then machine system. And an observation I want to make: usually operator definition time is on the ugly side, as it kind of is above, but operator application time is usually quite clean. I think this is not a bad property to have. I didn't really design it that way; it is just the way it turned out. So perhaps one use case is: if defining the desired operators is too messy for you personally, then find them implemented elsewhere on the net and just web-load the sw file. Heh, assuming I can get anyone interested in the sw file format!

A couple of comments:
1) I had to hand tweak the drop-below threshold to 0.65. If I set it much higher than that, then I wasn't matching things I wanted to match. And if I set it as low as 0.6, then "Sunday" matched "Monday":
sa: id drop-below[0.6] similar-input[day] ngrams |Monday>
1.0|Monday> + 0.6|Sunday>
2) If my proposition, that if-then machines are a fairly good mathematical approximation of biological neurons, is correct, then the above is only a 7 neuron system. The brain has tens of billions of neurons! That is a lot of processing power!! Though our ngrams operator probably needs a few neurons too. I don't really know at this point how many.
3) here is one way to find the full set of days, given a starting day. Not sure it is all that useful in this particular case, but hey, it probably is for other if-then machine systems.
sa: exp-max[tomorrow] |Monday>
2|Monday> + 1.0|Tuesday> + |Wednesday> + 1.0|Thursday> + |Friday> + 1.0|Saturday> + 1.0|Sunday>
Whether we want to tweak exp-max[] so that it doesn't over-count, I'm not yet sure. Probably cleaner if we did.
4) we can define things like the "day-after-tomorrow" operator:
-- define the operator:
sa: day-after-tomorrow |*> #=> tomorrow^2 day-of-the-week current-date |>

-- invoke it:
sa: day-after-tomorrow |x>
0.702|Saturday>
Noting the 0.7 coeff is from the "day: " prefix. And we could define plenty of others, like "day-before-yesterday", and so on.
5) for completeness, here is what we now know:
sa: dump
----------------------------------------
|context> => |context: weekday if-then machines>

ngrams |*> #=> letter-ngrams[1,2,3] lower-case |_self>
yesterday |*> #=> previous drop-below[0.65] similar-input[day] ngrams |_self>
today |*> #=> id drop-below[0.65] similar-input[day] ngrams |_self>
tomorrow |*> #=> next drop-below[0.65] similar-input[day] ngrams |_self>
day-after-tomorrow |*> #=> tomorrow^2 day-of-the-week current-date |>

day |node: 1: 1> => |m> + |o> + |n> + |d> + |a> + |y> + |mo> + |on> + |nd> + |da> + |ay> + |mon> + |ond> + |nda> + |day>

day |node: 1: 2> => |m> + |o> + |n> + |mo> + |on> + |mon>

day |node: 1: 3> => |m> + |o> + |mo>

previous |node: 1: *> => |Sunday>
id |node: 1: *> => |Monday>
next |node: 1: *> => |Tuesday>

day |node: 2: 1> => |t> + |u> + |e> + |s> + |d> + |a> + |y> + |tu> + |ue> + |es> + |sd> + |da> + |ay> + |tue> + |ues> + |esd> + |sda> + |day>

day |node: 2: 2> => |t> + |u> + |e> + |tu> + |ue> + |tue>

day |node: 2: 3> => |t> + |u> + |tu>

previous |node: 2: *> => |Monday>
id |node: 2: *> => |Tuesday>
next |node: 2: *> => |Wednesday>

day |node: 3: 1> => |w> + 2|e> + 2|d> + |n> + |s> + |a> + |y> + |we> + |ed> + |dn> + |ne> + |es> + |sd> + |da> + |ay> + |wed> + |edn> + |dne> + |nes> + |esd> + |sda> + |day>

day |node: 3: 2> => |w> + |e> + |d> + |we> + |ed> + |wed>

day |node: 3: 3> => |w> + |e> + |we>

previous |node: 3: *> => |Tuesday>
id |node: 3: *> => |Wednesday>
next |node: 3: *> => |Thursday>

day |node: 4: 1> => |t> + |h> + |u> + |r> + |s> + |d> + |a> + |y> + |th> + |hu> + |ur> + |rs> + |sd> + |da> + |ay> + |thu> + |hur> + |urs> + |rsd> + |sda> + |day>

day |node: 4: 2> => |t> + |h> + |u> + |r> + |s> + |th> + |hu> + |ur> + |rs> + |thu> + |hur> + |urs>

day |node: 4: 3> => |t> + |h> + |th>

previous |node: 4: *> => |Wednesday>
id |node: 4: *> => |Thursday>
next |node: 4: *> => |Friday>

day |node: 5: 1> => |f> + |r> + |i> + |d> + |a> + |y> + |fr> + |ri> + |id> + |da> + |ay> + |fri> + |rid> + |ida> + |day>

day |node: 5: 2> => |f> + |r> + |i> + |fr> + |ri> + |fri>

day |node: 5: 3> => |f> + |r> + |fr>

previous |node: 5: *> => |Thursday>
id |node: 5: *> => |Friday>
next |node: 5: *> => |Saturday>

day |node: 6: 1> => |s> + 2|a> + |t> + |u> + |r> + |d> + |y> + |sa> + |at> + |tu> + |ur> + |rd> + |da> + |ay> + |sat> + |atu> + |tur> + |urd> + |rda> + |day>

day |node: 6: 2> => |s> + |a> + |t> + |sa> + |at> + |sat>

day |node: 6: 3> => |s> + |a> + |sa>

previous |node: 6: *> => |Friday>
id |node: 6: *> => |Saturday>
next |node: 6: *> => |Sunday>

day |node: 7: 1> => |s> + |u> + |n> + |d> + |a> + |y> + |su> + |un> + |nd> + |da> + |ay> + |sun> + |und> + |nda> + |day>

day |node: 7: 2> => |s> + |u> + |n> + |su> + |un> + |sun>

day |node: 7: 3> => |s> + |u> + |su>

previous |node: 7: *> => |Saturday>
id |node: 7: *> => |Sunday>
next |node: 7: *> => |Monday>
----------------------------------------
And I guess that is it for this post.

Saturday 6 February 2016

learning a sequence using if-then machines

Last post I claimed that we can easily learn sequences using if-then machines. This post is just to give an example of that.

Let's dive in:
context if-then machine learning a sequence

-- define our superpositions:
-- let's make them random 10 dimensional, with coeffs in range [0,20]
the |sp1> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp2> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp3> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp4> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)
the |sp5> => absolute-noise[20] 0 range(|x: 1>,|x: 10>)

-- define our if-then machines:
-- ie, learn the sequence of superpositions
seq |node: 1: 1> => the |sp1>
then |node: 1: *> => the |sp2>

seq |node: 2: 1> => the |sp2>
then |node: 2: *> => the |sp3>

seq |node: 3: 1> => the |sp3>
then |node: 3: *> => the |sp4>

seq |node: 4: 1> => the |sp4>
then |node: 4: *> => the |sp5>

seq |node: 5: 1> => the |sp5>
then |node: 5: *> => |the finish line>

-- define the input superposition:
the |input> => the |sp1>

-- see what we have:
table[node,coeff] 100 similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 1: 1 | 100.0  |
| 5: 1 | 69.718 |
| 2: 1 | 65.306 |
| 3: 1 | 65.192 |
| 4: 1 | 62.993 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 2: 1 | 100    |
| 1: 1 | 65.306 |
| 3: 1 | 64.579 |
| 4: 1 | 62.829 |
| 5: 1 | 52.732 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 3: 1 | 100    |
| 5: 1 | 79.326 |
| 4: 1 | 73.162 |
| 1: 1 | 65.192 |
| 2: 1 | 64.579 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 4: 1 | 100    |
| 5: 1 | 76.359 |
| 3: 1 | 73.162 |
| 1: 1 | 62.993 |
| 2: 1 | 62.829 |
+------+--------+

table[node,coeff] 100 similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 5: 1 | 100.0  |
| 3: 1 | 79.326 |
| 4: 1 | 76.359 |
| 1: 1 | 69.718 |
| 2: 1 | 52.732 |
+------+--------+

sa: then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] then drop-below[0.9] similar-input[seq] the |input>
1.0|the finish line>

-- finally, see what the ugly details look like:
sa: dump
----------------------------------------
|context> => |context: if-then machine learning a sequence>

the |sp1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
the |sp2> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
the |sp3> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
the |sp4> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
the |sp5> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 1: 1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
then |node: 1: *> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>

seq |node: 2: 1> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
then |node: 2: *> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>

seq |node: 3: 1> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
then |node: 3: *> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>

seq |node: 4: 1> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
then |node: 4: *> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 5: 1> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>
then |node: 5: *> => |the finish line>

the |input> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
----------------------------------------
And if you follow the tables, it works exactly as expected. Note though that we chose our if-then machines to be exact matches (ie, 100%) to the input superposition. I did that for demonstration purposes. How about a tweak on the above, where that is not the case? We will use absolute-noise[1] to noisy up our superpositions:
-- define a new layer of input patterns seq2 (note, we don't need to (re)define the "then" operator, since we are using the same ones as above):
seq2 |node: 1: 1> => absolute-noise[1] the |sp1>
seq2 |node: 2: 1> => absolute-noise[1] the |sp2>
seq2 |node: 3: 1> => absolute-noise[1] the |sp3>
seq2 |node: 4: 1> => absolute-noise[1] the |sp4>
seq2 |node: 5: 1> => absolute-noise[1] the |sp5>

-- now put it to use:
table[node,coeff] 100 similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 1: 1 | 98.604 |
| 5: 1 | 70.492 |
| 3: 1 | 65.944 |
| 2: 1 | 65.777 |
| 4: 1 | 63.699 |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 2: 1 | 98.698 |
| 1: 1 | 66.038 |
| 3: 1 | 65.322 |
| 4: 1 | 63.723 |
| 5: 1 | 53.78  |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 3: 1 | 98.775 |
| 5: 1 | 79.902 |
| 4: 1 | 73.249 |
| 1: 1 | 66.02  |
| 2: 1 | 65.681 |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 4: 1 | 98.632 |
| 5: 1 | 76.244 |
| 3: 1 | 74.323 |
| 1: 1 | 64.251 |
| 2: 1 | 63.981 |
+------+--------+

table[node,coeff] 100 similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
+------+--------+
| node | coeff  |
+------+--------+
| 5: 1 | 98.495 |
| 3: 1 | 80.222 |
| 4: 1 | 76.313 |
| 1: 1 | 70.211 |
| 2: 1 | 53.996 |
+------+--------+

sa: then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] then drop-below[0.9] similar-input[seq2] the |input>
0.985|the finish line>
Note that as desired we again find the |node: 1: 1>, |node: 2: 1> ... |node: 5: 1> sequence, but this time with a roughly 98% rather than a 100% match. Hopefully that makes my point.

A couple of comments:
1) if-then machines work with any superpositions.
2) the then operator can also have side effects. eg using stored rules. This is a big deal! And makes if-then machines even more powerful.
3) the above are quite simple if-then machines in that there is no pooling. ie, only one input superposition triggers each machine. A full if-then machine can have many "pooled" inputs.
4) once again, a whinge about my parser. If that were finished, we could short-cut the above using:
next (*) #=> then drop-below[0.9] similar-input[seq] |_self>
next2 (*) #=> then drop-below[0.9] similar-input[seq2] |_self>

-- after which we would use:
table[node,coeff] 100 similar-input[seq] next^k the |input>
table[node,coeff] 100 similar-input[seq2] next2^k the |input>
5) for any sufficiently long if-then machine sequence with matches below 100%, eventually you will reach a point where the result is less than the drop-below threshold t, leaving you with the empty ket |>. Which kind of makes sense. If you are reasoning with probabilities, not certainties, then for a long enough chain you can't be sure of your conclusion. eg, a 95% match at each step decays like 0.95^k, which after a dozen or so steps has fallen to around a half. On the flip side, if your matches are pretty much 100%, as in the world of mathematics, then you should be able to have long sequences and still stay above the drop-below threshold.
6) I'm pretty sure temporal pooling and spatial pooling look the same, as far as if-then machines are concerned. In that case the difference is where the superpositions come from, not the structure of the if-then machine.
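And to tie the comments together, here is a rough Python analogue of the whole sequence machine (the same rescaled simm as usual; random superpositions typically sit well below the 0.9 threshold against the wrong node, as the tables above show):

  import random
  from collections import Counter

  def simm(f, g):
      sf, sg = sum(f.values()), sum(g.values())
      return sum(min(f[k] / sf, g[k] / sg) for k in f if k in g)

  def rand_sp(dim=10, scale=20):
      return Counter({f'x: {i}': random.uniform(0, scale)
                      for i in range(1, dim + 1)})

  sps = [rand_sp() for _ in range(5)]
  machines = [(sps[i], sps[i + 1]) for i in range(4)]           # sp_k -> sp_k+1
  machines.append((sps[4], Counter({'the finish line': 1})))    # last machine

  def next_step(sp, t=0.9):
      # one step: similar-input[seq], drop-below[t], then
      out = Counter()
      for pattern, then in machines:
          score = simm(sp, pattern)
          if score >= t:
              for k, v in then.items():
                  out[k] += score * v
      return out

  sp = sps[0]
  for _ in range(5):                     # walk the full sequence
      sp = next_step(sp)
  print(sp)                              # -> {'the finish line': 1.0}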

Finally, this is what we have now:
----------------------------------------
|context> => |context: fixed if-then machine learning a sequence>

the |sp1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
the |sp2> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
the |sp3> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
the |sp4> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
the |sp5> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 1: 1> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
seq2 |node: 1: 1> => 13.351|x: 1> + 8.69|x: 2> + 4.772|x: 3> + 3.054|x: 4> + 16.642|x: 5> + 13.708|x: 6> + 9.148|x: 7> + 8.439|x: 8> + 11.983|x: 9> + 17.609|x: 10>
then |node: 1: *> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>

seq |node: 2: 1> => 8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>
seq2 |node: 2: 1> => 8.568|x: 1> + 4.859|x: 2> + 15.537|x: 3> + 3.768|x: 4> + 5.416|x: 5> + 1.809|x: 6> + 4.228|x: 7> + 18.566|x: 8> + 15.799|x: 9> + 11.21|x: 10>
then |node: 2: *> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>

seq |node: 3: 1> => 19.846|x: 1> + 19.852|x: 2> + 19.825|x: 3> + 6.605|x: 4> + 10.253|x: 5> + 8.096|x: 6> + 1.937|x: 7> + 7.358|x: 8> + 12.041|x: 9> + 1.345|x: 10>
seq2 |node: 3: 1> => 20.38|x: 1> + 20.207|x: 2> + 20.263|x: 3> + 7.372|x: 4> + 10.786|x: 5> + 8.449|x: 6> + 2.488|x: 7> + 8.009|x: 8> + 12.633|x: 9> + 1.406|x: 10>
then |node: 3: *> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>

seq |node: 4: 1> => 14.787|x: 1> + 10.035|x: 2> + 3.728|x: 3> + 16.038|x: 4> + 5.647|x: 5> + 3.857|x: 6> + 3.552|x: 7> + 7.227|x: 8> + 16.747|x: 9> + 1.412|x: 10>
seq2 |node: 4: 1> => 15.785|x: 1> + 10.785|x: 2> + 3.955|x: 3> + 16.93|x: 4> + 5.953|x: 5> + 4.312|x: 6> + 3.647|x: 7> + 7.997|x: 8> + 17.241|x: 9> + 2.228|x: 10>
then |node: 4: *> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>

seq |node: 5: 1> => 18.044|x: 1> + 18.396|x: 2> + 8.64|x: 3> + 14.424|x: 4> + 19.749|x: 5> + 6.61|x: 6> + 7.26|x: 7> + 4.446|x: 8> + 9.583|x: 9> + 1.272|x: 10>
seq2 |node: 5: 1> => 18.631|x: 1> + 19.0|x: 2> + 9.594|x: 3> + 14.643|x: 4> + 20.408|x: 5> + 7.457|x: 6> + 7.315|x: 7> + 4.685|x: 8> + 10.152|x: 9> + 1.88|x: 10>
then |node: 5: *> => |the finish line>

the |input> => 12.363|x: 1> + 7.862|x: 2> + 4.541|x: 3> + 2.752|x: 4> + 15.782|x: 5> + 13.444|x: 6> + 8.522|x: 7> + 7.512|x: 8> + 11.056|x: 9> + 17.304|x: 10>
----------------------------------------
Update: now we have all these superpositions, let's go on and show pooling.
-- define our if-then machine:
seq3 |node: 17: 1> => the |sp1>
seq3 |node: 17: 2> => the |sp2>
seq3 |node: 17: 3> => the |sp3>
seq3 |node: 17: 4> => the |sp4>
seq3 |node: 17: 5> => the |sp5>
then |node: 17: *> => |the SP sequence>

-- now, let's do some testing first.
-- randomly pick one of {sp1,sp2,sp3,sp4,sp5}, add some noise, and see what we have:
sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 5 | 92.791 |
| 17: 3 | 80.064 |
| 17: 4 | 76.8   |
| 17: 1 | 73.887 |
| 17: 2 | 59.433 |
+-------+--------+
-- so the input must have been sp5

sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 4 | 93.223 |
| 17: 5 | 78.173 |
| 17: 3 | 72.651 |
| 17: 2 | 68.824 |
| 17: 1 | 67.369 |
+-------+--------+
-- the input must have been sp4

sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 4 | 91.116 |
| 17: 5 | 81.028 |
| 17: 3 | 77.603 |
| 17: 2 | 67.575 |
| 17: 1 | 66.832 |
+-------+--------+
-- sp4 again

sa: table[node,coeff] 100 similar-input[seq3] absolute-noise[5] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
+-------+--------+
| node  | coeff  |
+-------+--------+
| 17: 2 | 89.845 |
| 17: 1 | 74.979 |
| 17: 4 | 69.849 |
| 17: 3 | 68.51  |
| 17: 5 | 61.79  |
+-------+--------+
-- the input must have been sp2

-- now ramp up the noise from absolute-noise[5] to absolute-noise[10], and use the full if-then machine:
sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.901|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] absolute-noise[10] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.933|the SP sequence>
And there we have it! Pooling of {sp1,sp2,sp3,sp4,sp5}, and an output of "the SP sequence". And since we added so much noise, it is only sometimes above the drop-below threshold t (in this case 0.9). And since it is all so abstract, this thing is very general.

Note that pooling is a very important concept. Basically it means you can have multiple different representations of the same thing, even though they may look nothing alike. For example, a friend's face seen from different angles is, in terms of pixels, very different, yet every view triggers the same "hey, that's my friend". Another example is the lyrics or the notes of a song: despite being different, they all map to the same song name.

Next, instead of adding noise to the incoming superposition, we project down from 10 dimensions to 9 (using pick[9]):
sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.987|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.957|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.983|the SP sequence>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
|>

sa: then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
0.973|the SP sequence>
Next, I tried projecting down to 8 dimensions (using pick[8]), but almost always the result was below threshold. BTW, the current pick[n] code changes the order of the superposition. This of course has no impact on similar-input[op], and hence if-then machines. Most of the time changing the ordering of a superposition does not change the meaning of that superposition. Though some of the time it is of course useful to sort superpositions, and we have operators for that (ket-sort, coeff-sort, sort-by[], and so on).

Also, I should note that if-then machines are, in general, fairly tolerant of adding noise (using absolute-noise[t]) and of removing elements from the superposition (using pick[n]). They become more tolerant if you decrease the drop-below threshold t, and less so if you increase it. Though you don't want t too small, else the machine will match more than you would like. And if you increase t to 0.98 or higher, then you are in the maths world of black and white, true and false.

Update: note that if you have a long line of code and don't fully understand it, you can always decompose that sequence into smaller steps. eg, given:
then drop-below[0.9] similar-input[seq3] pick[9] the pick-elt split |sp1 sp2 sp3 sp4 sp5>
we can work through it step by step:
sa: split |sp1 sp2 sp3 sp4 sp5>
|sp1> + |sp2> + |sp3> + |sp4> + |sp5>

sa: pick-elt split |sp1 sp2 sp3 sp4 sp5>
|sp2>

sa: the |sp2>
8.05|x: 1> + 4.543|x: 2> + 14.629|x: 3> + 3.443|x: 4> + 4.74|x: 5> + 1.059|x: 6> + 3.91|x: 7> + 17.714|x: 8> + 14.833|x: 9> + 11.074|x: 10>

sa: pick[9] the |sp2>
4.74|x: 5> + 8.05|x: 1> + 14.833|x: 9> + 3.91|x: 7> + 4.543|x: 2> + 1.059|x: 6> + 11.074|x: 10> + 17.714|x: 8> + 3.443|x: 4>

sa: similar-input[seq3] (4.74|x: 5> + 8.05|x: 1> + 14.833|x: 9> + 3.91|x: 7> + 4.543|x: 2> + 1.059|x: 6> + 11.074|x: 10> + 17.714|x: 8> + 3.443|x: 4>)
0.826|node: 17: 2> + 0.692|node: 17: 1> + 0.663|node: 17: 4> + 0.526|node: 17: 3> + 0.512|node: 17: 5>

sa: drop-below[0.9] (0.826|node: 17: 2> + 0.692|node: 17: 1> + 0.663|node: 17: 4> + 0.526|node: 17: 3> + 0.512|node: 17: 5>)
|>

sa: then |>
|>
Note that random steps like pick[n] and pick-elt complicate this, in that each time you run them you will get a different answer. That is why I copied the superposition result from the previous line into the next step.

Thursday 4 February 2016

introducing the if-then machine

I think this thing is going to be seriously interesting! Let me define it, and then I will try to explain it:
seq |node 1: 1> => sp1-1
seq |node 1: 2> => sp1-2
seq |node 1: 3> => sp1-3
...
seq |node 1: n> => sp1-n

then |node 1: *> => sp2
next (*) #=> then drop-below[t] similar-input[seq] |_self>
where {sp1-k} and sp2 are all superpositions. And we call this collection of rules the if-then machine.

This has the useful and cool property that if any one of the sp1-k superpositions matches close enough to the input (we can call this a pooling of the sp1-k superpositions), then it spits out sp2 as a kind of prediction. t is a parameter that determines how close that match has to be for there to be any output. If the input is not close enough, then "next input-sp" just returns the empty ket |>. In the world of mathematics, where we want things to be black and white, true and false, we might want t = 0.99 or so. If we want to be generous and tolerant, then maybe t = 0.6 (ie, a 60% match is good enough). Or we could make it dynamic: if a particular if-then machine "fires too much", increase t; if it "fires too little", decrease it. The exact details of that are something we can work out later.
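Since that is quite a mouthful, here is the whole machine in toy Python. simm() here is the rescaled similarity (normalize both superpositions to sum 1, then sum the minimums), one simple version of the project's similarity metric, and superpositions are just Counters mapping ket to coeff:

  from collections import Counter

  def simm(f, g):
      sf, sg = sum(f.values()), sum(g.values())
      return sum(min(f[k] / sf, g[k] / sg) for k in f if k in g)

  class IfThenMachine:
      def __init__(self, patterns, output, t=0.8):
          self.patterns = patterns      # the sp1-k superpositions
          self.output = output          # sp2
          self.t = t                    # the drop-below threshold

      def next(self, sp):
          out = Counter()
          for pattern in self.patterns:
              score = simm(sp, pattern)
              if score >= self.t:       # drop-below[t]
                  for k, v in self.output.items():
                      out[k] += score * v
          return out                    # an empty Counter plays the role of |>

  # the "node 1" machine from the example further down:
  m = IfThenMachine([Counter({'X': 1}), Counter({'Y': 1}),
                     Counter({'X': 1, 'Y': 1})], Counter({'X or Y': 1}))
  print(m.next(Counter({'X': 1, 'Y': 1})))    # -> {'X or Y': 1.0}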

BTW, I don't think I have defined similar-input[op] just yet. It is very close to its brother "similar[op] ket" (see an earlier post). Indeed, they are only a couple of lines of python apart.
similar[some-op] |x>
returns a list of kets that have superpositions defined with respect to some-op, sorted by their similarity to the superposition defined by "some-op |x>".

In contrast:
similar-input[some-op] some-sp
returns a list of kets that have superpositions defined with respect to some-op, sorted by their similarity to "some-sp".

In the context of our if-then machine defined above, this means "seq" is the operator that determines which "layer" of if-then machines we match our input sp against. If seq is not defined for a series of nodes, then similar-input[seq] won't compare the input against those nodes. Say seq2 is instead defined for those nodes; then similar-input[seq2] will compare the input against them. To see which nodes are in a particular "layer", in the console type "rel-kets[seq]" or "rel-kets[seq2]".

Next. In general there is no restriction on the superpositions sp1-k. But sometimes it is useful for them to be somewhat "orthogonal". Though I don't mean that in a strict maths sense of vector dot product equals zero, or similar. I mean in the sense that if one of sp1-k matches input-sp, then ideally we don't want any of the others to do so. Because if they did, we would get a double counting effect.

Let me try to show this double counting effect. Say "node 1: 3" matches input-sp with 80% and "node 1: 17" matches with 92% (and the rest barely at all), then we would have:
drop-below[0.75] similar-input[seq] input-sp
= 0.92|node 1: 17> + 0.8|node 1: 3>

then drop-below[0.75] similar-input[seq] input-sp
= then (0.92|node 1: 17> + 0.8|node 1: 3>)
= 0.92 then |node 1: 17> + 0.8 then |node 1: 3>
= 0.92 sp2 + 0.8 sp2
= 1.72 sp2
And note the coeff of sp2 is greater than 1. So what can we do about it? One possibility, and probably the one I will go for, is average-categorize, or probably a tweak of it, but it is close to what I want. Basically, if the superpositions are similar enough with respect to simm(), add them; if not, keep them distinct. But I should note that sometimes we do want this double counting, or triple counting, or whatever, depending on how many matches there are above the t threshold.
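Based on that description, average-categorize in toy Python form would look something like this (same rescaled simm as in the sketch above; the exact details of the real operator may differ):

  from collections import Counter

  def simm(f, g):
      sf, sg = sum(f.values()), sum(g.values())
      return sum(min(f[k] / sf, g[k] / sg) for k in f if k in g)

  def average_categorize(sps, t=0.8):
      categories = []
      for sp in sps:
          for cat in categories:
              if simm(sp, cat) >= t:
                  cat.update(sp)                  # similar enough: add them
                  break
          else:
              categories.append(Counter(sp))      # otherwise keep it distinct
      return categories

  # two near-identical patterns pool into one category, the third stays apart:
  print(len(average_categorize([Counter({'a': 1, 'b': 1}),
                                Counter({'a': 1, 'b': 1.2}),
                                Counter({'x': 1})])))    # -> 2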

I guess now I should give an example. How about a simple 3 if-then machine example:
----------------------------------------
|context> => |context: if-then machine>

seq |node 1: 1> => |X>
seq |node 1: 2> => |Y>
seq |node 1: 3> => |X> + |Y>
then |node 1: *> => |X or Y>

seq |node 2: 1> => |X> + |Y>
then |node 2: *> => |X and Y>

seq |node 3: 1> => |X> + |Y> + |Z>
then |node 3: *> => |X and Y and Z>
----------------------------------------
Now, test its properties. First, without the "then" operator and the drop-below[t] operator:
sa: similar-input[seq] |Y>
|node 1: 2> + 0.5|node 1: 3> + 0.5|node 2: 1> + 0.333|node 3: 1>

sa: similar-input[seq] (|X> + |Y>)
|node 1: 3> + |node 2: 1> + 0.667|node 3: 1> + 0.5|node 1: 1> + 0.5|node 1: 2>

sa: similar-input[seq] (|X> + |Y> + |Z>)
|node 3: 1> + 0.667|node 1: 3> + 0.667|node 2: 1> + 0.333|node 1: 1> + 0.333|node 1: 2>
Set t = 0.8 and repeat:
sa: drop-below[0.8] similar-input[seq] |Y>
|node 1: 2>

sa: drop-below[0.8] similar-input[seq] (|X> + |Y>)
|node 1: 3> + |node 2: 1>

sa: drop-below[0.8] similar-input[seq] (|X> + |Y> + |Z>)
|node 3: 1>
Finally, apply the "then" operator:
sa: then drop-below[0.8] similar-input[seq] |X>
|X or Y>

sa: then drop-below[0.8] similar-input[seq] |Y>
|X or Y>

sa: then drop-below[0.8] similar-input[seq] |Z>
|>

sa: then drop-below[0.8] similar-input[seq] (|X> + |Y>)
|X or Y> + |X and Y>

sa: then drop-below[0.8] similar-input[seq] (|X> + |Y> + |Z>)
|X and Y and Z>
Now, once more, this time using t = 0.6
sa: then drop-below[0.6] similar-input[seq] |X>
|X or Y>

sa: then drop-below[0.6] similar-input[seq] |Y>
|X or Y>

sa: then drop-below[0.6] similar-input[seq] |Z>
|>

sa: then drop-below[0.6] similar-input[seq] (|X> + |Y>)
|X or Y> + |X and Y> + 0.667|X and Y and Z>

sa: then drop-below[0.6] similar-input[seq] (|X> + |Y> + |Z>)
|X and Y and Z> + 0.667|X or Y> + 0.667|X and Y>
OK. That is a pretty simple example! But in practice this thing is very general and very powerful. Mostly because superpositions can be almost anything.

Now, on to the really interesting bit. I propose that the if-then machine roughly corresponds to a single neuron, a claim also made for the support vector machine. But the if-then machine certainly has a lot more power than a support vector machine (for example, the input does not need to be linearly separable). The if-then machine is itself very powerful, which means an entire brain of them would be unimaginably powerful.

Some notes:
1) Jeff Hawkins and the HTM (hierarchical temporal memory) guys claim SDRs (sparse distributed representations) are the brain's data-type. Superpositions are more general than SDRs, since they can have coeffs other than {0,1}, so I in turn claim superpositions are the brain's data-type.
2) we can tweak our if-then machine in various ways. We could put in sigmoids, or an inhibition layer so that nearby neurons inhibit their neighbours and so on. Plus probably other tweaks.
3) the current parser does not yet handle learning superposition rules, eg, in this case: "next (*) #=> ...". Hence I had to type the full "then drop-below[0.8] similar-input[seq]". If we could handle them, our examples would instead look like this:
sa: next (*) #=> then drop-below[0.8] similar-input[seq] |_self>
sa: next |X>
sa: next |Y>
sa: next |Z>
sa: next (|X> + |Y>)
sa: next (|X> + |Y> + |Z>)
4) the above if-then machine used a static learn rule: "then |node 1: *> => ...". We could instead use some dynamic rule: "then |node 1: *> #=> some-action". NB: the symbol for a stored-rule "#=>".
5) our if-then machine is very close to my claimed "general supervised pattern recognition algo".
6) we should be able to use the if-then machine to learn sequences. I need to look into that, but I'm pretty sure of it. And to find the k'th element in the learned sequence we would do something like: "next^k SP". Something like this I suppose:
seq |node 1: 1> => sp1
then |node 1: *> => sp2

seq |node 2: 1> => sp2
then |node 2: *> => sp3

seq |node 3: 1> => sp3
then |node 3: *> => sp4

seq |node 4: 1> => sp4
then |node 4: *> => sp5

seq |node 5: 1> => sp5
then |node 5: *> => sp6
7) we could then pool a sequence into a single output:
seq2 |node P: 0> => SP
seq2 |node P: 1> => next SP
seq2 |node P: 2> => next^2 SP
seq2 |node P: 3> => next^3 SP
...
seq2 |node P: n> => next^n SP
then |node P: *> => |the-SP-sequence>
8) we need some mechanism to find the desired superpositions.
9) if not obvious, I should mention the if-then machine is not restricted to Boolean values. It can be considered a generalization, though if you set t close enough to 1, then it emulates a Boolean world.

Update: a bigger, slightly more interesting example:
-- define our context:
sa: context bigger if-then machine

-- define our object -> superposition operator:
sa: ngrams |*> #=> letter-ngrams[1,2,3] |_self>

-- define our 2 if-then machines:
sa: seq |node 1> => ngrams |the cat sat on the mat>
sa: then |node 1> => |the cat sat on the mat>
sa: seq |node 2> => ngrams |the man on the moon>
sa: then |node 2> => |the man on the moon>

-- see what we have:
sa: dump
----------------------------------------
|context> => |context: bigger if-then machine>

ngrams |*> #=> letter-ngrams[1,2,3] |_self>

seq |node 1> => 5|t> + 2|h> + 2|e> + 5| > + |c> + 3|a> + |s> + |o> + |n> + |m> + 2|th> + 2|he> + 2|e > + | c> + |ca> + 3|at> + 2|t > + | s> + |sa> + | o> + |on> + |n > + | t> + | m> + |ma> + 2|the> + 2|he > + |e c> + | ca> + |cat> + 2|at > + |t s> + | sa> + |sat> + |t o> + | on> + |on > + |n t> + | th> + |e m> + | ma> + |mat>
then |node 1> => |the cat sat on the mat>

seq |node 2> => 2|t> + 2|h> + 2|e> + 4| > + 2|m> + |a> + 3|n> + 3|o> + 2|th> + 2|he> + 2|e > + 2| m> + |ma> + |an> + 2|n > + | o> + 2|on> + | t> + |mo> + |oo> + 2|the> + 2|he > + 2|e m> + | ma> + |man> + |an > + |n o> + | on> + |on > + |n t> + | th> + | mo> + |moo> + |oon>
then |node 2> => |the man on the moon>
----------------------------------------

-- now put it to use (we don't have drop-below[t], so effectively t = 0)
-- though first define a short-cut for our if-then machine:
sa: guess-phrase |*> #=> then similar-input[seq] ngrams |_self>

sa: guess-phrase |the >
0.381|the cat sat on the mat> + 0.37|the man on the moon>

sa: guess-phrase |the cat>
0.548|the cat sat on the mat> + 0.37|the man on the moon>

sa: guess-phrase |the cat sat>
0.717|the cat sat on the mat> + 0.356|the man on the moon>

sa: guess-phrase |the cat sat on the>
0.862|the cat sat on the mat> + 0.544|the man on the moon>

-- NB: I deliberately chose "the cat sat on the moon" to lift up the probability of "the man on the moon":
sa: guess-phrase |the cat sat on the moon>
0.866|the cat sat on the mat> + 0.675|the man on the moon>

sa: guess-phrase |the man on the>
0.805|the man on the moon> + 0.602|the cat sat on the mat>

-- now with the exact phrases:
sa: guess-phrase |the cat sat on the mat>
1.0|the cat sat on the mat> + 0.59|the man on the moon>

sa: guess-phrase |the man on the moon>
1.0|the man on the moon> + 0.59|the cat sat on the mat>
So, you have to read the coeffs above carefully, but if you do, you will see it works really quite well. Much better than a Boolean system of logic.