Tuesday, September 21, 2004

On Intelligence


Jeff Hawkins has written a book called On Intelligence, subtitled How a new understanding of the brain will lead to the creation of truly intelligent machines.

I first met Jeff Hawkins in 1997 when I visited Palm Computing for a developer training session. At the time, I was one of a few independent software developers writing applications for the Pilot handheld computer (later known as the PalmPilot). Hawkins invented the PalmPilot and brought handheld computers into the mainstream. Jeff is smart and confident, yet he maintains a sort of informal humor that's very charismatic. He's what you might call an inspirational engineer. It's almost as if he's the kind of transcendent nerd that we mere mortal nerds aspire to be (Sorry, Jeff!). After reading this book, I'm ready to follow Jeff on a new adventure...

Hawkins has been interested in how the brain works for a long time. Over almost two decades, he has put together a model of its operation. On Intelligence describes this model in enough detail to make his theory testable.

According to Hawkins, the brain is continuously making predictions based on learned patterns. It then compares these predictions with the real world, enabling our brain to draw attention to things of interest in our environment. Memorized patterns also enable us to react quickly enough to catch a ball or run down stairs without falling over. Despite having relatively slow biological circuits in our heads, we can perform tasks that elude even the fastest computers.

Hawkins points out that the neocortex, where the higher functions reside, is highly generalized in structure. The regions of the neocortex that process hearing are functionally the same as those that handle vision or touch. The differences lie in the way these regions are wired to our senses and motor controls, and the way they have been trained through experience.

Hawkins presents a model in which the basic component of the neocortex is an auto-associative memory. Auto-associative memories are neural networks that can learn a pattern, then regurgitate the whole pattern when given just a piece of it. For example, suppose you have trained a collection of auto-associative memories so that each one recognizes a picture of a letter of the alphabet. If we now expose the collection to a picture of a dot above a line (i.e., the top half of the letter i or j), the memories for the letters i and j will both be activated. In other words, the memories of i and j are retrieved even though only part of each letter was seen. This is what lets us recognize a partially obscured object, something that is still very difficult for computer software to do today, even with billions of operations per second.
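To make the idea concrete, here is a minimal sketch of a classic auto-associative memory (a Hopfield-style network). This is my own toy illustration, not the specific circuit Hawkins proposes: it stores two made-up 8-bit "letter" patterns and recovers a complete pattern from a partial cue.

```python
# Toy Hopfield-style auto-associative memory. The patterns and names
# here are invented for illustration; this is not Hawkins's cortical model.
import numpy as np

def train(patterns):
    """Hebbian learning: superimpose the outer product of each stored pattern."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, cue, steps=10):
    """Iteratively settle from a partial cue toward the nearest stored pattern."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1  # break ties arbitrarily
    return state.astype(int)

# Two 8-element "letter bitmaps" in {-1, +1}.
i_pattern = np.array([ 1,  1, -1, -1,  1,  1, -1, -1])
j_pattern = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])
w = train(np.array([i_pattern, j_pattern]))

# Present only the "top half" of i; the unknown half is zeroed out.
cue = i_pattern.copy()
cue[4:] = 0
print(recall(w, cue))  # settles back to the complete i_pattern
```

Given just the top half, the network settles back to the full stored pattern, which is exactly the partial-cue completion described above.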

Auto-associative memories can learn to recognize patterns in an invariant way. For example, it doesn't matter what font or color I use to write this blog; you can recognize the letters and words independent of their size and color. Even changes in their geometry won't fool your brain. Your brain has invariant representations of letters that are triggered by their general shapes, not by specific instances of letters you have seen in a particular size or font.

Hawkins argues that a hierarchy of auto-associative memories creates representations of patterns that are more and more invariant with each layer. Eventually, you have an auto-associative memory unit at the top of the hierarchy that is triggered by higher-level concepts like "bee".

I'll try to describe in my own words how this might work. Perhaps one layer of auto-associative memories recognizes colors as yellow or black independent of minor color differences. Another layer recognizes the stripe pattern independent of the size of the stripes. Another layer hears the pitch of the bee's wings flapping. Another layer correlates the visual appearance with the auditory effect. Yet another layer remembers the sting of the bee. Finally, a layer correlates all of these effects together.

Now the magic of this setup is feedback. If you can trigger the top layer of the hierarchy, all the other layers light up at the same time. You think of a bee, and you see stripe patterns and recall the pain of being stung.
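Here is a toy sketch of that hierarchy-plus-feedback idea in code. The node structure, the "bee" features, and the firing threshold are all my own inventions for illustration; the book's actual cortical algorithm is far richer.

```python
# A toy hierarchy of pattern nodes with top-down feedback.
# Everything here is a simplified illustration, not the model from the book.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.active = False

    def recognize(self, evidence):
        """Bottom-up: a node fires when enough of its children fire."""
        if not self.children:                  # leaf: driven by raw input
            self.active = self.name in evidence
        else:                                  # tolerate one missing part
            fired = sum(c.recognize(evidence) for c in self.children)
            self.active = fired >= max(1, len(self.children) - 1)
        return self.active

    def imagine(self):
        """Top-down feedback: activating a concept lights up all its parts."""
        self.active = True
        for c in self.children:
            c.imagine()

stripes = Node("stripes")
buzz    = Node("buzz")
sting   = Node("sting")
bee     = Node("bee", [stripes, buzz, sting])

print(bee.recognize({"stripes", "buzz"}))  # True: partial evidence is enough
bee.imagine()
print(sting.active)                        # True: thinking "bee" recalls the sting
```

The last two lines are the feedback "magic": triggering the top node lights up the lower layers, just as thinking of a bee conjures up its stripes and its sting.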

Hawkins has devised more than just an abstract model of thought. He has a detailed model of how the various types of neurons work together to make all of this happen. He also backs up the abstract model with known mental phenomena. In the appendix, Hawkins describes several experimental predictions, allowing the scientific community to falsify the model.

Hawkins is quite critical of traditional, symbol-based AI. He describes how, in his model, the brain is able to outperform supercomputers, and why closing that gap is not just a matter of making faster computers. Still, Hawkins is optimistic about the prospects for building intelligent machines (he doesn't like to call them computers). He ends the book by inspiring the reader to make commercial and scientific use of this new model of intelligence.

The book was co-authored by Sandra Blakeslee, a New York Times science writer. It reads quite well, and there are a lot of examples to illustrate each point the authors want to get across. I found that there were actually too many such examples, but then, I was pretty impatient to get to the good bits!

I don't have enough knowledge of neuroscience to fault anything Hawkins said about physiology. As for his model, it is very compelling. Just last month, I cooked up a similar hierarchical model of intelligence based on some reading I did on artificial neural networks. It was interesting to see how the brain actually implements some of the feedback that my theoretical model relied upon.

There are several areas that I still have questions about. One question is about emotion and blood chemistry. How does the brain know what is important? Is it really just a matter of seeing patterns in the world that don't match predictions? Do chemical signals influence how the neocortex stores information? Body chemistry might provide the neurological equivalent of global variables in computer software. It seems to me that this global effect would be a natural way that evolution could provide motivation for learning and action. This seems to be missing from Hawkins's approach.
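In software terms, my speculation might look something like the sketch below: a single global "arousal" value that gates how strongly every memory update sticks. To be clear, this analogy is mine; nothing like it appears in the book.

```python
# Speculative sketch: a body-wide chemical signal as a "global variable"
# that scales learning. My own analogy, not part of Hawkins's model.
import numpy as np

class ModulatedMemory:
    def __init__(self, n):
        self.w = np.zeros((n, n))

    def learn(self, pattern, arousal):
        """Hebbian update gated by a global arousal level in [0, 1]."""
        self.w += arousal * np.outer(pattern, pattern)

mem = ModulatedMemory(8)
boring   = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])
alarming = np.array([ 1,  1, -1, -1,  1,  1, -1, -1])
mem.learn(boring,   arousal=0.1)  # calm moment: leaves a weak trace
mem.learn(alarming, arousal=1.0)  # bee sting: leaves a strong trace
```

The stronger trace left by the "alarming" pattern is the kind of motivational bias I suspect a complete model of intelligence would need.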

Reservations? I had a couple.

Hawkins offered some praise for Searle's Chinese Room argument. I find the argument unconvincing; it's analogous to arguing that humans aren't really intelligent because no single neuron in our brains understands anything by itself - it just transmits electrical impulses according to some simple rules. Hawkins sees true intelligence as a matter of "world simulation", so, to the extent that the Chinese Room does not do this, it is not intelligent by his definition. Yet, in order for the Chinese Room to give the correct answer, it must be doing such modeling anyway.

Finally, Hawkins argues that our intelligent machines will not have emotions. However, I'm not so sure that this will be the case. A general purpose intelligence based on this model might have to have emotion in order to be correctly motivated.

"Uncle Owen! This R2 unit has a bad motivator!"

I recommend this book to anyone interested in models of consciousness and intelligence (artificial or otherwise). The book is non-technical, but it's still detailed enough to get its concepts across.
