Emerging Tech: Jeff Hawkins reinvents artificial intelligence
By Edward H. Baker | Posted 05-01-2006
Jeff Hawkins has a lot on his mind, not least a new theory about how the brain works. And he's confident his theory will change the entire computing industry.
In Silicon Valley, Hawkins is best known as the founder of Palm Computing Inc. and Handspring Inc., and as the mastermind behind the Palm Pilot and Treo line of smartphones. But a second passion predates Hawkins' fondness for the wireless world: He's nuts about the brain. Hawkins began his career at Intel, in 1979, and while there he made an unsuccessful bid to convince then-Chairman Gordon Moore to launch a research group on neurology and artificial intelligence. Still, Hawkins didn't let that initial setback diminish his enthusiasm, and, in fact, he's spent much of the past quarter century studying the physiology, philosophy and psychology of the brain, even entering a Ph.D. program in biophysics at the University of California at Berkeley, in 1986.
In 2002, Hawkins founded the Redwood Neuroscience Institute (now known as the Redwood Center for Theoretical Neuroscience at the University of California at Berkeley) as a means to develop a rigorous theory of how the human neocortex works. There, he developed a new theory: that the brain makes predictions about the world through pattern recognition and memory, recalling event sequences and their "nested" relationships. Hawkins calls his theory the "memory-prediction framework," and he believes it is the missing link in creating truly intelligent machines. Hawkins published his ideas in his 2004 book, On Intelligence, which he coauthored with New York Times science writer Sandra Blakeslee. And in March 2005, Hawkins founded Numenta Inc., a privately held company in Menlo Park, Calif., that seeks to build intelligent machines based on the theories set forth in his book.
Senior Reporter Debra D'Agostino and Editor Edward Baker spoke with Hawkins about the plausibility of his memory-prediction framework, how it might be translated into software and applied to complex problems - and what that could mean for business. An edited version of their conversation follows.
Can Computers Think Like Humans?
CIO INSIGHT: In your book, On Intelligence, you claim to have discovered a new understanding of how the brain works—and how machines can be built to model the brain. It's a powerful idea, but also a controversial one. Can you explain the essence of your theory?
HAWKINS: First of all, the theory explains how the neocortex works—not the entire brain. The neocortex makes up roughly half of a human brain; it's where all high-level thought and perception take place. It's the place where you perceive the world. And it's a type of memory system, though it's different from that of a computer in that it is composed of a tree-shaped hierarchy of memory regions that store how patterns flow over time, like the notes in a melody. We call this Hierarchical Temporal Memory (HTM).
Computers must be programmed to solve problems, but HTM systems are self-learning. By exposing them to patterns from sensors (just like the neocortex receives information from the eyes and ears), HTMs can automatically build a model of the world. With this model, an HTM can recognize familiar objects and use that data to predict the future. So we're not claiming to build brains here. We are building things that we think can do what half of a human brain does.
How have people reacted to your hypothesis?
If you said to someone that you want to figure out how the brain works and then build machines that work the same way, most people would laugh at you. They'd say it's ridiculous, that people have been trying for decades and haven't made any progress. But it isn't ridiculous. Why shouldn't we be able to figure out how brains work? We understand how kidneys work, and how other organs work, so why not the brain? In fact, it ought to be pretty straightforward. It's only our ignorance that makes things look hard.
So the response to the book has been mixed. We've had a stream of business-oriented researchers who want to talk about it, and several prominent scientists who think this is a landmark book. Many other scientists have dismissed our theory. But a gentleman named Dileep George, who was working at the Redwood Neuroscience Institute [as a graduate research fellow], actually came up with a mathematical formulation for the biological theory in the book. And he did a convincing enough job that we're certain it can be built to solve practical problems. So we started a company called Numenta. Its focus is essentially on building a platform—like an operating system, but different.
What do you expect this platform will be able to do?
We believe that we have come up with a new algorithm, a new way of computing—though it isn't a computer. It's a new way of processing information. HTMs essentially do three things. First, they discover how the world works by sensing and analyzing the world around them. Second, they recognize new inputs as part of their model, which we call pattern recognition. Finally, they make predictions of what will happen in the future. We think we can build machines that are in some sense smarter than humans, that have more memory, that are faster and can process data nonstop, because they use hierarchical and temporal data to predict outcomes—the same way the human brain works.
Now, what do we mean by hierarchical? Well, there's a hierarchical nature to many things—weather and markets and businesses and biological organisms are all structured hierarchically. When you're born, you know almost nothing. Then, over time, you get sensory inputs. Over a period of years, these inputs help you build a hierarchical model of the world. So you start to understand things like words and sentences, chairs and computers and ideas.
Businesses are hierarchical, too—not just in the way people's roles are structured, but in how the different parts of a business interact. Let's say I was looking at the manufacturing side of a business, and I wanted to know why a certain metric, such as yield, is going down. Chances are, it's correlated with something else going on nearby, maybe something going on in the supply chain or something like that. It's probably not going to be related, at that level, to something like the rate we pay for advertising. A human would look at that data and try to find the underlying causes, come to a conclusion, and then act upon it.
That's what our systems can do. If there's really an underlying cause to the problem, the goal of the HTM system is to find it. You take some data from some kind of system—visual or financial, it doesn't matter. You feed it into the system's hierarchical temporal memory, and over time it builds a model of underlying causes.
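The learn/recognize/predict cycle Hawkins describes can be illustrated with a toy sketch. This is not Numenta's technology: a real HTM is hierarchical and vastly more sophisticated, while this single-level example (all names are invented for illustration) merely counts how often one pattern follows another in a stream, which is enough to show the three functions in miniature.

```python
from collections import Counter, defaultdict

class ToySequenceMemory:
    """A deliberately simplified, single-level stand-in for the
    learn/recognize/predict cycle described in the interview."""

    def __init__(self):
        # pattern -> counts of which pattern came next
        self.transitions = defaultdict(Counter)

    def learn(self, stream):
        """'Discover how the world works' by watching patterns flow over time."""
        for current, nxt in zip(stream, stream[1:]):
            self.transitions[current][nxt] += 1

    def recognize(self, pattern):
        """Is this input part of the learned model?"""
        return pattern in self.transitions

    def predict(self, pattern):
        """Guess the most likely next pattern, or None if the input is unknown."""
        counts = self.transitions.get(pattern)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

memory = ToySequenceMemory()
# Echoing the weather example later in the interview: storms tend to
# be followed by cold days.
memory.learn(["storm", "cold", "clear", "storm", "cold", "clear"])
print(memory.recognize("storm"))  # True
print(memory.predict("storm"))    # cold
```

After training, the model recognizes "storm" as a familiar pattern and predicts "cold" as its likely successor; a real HTM stacks many such memories into a hierarchy so that higher levels learn slower-changing, more abstract causes.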
How is that different from a traditional computer?
It's very different. You have to tell a traditional computer what to look for. A big parallel computer that's modeling fluid dynamics—like the weather or a jet engine, for example—tries to model each element, each particle or cubic volume of air. That's just solving mathematical equations. Humans don't operate that way. We don't predict the future by looking at every molecule. We look at problems and seek out high-level causes. We say to ourselves, "I noticed that whenever a storm front comes, there's usually a cold day the next day." As a result, we have these concepts called storms and hurricanes—high-level concepts we have been able to deduce by looking at low-level data.
That's what our HTM technology will try to do: discover the underlying causes in the world. If you hook the system up to the right data and expose that system to the data over a long enough period of time, it can build a model of that environment, just like a human brain does. It will automatically come up with a way of representing the world just like humans do, and draw conclusions based on that model.
It's compelling, but how do you know it can be done?
If you go back 50 or 60 years, when they were building the very first computers, people knew that a computer could be built, even though they didn't have transistors or circuits or hard disks. It's the same thing in this case, though we hope to build it much faster.
We did a prototype before we launched Numenta. It wasn't designed to do anything really useful, but our HTM system solved the very difficult problem of pattern recognition, which no one else has been able to solve.
What was the problem?
When you look at a picture of, say, a cat, there's almost an infinite number of variations of what a cat might look like. Humans have no problem recognizing any of them as a cat. Computers, on the other hand, can't do that. I know a scientist who proposed that the grand challenge of vision research is to be able to have a computer that can distinguish a picture of a cat from that of a dog. That tells you where the state of the art is in computer vision—it's gone nowhere.
We built a machine that solved that issue. They're not impressive-looking pictures, mind you, just silly little line drawings of cats and dogs. Nothing realistic like you'd recognize in a photograph. But our model shows these things can be done. Now we are in the process of building a sophisticated, large-scale tool set that will allow people to build systems that can deal with real-world data and the large volume of data that comes from real-world problems. The kind of systems we're building work just like a human brain—one that lives, breathes and eats manufacturing data or financial-market information 24 hours a day, and never gets tired of it.
What's the value of such machines?
That's kind of like asking what's the value of building a computer. They asked that question 50 years ago, and someone said, "Well, we can do military ballistic tables"—that's what they first did with computers—or "We might be able to tabulate the accounts of a business." We have a lot of ideas as to how HTMs will be used. The obvious ones are things that humans do. Take vision. There's a crying need for machines that can look at things and know what they're looking at. Right now, when you search on Google for images, the results depend on descriptions someone has written of what those images are. Speech recognition is another problem people have been trying to solve for a long time, and they haven't done a very good job at it. You don't want to do speech recognition; you want to do speech understanding, which is closer to what we're doing.
Automakers are building cars with lots of sensors. They want to know if a dangerous situation is occurring on the road around the car, or if the driver is getting drowsy, and whether the car should warn the driver, or slow down. It sounds easy, but it's actually very difficult to take this data and interpret it. Humans can do it, but in general, there are no machines that go about this the way humans do, no machines that have the insight of a human being. It's sort of the difference between playing chess with a human and playing chess with a computer. The computer can beat the human by being fast and using brute force, but it doesn't really understand what it's doing. Humans have deep intuition and understanding; they have a deeper model of what's going on in front of them.
Meanwhile, some of our customers are looking at complex manufacturing processes. There's all this data, and people sit there and sort through that data to figure out patterns and try to understand what causes the yield to rise and fall. This machine can do that. It can look at disparate data and build a model of how manufacturing lines work and how the yield is affected by various things.
Why has it taken this long to get to a point where we can start talking about actually making these intelligent predictions?
When people ask me about the success of the Palm Pilot, I always point out that there was nothing new in it. There wasn't a single piece of new anything in that product. What was new was the understanding of how to put the ingredients together. That's what we're doing here. We have a deeper understanding of how the brain works, and we can take a little bit of this and a little bit of that, and model it.
But it's a hard problem. First, we had to collect lots of information about the brain so people could sit down and figure out a theory about how the brain works. Before that, I don't think you could have figured this out. Also, our platform requires lots of memory, and a lot of CPU horsepower. So ten years ago we probably couldn't have done it. But today, we can.
One of the reasons I started Numenta is because I want to bring the urgency of economic markets to this scientific problem. From that point of view, it's a bit like the human genome project. The sequencing of the human genome began purely as an academic thing, and it was going to take a decade to complete until someone turned it into a business. Then it ended up taking about 18 months, because suddenly there was profit involved. I'm consciously trying to promote this understanding of the brain, and I'm going to make it happen faster by providing economic incentives for people to work on it.
It's a daunting task, but I think the hardest part is behind us. That's when I was pursuing this without a name, without any money; I was just some guy trying to do this stuff on his own. Now, after years of working in this field, I have a lot of experience, and it's starting to come together. We understand it well enough that I can say confidently that this stuff will happen. And I am certain that over the years we will need to create a whole new set of programming tools and hardware. The earliest Numenta could release its first tool set is the end of 2006. We know exactly what we need to do; it's just a matter of turning the crank.
Your theory presupposes that consciousness plays no part in the decision-making process, a notion to which many people object. What if your theory is wrong?
It's true that we haven't actually proven any of this stuff. We built a small model that shows it can work, and we understand the theory quite well, but we haven't actually built a system that does any of the things we're talking about. But I would be really, really surprised if the brain doesn't work like this. It's clear to me that what we are building will work. I am as certain about this as I can be about anything.