Can Computers Think Like Humans?
Palm Computing founder Jeff Hawkins has developed a controversial theory of how the brain works, and he's using it to build a new breed of computers.
CIO INSIGHT: In your book, On Intelligence, you claim to have discovered a new understanding of how the brain works—and how machines can be built to model the brain. It's a powerful idea, but also a controversial one. Can you explain the essence of your theory?
HAWKINS: First of all, the theory explains how the neocortex works—not the entire brain. The neocortex makes up roughly half of a human brain; it's where all high-level thought and perception take place. It's the place where you perceive the world. And it's a type of memory system, though it differs from a computer's in that it is composed of a tree-shaped hierarchy of memory regions that store how patterns flow over time, like the notes in a melody. We call this Hierarchical Temporal Memory (HTM).
Computers must be programmed to solve problems, but HTM systems are self-learning. When exposed to patterns from sensors (just as the neocortex receives information from the eyes and ears), HTMs automatically build a model of the world. With this model, an HTM can recognize familiar objects and predict the future. So we're not claiming to build brains here. We are building things that we think can do what half of a human brain does.
How have people reacted to your hypothesis?
If you said to someone that you want to figure out how the brain works and then build machines that work the same way, most people would laugh at you. They'd say it's ridiculous, that people have been trying for decades and haven't made any progress. But it isn't ridiculous. Why shouldn't we be able to figure out how brains work? We understand how kidneys work, and how other organs work, so why not the brain? In fact, it ought to be pretty straightforward. It's only our ignorance that makes things look hard.
So the response to the book has been mixed. We've had a stream of business-oriented researchers who want to talk about it, and several prominent scientists who think this is a landmark book. Many other scientists have dismissed our theory. But a gentleman named Dileep George, who was working at the Redwood Neuroscience Institute [as a graduate research fellow], actually came up with a mathematical formulation for the biological theory in the book. And he did a convincing enough job that we're certain it can be built to solve practical problems. So we started a company called Numenta. Its focus is essentially on building a platform—like an operating system, but different.
What do you expect this platform will be able to do?
We believe that we have come up with a new algorithm, a new way of computing—though it isn't a computer. It's a new way of processing information. HTMs essentially do three things. First, they discover how the world works by sensing and analyzing the world around them. Second, they recognize new inputs as part of that model, which we call pattern recognition. Finally, they make predictions of what will happen in the future. We think we can build machines that are in some sense smarter than humans, that have more memory, that are faster and can process data nonstop, because they use hierarchical and temporal data to predict outcomes—the same way the human brain works.
Now, what do we mean by hierarchical? Well, there's a hierarchical nature to many things—weather and markets and businesses and biological organisms are all structured hierarchically. When you're born, you know almost nothing. Then, over time, you get sensory inputs. Over a period of years, these inputs help you build a hierarchical model of the world. So you start to understand things like words and sentences, chairs and computers and ideas.
Businesses are hierarchical, too—not just in how people's roles are structured, but in how the different parts of a business interact. Let's say I was looking at the manufacturing side of a business, and I wanted to know why a certain metric, such as yield, is going down. Chances are, it's correlated with something else going on nearby, maybe something going on in the supply chain or something like that. It's probably not going to be related, at that level, to something like the rate we pay for advertising. A human would look at that data and try to find the underlying causes, come to a conclusion, and then act upon it.
That's what our systems can do. If there's really an underlying cause to the problem, the goal of the HTM system is to find it. You take some data from some kind of system—visual or financial, it doesn't matter. You feed it into the system's hierarchical temporal memory, and over time it builds a model of underlying causes.
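The learn/recognize/predict loop Hawkins describes can be caricatured with a toy first-order sequence memory. This is a sketch only—Numenta's actual HTM algorithm is far more sophisticated—and all names here (`ToySequenceMemory`, the weather labels) are illustrative inventions, not anything from Numenta's platform:

```python
from collections import defaultdict, Counter

class ToySequenceMemory:
    """A toy sequence memory: learns which pattern tends to follow
    which, then predicts the next pattern. A caricature of the
    learn/recognize/predict loop, not Numenta's real HTM."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # pattern -> counts of successors
        self.seen = set()                        # patterns that are part of the model

    def learn(self, sequence):
        # "Discover how the world works" by watching patterns flow over time.
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1
            self.seen.update((current, nxt))

    def recognize(self, pattern):
        # Is this input part of the learned model?
        return pattern in self.seen

    def predict(self, pattern):
        # Most frequently observed successor, or None for novel input.
        followers = self.transitions.get(pattern)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

memory = ToySequenceMemory()
memory.learn(["storm-front", "cold-day", "clear-day", "storm-front", "cold-day"])
print(memory.recognize("storm-front"))  # True
print(memory.predict("storm-front"))    # cold-day
```

Even this trivial version shows the shape of the idea: the model is built purely from exposure to data over time, with nothing programmed in about weather.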
How is that different from a traditional computer?
It's very different. You have to tell a traditional computer what to look for. A big parallel computer that's modeling fluid dynamics—like the weather or a jet engine, for example—tries to model each element, each particle or cubic volume of air. That's just solving mathematical equations. Humans don't operate that way. We don't predict the future by looking at every molecule. We look at problems and seek out high-level causes. We say to ourselves, "I noticed that whenever a storm front comes, there's usually a cold day the next day." As a result, we have these concepts called storms and hurricanes—high-level concepts we have been able to deduce by looking at low-level data.
That's what our HTM technology will try to do: discover the underlying causes in the world. If you hook the system up to the right data and expose that system to the data over a long enough period of time, it can build a model of that environment, just like a human brain does. It will automatically come up with a way of representing the world just like humans do, and draw conclusions based on that model.
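The "underlying causes" idea—low-level sensory data pooled into higher-level concepts like storms—can be sketched as a toy two-level hierarchy. In real HTM the groupings at each level are learned from data; here they sit in a hand-written table, and every name below is a hypothetical illustration:

```python
# Toy hierarchy: a higher level replaces a window of low-level
# observations with a single high-level cause label, so the top of
# the tree sees coarser, slower-changing patterns. A sketch only;
# HTM nodes learn these groupings rather than using a fixed table.

LEVEL1 = {
    ("dark-sky", "falling-pressure"): "storm-front",
    ("sun", "rising-pressure"): "fair-weather",
}

def abstract(observations, table):
    """Map consecutive pairs of low-level observations to high-level causes."""
    labels = []
    for i in range(0, len(observations) - 1, 2):
        pair = (observations[i], observations[i + 1])
        labels.append(table.get(pair, "unknown"))
    return labels

raw = ["dark-sky", "falling-pressure", "sun", "rising-pressure"]
print(abstract(raw, LEVEL1))  # ['storm-front', 'fair-weather']
```

Stack several such levels and the topmost one reasons about concepts like "storm" without ever seeing a raw sensor reading—the sense in which the system "discovers high-level causes by looking at low-level data."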
It's compelling, but how do you know it can be done?
If you go back 50 or 60 years, when they were building the very first computers, people knew that a computer could be built, even though they didn't have transistors or circuits or hard disks. It's the same thing in this case, though we hope to build it much faster.
We did a prototype before we launched Numenta. It wasn't designed to do anything really useful, but our HTM system solved the very difficult problem of pattern recognition, which no one else has been able to solve.
What was the problem?
When you look at a picture of, say, a cat, there's an almost infinite number of variations of what a cat might look like. Humans have no problem recognizing any of them as a cat. Computers, on the other hand, can't do that. I know a scientist who proposed that the grand challenge of vision research is to be able to have a computer that can distinguish a picture of a cat from that of a dog. That tells you where the state of the art is in computer vision—it's gone nowhere.
We built a machine that solved that issue. They're not impressive-looking pictures, mind you, just silly little line drawings of cats and dogs. Nothing realistic like you'd recognize in a photograph. But our model shows these things can be done. Now we are in the process of building a sophisticated, large-scale tool set that will allow people to build systems that can deal with real-world data and the large volume of data that comes from real-world problems. The kind of systems we're building work just like a human brain—one that lives, breathes and eats manufacturing data or financial-market information 24 hours a day, and never gets tired of it.