The gnomes at IBM’s research labs were not content merely to make a genius computer that could beat any human at the game of Jeopardy! They had to go and create a new kind of machine intelligence that mimics the actual human brain.
Watson, the reigning Jeopardy! champ, is smart, but it’s still recognizably a computer. This new stuff is something completely different. IBM is setting out to build an electronic brain from the ground up.
Cognitive computing, as the new field is called, takes computing concepts to a whole new level. Earlier this week, Dharmendra Modha, who works at IBM’s Almaden Research Center, regaled a roomful of analysts with what cognitive computing can do and how IBM is going about making a machine that thinks the way we do. Modha also writes about the work on his own blog.
First Modha described the challenges, which involve aspects of neuroscience, supercomputing, and nanotechnology.
The human brain integrates memory and processing together, weighs less than 3 lbs, occupies about a two-liter volume, and uses less power than a light bulb. It operates as a massively parallel distributed processor. It is event driven; that is, it reacts to things in its environment, using little power when active and even less while resting. It is a reconfigurable, fault-tolerant learning system. It is excellent at pattern recognition and at teasing out relationships.
A computer, on the other hand, has separate memory and processing. It does its work sequentially for the most part and is run by a clock. The clock, like a drum majorette in a military band, drives every instruction and piece of data to its next location: musical chairs with enough chairs. As clock rates increase to drive data faster, power consumption goes up dramatically, and even at rest these machines need a lot of electricity. More important, computers have to be programmed. They are hardwired and fault prone. They are good at executing defined algorithms and performing analytics.
With $41 million in funding from the Defense Advanced Research Projects Agency (DARPA), the scientists at the Almaden lab set out to make a brain in a project called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE).
The rough analogy between a brain and a computer posits roles for cell types — neurons, axons, and synapses — that correspond to machine elements — processors, communications links, and memory. The matches are not exact, as brain cells’ functions are less distinct from each other than the computer elements. But the key is that the brain elements all reside near each other, and activity in any given complex is stimulated by activity from adjacent complexes. That is, thoughts stimulate other thoughts.
Modha and his team set out to map and synthesize a wiring diagram for the brain, no trivial task, as the brain has 22 billion neurons and 220 trillion synapses. In May 2009, the team managed to simulate a system with 1 billion neurons, roughly the brain of a lower mammal, except that it ran at one-thousandth of real time, too slow to perform what Modha called “the four Fs”: food, fight, flight, and mating.
But the structure of this machine is entirely different from today’s commercial computers. The memory and processing elements are built close together. It has no clock. Operations are asynchronous and event driven; that is, they have no predetermined order or schedule. And instead of being programmed, they learn. Just like us.
Part of getting the power down to brain-like levels is not storing temporary results (caching, in industry jargon). Sensing stimulates action, which is sensed and acted upon further. And so on.
The team recently built a smaller hardware version of the brain simulation, one with just 256 neurons, 262,000 programmable synapses, and 65,000 learning synapses. The good news is that this machine runs within an order of magnitude of the power that a real brain consumes. With its primitive capabilities, this brainlette is capable of spatial navigation, machine vision, pattern recognition, and associative memory and can do evidence-based hypothesis generation. It has a “mind’s eye” that can see a pattern, for example, a badly written number, and generate a good guess as to what the actual number is. Already better than our Precambrian ancestors.
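The “mind’s eye” trick, recovering a clean pattern from a badly written one, can be illustrated with a classic Hopfield-style associative memory. To be clear, this is a textbook technique standing in for the chip’s actual circuitry, which this article doesn’t detail: a stored pattern lives in the “synaptic” weights, and presenting a corrupted version lets the network settle back to the original.

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule: weights store the patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, state, steps=5):
    """Repeatedly settle toward the nearest stored pattern."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

# Store a crude 'T' shape on a 3x3 grid (+1 = ink, -1 = blank) ...
T = np.array([1, 1, 1, -1, 1, -1, -1, 1, -1])
w = train(T[None, :])

# ... then present a "badly written" version with two pixels flipped.
noisy = T.copy()
noisy[0] = -1
noisy[8] = 1
recovered = recall(w, noisy)  # settles back to the clean 'T'
```

Associative recall like this is content-addressed: the memory is retrieved by a partial or noisy version of itself, rather than by an address, which is much closer to how the brain passage above describes memory working.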
Modha pointed out that this type of reasoning is a lot like that of a typical right hemisphere in the brain: intuitive, parallel, synthetic. Not content with half a brain, Modha envisions adding a typical von Neumann-type computer, which acts more like a reasoning left hemisphere, to the mix, and having the two share information, just like a real brain.
When this brain is ready to go to market, I’m going to send my own on holiday and let Modha’s do my thinking for me.
Oh, and, by the way, in case you were wondering whether the SyNAPSE project has caused Watson to be put out to pasture, nothing could be further from the truth. Watson is alive and well and moving on to new, more practical applications.
For example, since Jeopardy! contestants can’t “call a friend,” Watson was constrained to the data that could be loaded directly into the machine (no Internet searches). But in the latest application of Watson technology, medical diagnosis, the Internet is easily added to the corpus within the machine, allowing Watson to search a much wider range of unstructured data before rendering an answer.
Watson had to hit the buzzer faster than the human contestants, but doctors seeking advice on a strange set of symptoms can easily wait a half hour or longer. So Watson can make more considered choices. Watson at work is a serious tool.
All this genius is causing my brain to explode.
Disclosure: Endpoint has a consulting relationship with IBM.
© 2011 Endpoint Technologies Associates, Inc. All rights reserved.
Source: Forbes - Cognitive Computing
Roger Kay, Contributor
I cover endpoints and their interrelation with the cloud.