IBM has set its sights on a new era of ‘cognitive computing’ to make sense of the unprecedented volumes of noisy, unstructured data now generated across industry sectors.
Speaking at the University of Melbourne ahead of the launch of the company’s latest R&D facility, IBM research director John E Kelly III described the end of the 70-year era of programmable computing.
The next decade, he said, would yield exaflop devices that learned and recognised patterns in order to extract useful information from exascale (million-terabyte) data centres.
That cognitive computing era was glimpsed in February, he said, when IBM’s Watson question-answering system beat human opponents on the US quiz show, ‘Jeopardy!’.
Watson used about $3 million of hardware, drew 85kW of power and was customised to understand each question, search 500GB of local memory and make a decision within the game’s three-second time limit.
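As a loose illustration of that pipeline — parse the clue, search local data, score candidates and commit to an answer inside a hard time budget — the Python sketch below shows the general shape of such a loop. It is not IBM’s actual DeepQA code; every name and function in it is hypothetical.

```python
# Illustrative sketch only: a toy question-answering loop with a hard time
# budget, loosely mirroring the stages described above. All names are
# hypothetical and do not reflect IBM's DeepQA implementation.
import time

TIME_BUDGET_S = 3.0  # the game's roughly three-second window

def answer_clue(clue, index, parse, generate_candidates, score):
    """Parse the clue, gather candidate answers from a local index,
    and return the best-scored candidate found before time runs out."""
    deadline = time.monotonic() + TIME_BUDGET_S
    question = parse(clue)                                   # understand the question
    best_answer, best_score = None, float("-inf")
    for candidate in generate_candidates(question, index):   # search local data
        if time.monotonic() >= deadline:                     # respect the time limit
            break
        s = score(question, candidate)                       # estimate confidence
        if s > best_score:
            best_answer, best_score = candidate, s
    return best_answer, best_score
```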
Its opponents, Ken Jennings and Brad Rutter, meanwhile, didn’t study for the show; the information was simply “instantly there” in their brains, Kelly said, recounting conversations with the former Jeopardy! champions.
“They have no filing system, the answer is just there and they’ve done this for so long that they trust the information for the answer will be there when presented with a question.
“For those two human beings, the brain consumes only 20W [of power]. The incredible thing is it took 85kW to beat a 40W machine.
“We need to do bio-inspired computer science; we need to understand how this does what it does. If [computers] can do the type of reasoning we can do with the brain, then we can do some very interesting things.”
To highlight the gap between the capabilities of biological and digital systems, Kelly described a Blue Gene/P supercomputer that IBM built for the Lawrence Livermore National Laboratory in 2009.
The supercomputer, dubbed Dawn, had 147,456 processor cores and 144 TB of memory, and delivered some 500 teraflops, placing ninth on the Top500 supercomputer list that June.
But it was only “roughly” capable of simulating the 764 million neurons and 6.1 trillion synapses in a cat’s brain, Kelly said, referring to a November 2009 cognitive computing project by IBM.
Earlier this year, Salk Institute computational neuroscientist Terrence Sejnowski told Australian scientists that new power-efficient, parallel computing hardware was needed to support brain-like processing.
Although electronic signals travel far faster than nerve impulses, the brain works probabilistically, performing only the calculations most likely to produce the desired outcome and running many such calculations in parallel, Sejnowski explained.
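As a loose, non-neural illustration of that idea, the Python sketch below evaluates only a weighted sample of likely candidates, in parallel, rather than scoring every option exhaustively; all names in it are made up for the example.

```python
# Loose illustration only (not a neural model): rather than exhaustively
# evaluating every option, sample the candidates most likely to matter and
# evaluate those samples in parallel.
import random
from concurrent.futures import ProcessPoolExecutor

def expensive_evaluation(candidate):
    # Stand-in for whatever costly calculation a candidate requires.
    return candidate * candidate

def probabilistic_parallel_search(candidates, prior, n_samples=100, workers=4):
    """Weight candidates by a prior, sample the likely ones, and score the
    samples in parallel instead of scoring every candidate."""
    sampled = random.choices(candidates, weights=prior, k=n_samples)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(expensive_evaluation, sampled))
    return max(zip(sampled, scores), key=lambda pair: pair[1])

if __name__ == "__main__":
    options = list(range(1000))
    prior = [1.0 / (i + 1) for i in options]   # favour low-numbered options
    print(probabilistic_parallel_search(options, prior))
```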
Kelly expects IBM to be able to simulate the human brain’s 20 billion neurons and 200 trillion synapses on the exascale supercomputers of the next decade.
“That’s the easy part,” he said. “We do not understand how this is wired; we do not understand the fundamentals of how the neurons and synapses are behaving.
“All we know is that they’re not ones and zeros – they’re multi-state devices, and they learn over time and their behaviour changes based on what they have experienced.”
Under its SyNAPSE project, IBM is building arrays of electronic, synapse-like devices that aim to physically mimic the human brain.
The project delivered prototype silicon chips in August that contained 256 neurons and either 262,144 programmable synapses or 65,536 “learning synapses”.
The next step, Kelly said, was to replace components of the chip with materials that support multiple states, rather than just the traditional ones and zeros.
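The concept of a multi-state, experience-dependent synapse can be sketched in a few lines of Python. This is purely illustrative and bears no relation to IBM’s actual SyNAPSE hardware; the number of states, the learning rule and the names below are all assumptions for the example.

```python
# Conceptual sketch only, not IBM's SyNAPSE design: a "synapse" that holds one
# of a few discrete states rather than a single bit, and drifts between states
# as it experiences correlated activity (a crude Hebbian-style rule).
N_STATES = 4  # e.g. off, weak, medium, strong

class MultiStateSynapse:
    def __init__(self, state=0):
        self.state = state  # integer in [0, N_STATES - 1]

    def weight(self):
        """Map the discrete state onto an effective connection strength."""
        return self.state / (N_STATES - 1)

    def learn(self, pre_fired, post_fired):
        """Strengthen when both neurons fire together, weaken when only the
        presynaptic neuron fires; behaviour changes with experience."""
        if pre_fired and post_fired:
            self.state = min(self.state + 1, N_STATES - 1)
        elif pre_fired and not post_fired:
            self.state = max(self.state - 1, 0)

# Example: repeated co-activation drives the synapse toward its strongest state.
s = MultiStateSynapse()
for _ in range(3):
    s.learn(pre_fired=True, post_fired=True)
print(s.weight())  # -> 1.0
```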
“Between what we’re doing with computer modelling and what we’re doing with physical devices, we’re fundamentally on a journey [towards] these so-called learning systems,” he said.
A Watson for the enterprise?
Last month, IBM finalised an agreement with US health insurance company WellPoint to develop Watson-based technology using the insurer’s historical data on treatments and outcomes.
Kelly said the healthcare sector presented the “largest opportunity” for Watson, which could quickly assist doctors in diagnosing patients.
Decisions made by human doctors tended to be skewed by their experiences, he said, while Watson could make purely statistical decisions, “[wiping] the slate clean” before each question.
Kelly acknowledged that IBM faced challenges in acquiring and using healthcare data due to privacy restrictions in western countries.
"We have to deal with the privacy issues in each country," he said. "In the US and Australia, we need data to be sanitised and anonymous if we’re going to deal with it.
"In other places around the world, that’s not the case; in China, they’re very open to sharing healthcare data so we can do things there much more quickly and easily than we can do in the western world."
Pilots of the technology were “astonishing”, he said.
According to IBM research fellow Brenda Dietrich, enterprise-focused decision-making systems could produce better outcomes and reduce “decision fatigue” in organisations.
IBM was already working with the State of New York’s taxation department to automatically identify fraudulent or erroneous tax returns and recommend the most effective means of collecting money from debtors.
Real-world problems were far more complex than those Watson faced on television, Dietrich noted, adding that it could take five, ten, or even 20 years before a full decision-making system was enterprise-ready.
“Jeopardy! was simple; you were right or you were wrong. In medicine, you may not know if your treatment was accurate; we have to figure out how to deal with noisy feedback.”
But “I think the biggest bottleneck is human acceptance,” she said. “It’s gotten a whole lot better in the 25 years I’ve worked in this space, but it’s still not completely solved.”