Neuromorphic computing and the future of artificial cognition.

In 2008, the U.S. Defense Advanced Research Projects Agency (DARPA) issued a challenge to researchers: Create a sophisticated, shoebox-size system that incorporates billions of transistors, weighs about three pounds, and requires a fraction of the energy needed by current computers. Basically, a brain in a box.

Computers, despite decades of advances, still couldn’t solve real-world challenges that people confront every day, the agency said. Continued improvement on the same trajectory wasn’t going to move the needle significantly. DARPA concluded that it needed “a new path forward for creating useful, intelligent machines.”

The human brain is an ideal model. It’s small and efficient, and it can process many types of inputs almost instantly. Computers have been compared to brains since the dawn of computer science. More recently, researchers have started to design computers that are modeled on how the brain works, a field now known as neuromorphic computing.

Researchers anticipate that brain-like systems could quickly handle tasks that currently take even specially trained computers several hours—tasks humans find trivial, like counting the number of people who appear in a video.

Computers that follow neuromorphic principles to read social cues or extract meaning from images could provide real-time commentary on news events. They could help diagnose elusive medical conditions earlier, and detect sophisticated computer security attacks within seconds. Practice makes perfect: The more often these computers do a task, the faster and more accurate they’d become—without requiring human intervention.

Black box

Although neuroscience has made important strides in recent years, the inner workings of the brain are still largely a mystery. “So little is really understood about the hardware of the brain—the neurons and their interconnections, and the algorithms that run on top of them—that today, anyone who claims to have built ‘a brain-like computer’ is laughable,” says Stan Williams, a research fellow at Hewlett Packard Labs.

Programs mirror human logic, but they don’t mirror intuitive thought.

Rich Friedrich, Hewlett Packard Labs

Nonetheless, Labs researchers examined the DARPA challenge. They concluded that a brain in a box wasn’t achievable, given the time and resources available. However, they also saw significant improvements they could make to computing in the near term, ones that would preview progress to come: software that solves problems in a more brain-like way, and a computer architecture that adapts some of the known features of the brain’s structure.

The efforts are all the more critical because the standard computer designs that have been in use for the past 70 years won't scale forever.

Moore’s Law, the observation that transistor density—and thus computing power—doubles roughly every two years, is nearing the end of its useful life. Transistors are nearly as small as they will get using today’s materials. Recent gains in computing power have relied largely on adding more processor cores to a chip and requiring them to operate in parallel, which requires more power and cooling.

“The challenge in the future is not more transistors per unit area, but rather more computation per unit of energy,” Williams says.

Video: Building a brain-like computer

Mind and matter

Brains can’t compete with computers when it comes to arithmetical operations. Over time, programmers have reduced brain-intensive tasks like animating images and playing chess to a series of mathematical calculations.

Yet computers still struggle with many cognitive functions that are easy for people. With some training, for example, computers can recognize and accurately label a cat and a stool. But they have difficulty narrating a video of a cat jumping over a stool.

“To get a machine to narrate a video—it’s apparently harder than rocket science,” says Ennio Mingolla, a Northeastern University professor who studies neural network models of visual perception.

The challenge highlights the differences between recognition and perception—differences that neuromorphic computing approaches are trying to address.

Recognition is a relatively straightforward cognitive task, and programmers have largely figured out how to reproduce it in computers. With an algorithm and some training, a computer can distinguish a cat from a stool or scan products on a manufacturing line, flagging defects or abnormalities.
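
To make the distinction concrete, here is a minimal sketch of recognition as a trained classifier, in plain Python and with invented feature values rather than anything from Labs’ software: a handful of labeled examples are averaged into class centroids, and a new image’s features are assigned to whichever centroid is closest.

    # Minimal sketch of recognition: train on labeled feature vectors, then
    # label a new one by its nearest class centroid. The feature values and
    # labels below are invented for illustration.
    import numpy as np

    features = np.array([
        [0.9, 0.1, 0.8],   # feature vectors from images labeled "cat"
        [0.8, 0.2, 0.7],
        [0.1, 0.9, 0.2],   # feature vectors from images labeled "stool"
        [0.2, 0.8, 0.1],
    ])
    labels = np.array(["cat", "cat", "stool", "stool"])

    # "Training": average the examples of each class into a centroid.
    centroids = {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

    def classify(x):
        """Assign a new feature vector to the nearest class centroid."""
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

    print(classify(np.array([0.85, 0.15, 0.75])))  # -> cat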

Perception is more challenging to mimic. It requires navigating or interacting with a busy and changing visual field. Walking through a crowded room without bumping into anything, narrating a video—these are tasks that require perception in addition to recognition. The brain has to “do a kind of triage to find out which subsection of all this data is even interesting to look at,” says Mingolla, who works with Labs researchers on projects that test perceptual algorithms.

Video: How nature can guide researchers

These moments of perception have to do with “grouping and sorting activity, rather than the ‘Aha, I found it’ moments of recognition,” Mingolla adds.

Programmers struggle to write algorithms that allow computers to perceive things easily, in part because the neurological processes behind perception aren’t entirely clear. The brain contains an estimated 86 billion cells, called neurons, interconnected by a web of pathways called synapses. Researchers have determined that the same groups of neurons and synapses that perform calculations are also responsible for storing memories. They’re also capable of handling many disparate tasks in parallel.

Computers don’t work the way neurons and synapses do. Computers and applications are typically designed to solve problems or execute tasks in a sequential manner, passing information back and forth between a processor and memory at each step.

“Programs mirror human logic, but they don’t mirror intuitive thought,” says Rich Friedrich, director of systems software at Hewlett Packard Labs. “The brain is inherently massively parallel and probabilistic.” It can draw reasonable inferences and conclusions based on very small amounts of information.

No single quality or technology defines neuromorphic computing. As researchers learn more about the way the brain works, they experiment with how to implement those ideas in hardware and software.

One of the emerging tenets of neuromorphic computing is parallel processing, or dividing big problems into smaller tasks that can be executed simultaneously, like sheets of neurons firing at the same time in the brain.

Parallel processing, which has been around for decades, has been used primarily for specialized, high-performance computing tasks. Neuromorphic research aims to bring parallelism into everyday computing to tackle big computing tasks faster.
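
As a loose illustration (ordinary Python, not neuromorphic software), the pattern looks like this: one large job is split into independent chunks, the chunks are processed at the same time, and the partial results are combined at the end.

    # Sketch of parallel processing: split one big task into chunks that run
    # simultaneously on separate processes, then combine the partial results.
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]          # four independent sub-tasks
        with ProcessPoolExecutor(max_workers=4) as pool:
            total = sum(pool.map(partial_sum, chunks))   # chunks processed in parallel
        print(total)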

Cognition from the machine

Sitting in a conference room at Labs’ offices in Palo Alto, Calif., Friedrich holds up a smartphone, his hand covering the logo. “To you and me, it’s obvious this is an iPhone,” he says. “To a computer with only a partial view, it’s not so easy.”

This is the sort of problem Friedrich’s group has been working on for the past several years.

Video: Biologically influenced systems architectures

Labs wound down its involvement in SyNAPSE, the DARPA program behind the brain-in-a-box challenge, in 2011, but the researchers continued using the software simulator and compiler they had developed for a software development platform called CogX, derived from the pseudo-Latin cog ex machina, or “cognition from the machine.”

CogX allows non-specialist developers to write software that takes advantage of parallelism, mimicking how the brain uses many neurons to solve multiple aspects of a problem at the same time. CogX runs on affordable hardware that can be built today.

In April, HPE announced a public, open-source version of the platform called the Cognitive Computing Toolkit. Instead of relying on the traditional CPUs that power most computers, the Toolkit runs on graphics processing units (GPUs), inexpensive chips designed for video game applications.

“My sons play Halo,” says Labs researcher Dick Carter. “All those explosions, smoke, debris flying everywhere—GPUs were built to process these kinds of visuals.”

A key quality of GPUs is that they can simultaneously process multiple visuals. Rendering an image on one corner of the screen doesn’t depend on first processing the other corner. GPUs, which over time have become less specialized and more programmable, are also great for breaking up nonlinear computing problems into discrete, simultaneous tasks.

In December, HPE demonstrated software that extracts corporate logos from multiple video streams in real time. If a company buys TV advertising during a sports event, that company will want to verify that all the ads it purchased were actually shown in full. A CogX application monitors 25 video streams simultaneously, noting each appearance of the advertiser’s logo and immediately informing the company whether the contract has been fulfilled. The application has learned to recognize 1,000 logos, all without requiring a programmer to write a single line of code.
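
A rough sketch of that monitoring pattern is below; it is hypothetical, not HPE’s application. The detect_logos function, the KNOWN_LOGOS set, and the placeholder frame descriptions all stand in for a learned recognizer and real video frames, and ordinary Python threads stand in for CogX’s parallelism.

    # Hypothetical sketch: watch many streams at once and note each logo
    # sighting. detect_logos() stands in for a learned recognizer; here it
    # just scans placeholder text descriptions of frames.
    from concurrent.futures import ThreadPoolExecutor

    KNOWN_LOGOS = {"acme", "globex"}        # stand-ins for the learned logos

    def detect_logos(frame):
        return [logo for logo in KNOWN_LOGOS if logo in frame]

    def monitor(stream_id, frames):
        sightings = [(t, logo) for t, frame in enumerate(frames)
                     for logo in detect_logos(frame)]
        return stream_id, sightings

    streams = {i: ["crowd shot", "acme billboard", "replay"] for i in range(25)}
    with ThreadPoolExecutor(max_workers=25) as pool:
        for sid, found in pool.map(lambda item: monitor(*item), streams.items()):
            print(f"stream {sid}: {found}")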

In addition to visual perception, the project showcased a second neuromorphic principle: deep learning, a computing technique in which a system iteratively builds a more accurate understanding of the relationships in a set of data.
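
In miniature, that iterative refinement looks like the sketch below: a single-parameter model is nudged repeatedly until its predictions fit the example data. Real deep learning stacks many layers of parameters, but the loop of predict, measure error, adjust is the same; the numbers here are invented.

    # Sketch of iterative learning: each pass adjusts the model's parameter
    # so its predictions fit the examples a little better.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, target) pairs, roughly y = 2x
    w = 0.0                                        # the single model parameter

    for step in range(200):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.01 * grad                           # nudge w to reduce the error

    print(round(w, 2))  # settles near 2.0 after repeated refinement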

Opening von Neumann’s bottleneck

Say you’re performing a two-step mathematical function, such as taking the average of two numbers. The first step is to find the sum. A traditional computer would write this sum into memory and then perform the second step, division.

Now imagine this same scenario playing out over hundreds of thousands of computations. The computer must shuttle data between memory and processing, slowing down the overall operation of the system. This is the famous “von Neumann bottleneck,” named for John von Neumann, who in 1945 described the basic architecture that computers still use today.

“In the brain, the same cells hold memories and perform computing,” Williams says. As a result, our brains don’t waste time or expend energy shuttling information between separate memory and processing units, avoiding the delay engineers call latency.

CogX reduces latency by using techniques that mimic brain functions. For example, kernel fusion minimizes the number of times the system writes information into the computer’s memory. To solve the two-step calculation above, a program developed in CogX might use the register file, a tiny bit of memory placed directly on the processor, to store the intermediate sum. Only the final average would be written to system memory.
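
The difference can be sketched in plain Python (a loose analogy, not CogX code): the unfused version writes the full list of intermediate sums to memory and reads it back, while the fused version keeps each intermediate sum in a temporary value and writes out only the final averages.

    a = [1.0, 2.0, 3.0, 4.0]
    b = [5.0, 6.0, 7.0, 8.0]

    # Unfused: the intermediate sums are materialized in memory as a full
    # list, then read back in a second pass to compute the averages.
    sums = [x + y for x, y in zip(a, b)]
    averages = [s / 2 for s in sums]

    # Fused: each intermediate sum lives only in a temporary value, loosely
    # analogous to holding it in the register file; only the averages are kept.
    averages_fused = [(x + y) / 2 for x, y in zip(a, b)]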

Labs is exploring brain-like architectures in additional ways. For example, it’s developing a nanoscale electronic component called a Memristor, which is operationally similar to a synapse and retains stored information without drawing power. A system using Memristors instead of today’s DRAM memory technology would use significantly less power, much like the human brain.

Video: More efficient computing models

Eventually, Labs plans to incorporate Memristors as the primary form of memory in The Machine, a new type of computer that places a large shared pool of memory at the center of the architecture.

Memristors can also be used to perform neuromorphic calculations. Labs recently created what it calls the Dot Product Engine, an array of Memristors that performs a complex type of computation called matrix-vector multiplication. Traditional computers burn a lot of time and energy on these calculations, but the Dot Product Engine can perform them almost instantly.
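
For reference, matrix-vector multiplication itself is simple to state; the cost comes from the sheer number of multiply-and-add steps a conventional processor must perform one after another. A small worked example in Python:

    # Matrix-vector multiplication: each output element is the dot product
    # of one matrix row with the input vector. A conventional processor
    # performs these multiply-adds step by step; the Dot Product Engine's
    # Memristor array produces them almost instantly.
    M = [[1.0, 2.0],
         [3.0, 4.0],
         [5.0, 6.0]]
    v = [10.0, 0.5]

    result = [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]
    print(result)  # [11.0, 32.0, 53.0]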

For applications such as deep learning that often use matrix-vector multiplication, a Dot Product Engine on a multicore chip can act as an accelerator, cutting the time and energy required to obtain an answer by an order of magnitude.

Accelerators like this one, specializing in one type of computing task, mimic how different neural structures handle different neural tasks. Multicore chips equipped with similar accelerators could be slotted directly into the architecture of The Machine.

Labs is also working with government and industry partners interested in neuromorphic computing. In October 2015, the U.S. Office of Science and Technology Policy announced a “Grand Challenge” for nanotechnology research aimed at developing transformational computing capabilities. The challenge incorporates many of the ideas outlined in a 2015 paper on “sensible machines” that Stan Williams co-authored.

Developments like these indicate breakthroughs to come. Someday, neuromorphic machines will not just out-calculate humans, but also perform complex tasks involving sensory information and common-sense perception.

“The end of Moore’s Law is potentially an existential threat,” Williams says. “But in any time of change you also get tremendous opportunities. We’re finally going back to the days of John von Neumann and asking, ‘what would a brain do—or how would a brain do it?’”