Definition

neuromorphic computing

Contributor(s): Ben Lutkevich

Neuromorphic computing is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system. The term refers to the design of both hardware and software computing elements.


Neuromorphic engineers draw from several disciplines -- including computer science, biology, mathematics, electronic engineering and physics -- to create artificial neural systems inspired by biological structures.

There are two overarching goals of neuromorphic computing (sometimes called neuromorphic engineering). The first is to create a device that can learn, retain information and even make logical deductions the way a human brain can -- a cognition machine. The second goal is to acquire new information -- and perhaps prove a rational theory -- about how the human brain works.

How does neuromorphic computing work?

Traditional neural network and machine learning computation is well suited to existing algorithms, but it is typically optimized for either fast computation or low power consumption, usually achieving one at the expense of the other.

Neuromorphic systems, on the other hand, achieve both fast computation and low power consumption. They are also:

  • massively parallel, meaning they can handle many tasks at once;
  • event-driven, meaning they respond to events based on variable environmental conditions and only the parts of the computer in use require power (see the sketch after this list);
  • high in adaptability and plasticity, meaning they're very flexible;
  • able to generalize; and
  • robust and fault-tolerant, meaning they can still produce results after individual components fail.
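To make the event-driven point concrete, here is a minimal, purely illustrative Python sketch -- not code for any real neuromorphic chip. Work is performed only for neurons that actually receive a spike event; the network layout, names and threshold are assumptions chosen for the example.

```python
# Illustrative event-driven processing: only neurons that receive a spike
# event do any work, rather than every unit updating on every clock tick.
from collections import deque

def run_event_driven(events, fanout, threshold=1.0):
    """Propagate spike events through a toy network.

    events -- iterable of neuron IDs that spike initially
    fanout -- dict mapping a neuron ID to a list of (target ID, weight) pairs
    """
    potential = {}          # membrane potential, created lazily per active neuron
    queue = deque(events)   # pending spike events
    while queue:
        source = queue.popleft()
        for target, weight in fanout.get(source, []):
            potential[target] = potential.get(target, 0.0) + weight
            if potential[target] >= threshold:   # target fires in turn
                potential[target] = 0.0          # reset after the spike
                queue.append(target)
    return potential

# Example: neuron 0 spikes; only neurons reachable from it do any work.
print(run_event_driven([0], {0: [(1, 0.6), (2, 1.2)], 2: [(3, 0.4)]}))
```

Because idle neurons are never touched, the cost of a step scales with activity rather than with the size of the network, which is the intuition behind the power savings described above.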

High energy efficiency, fault tolerance and powerful problem-solving are all traits the brain itself possesses. For example, the brain uses roughly 20 watts of power on average, about half that of a standard laptop. It is also extremely fault-tolerant -- information is stored redundantly (in multiple places), and even relatively serious failures of certain brain areas do not prevent general function. It can also solve novel problems and adapt to new environments very quickly.

Neuromorphic computing achieves this brainlike function and efficiency by building artificial neural systems that implement "neurons" (the actual nodes that process information) and "synapses" (the connections between those nodes) to transfer electrical signals using analog circuitry. This enables them to modulate the amount of electricity flowing between those nodes to mimic the varying degrees of strength that naturally occurring brain signals have.

The system of neurons and synapses that transmits these electrical pulses is known as a spiking neural network (SNN). An SNN can measure these discrete analog signal changes, something traditional neural networks, which use less nuanced digital signals, cannot do.
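As an illustration of this spiking behavior, below is a minimal leaky integrate-and-fire neuron, a common abstraction in spiking neural network research; the time step, time constant and threshold values are illustrative assumptions, not parameters of any particular neuromorphic chip.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates incoming current, and emits a discrete spike when it
    crosses the threshold, after which it resets.  Returns the spike train."""
    v = v_rest
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus the injected current.
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_thresh:       # threshold crossing -> spike
            spikes[t] = 1
            v = v_rest          # reset after the spike
    return spikes

# A constant drive produces a regular spike train; stronger drive spikes more often.
print(lif_neuron(np.full(50, 0.08)).sum())
```

Information is carried by when and how often the neuron spikes, rather than by a single continuous output value as in a conventional artificial neuron.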

Neuromorphic systems also introduce a new chip architecture that collocates memory and processing on each individual neuron, instead of dedicating separate areas to each.

A traditional computer chip architecture (known as the von Neumann architecture) typically has a separate memory unit (MU), central processing unit (CPU) and data paths. This means that information needs to be shuttled back and forth repeatedly between these different components as the computer completes a given task. This creates a bottleneck for time and energy efficiency -- known as the von Neumann bottleneck.

By collocating memory and processing, a neuromorphic chip can handle information far more efficiently, enabling chips that are simultaneously very powerful and very frugal with energy. Each individual neuron can act as either a processing or a memory element, depending on the task at hand.
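The contrast can be sketched in a few lines of toy Python. This is a conceptual illustration of the two organizations described above, not the programming model of any real chip, and all names and values are made up for the example.

```python
class CollocatedNeuron:
    """Toy neuron whose memory (weights, potential) lives with its compute."""
    def __init__(self, weights):
        self.weights = weights      # local synaptic memory
        self.potential = 0.0        # local state

    def receive(self, source, strength=1.0):
        # Processing happens right where the weight is stored: no shuttling.
        self.potential += self.weights[source] * strength
        return self.potential


def von_neumann_step(memory, source, target, strength=1.0):
    # Weight and potential are fetched from a shared, central memory,
    # processed, then written back; that round trip is the bottleneck.
    weight = memory["weights"][(source, target)]
    memory["potentials"][target] += weight * strength
    return memory["potentials"][target]


# Usage: the same update, organized two different ways.
n = CollocatedNeuron(weights={"a": 0.5})
print(n.receive("a"))                      # -> 0.5

mem = {"weights": {("a", "n"): 0.5}, "potentials": {"n": 0.0}}
print(von_neumann_step(mem, "a", "n"))     # -> 0.5
```

In the first version, the weight and the state never leave the neuron that owns them; in the second, both must make a round trip through a shared memory structure on every update.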

Challenges of neuromorphic computing

Neuromorphic computing is an emerging field of technology that is still mostly in the research stage. Only recently have there been attempts to put neuromorphic computer architectures to practical use. The most recent developments in neuromorphic hardware have the potential to improve the efficiency of current neural networks, which today run on relatively inefficient graphics processing units (GPUs). However, a functional human brain chip is still a long way off.

One main challenge facing the field, put forth by neuromorphic computing expert Katie Schuman in an interview with the online magazine Ubiquity, is that it is dominated by hardware developers, neuroscientists and machine learning researchers and is short on software developers and engineers. Bringing neuromorphic systems into production requires a change in thinking: developers, researchers and engineers must be willing to move beyond the traditional von Neumann framework when reasoning about computer architecture.

Eventually, in order to popularize neuromorphic computing, software developers will need to create a set of APIs, programming models and languages to make neuromorphic computers accessible to nonexperts.

Researchers will need to develop new ways to measure and assess the performance of these new architectures so they can be improved upon. They will also need to merge research with other emergent fields such as probabilistic computing, which aims to help AI manage uncertainty and noise.

Despite these challenges, there is significant investment in the field. Although there are skeptics, many experts believe neuromorphic computing has the potential to revolutionize the algorithmic power, efficiency and overall capabilities of AI, as well as reveal new insights into cognition.

Use cases

Experts predict that when neuromorphic computers do come into their own, they'll work well for running AI algorithms at the edge instead of in the cloud because of their smaller size and low power consumption. Much like a human, they'd be capable of adapting to their environment, remembering what's necessary and accessing an external source (the cloud in this case) for more information when necessary.

Other potential applications of this technology in both consumer and enterprise tech include:

  • driverless cars
  • smart home devices
  • natural language understanding
  • data analytics
  • process optimization
  • real-time image processing, in police cameras, for example

Although these practical applications remain a prediction, there are real-world examples of neuromorphic systems that exist today, albeit primarily for research purposes. These include:

  • The Tianjic chip. Used to power a self-driving bike capable of following a person, navigating obstacles and responding to voice commands. It had 40,000 neurons and 10 million synapses, and it performed 160 times better and 120,000 times more efficiently than a comparable GPU.
  • Intel's Loihi chips. Each has 131,000 neurons and 130 million synapses and is optimized for spiking neural networks.
  • Intel's Pohoiki Beach computers. Feature 8.3 million neurons and deliver 1,000 times better performance and 10,000 times greater efficiency than comparable GPUs.
  • IBM's TrueNorth chip. Has over 1 million neurons and over 268 million synapses. It is 10,000 times more energy-efficient than conventional microprocessors and only uses power when necessary.
  • SpiNNaker, a massively parallel, manycore supercomputer designed at the University of Manchester. It is currently being used for the Human Brain Project.
  • BrainScaleS from Heidelberg University. Uses neuromorphic hybrid systems that combine biological experimentation with computational analysis to study brain information processing.

The examples from IBM and Intel approach neuromorphic computing from a computational perspective, focusing on improved efficiency and processing. The examples from the universities take a neuroscience-first approach, using neuromorphic computers as a means of learning about the human brain. Both approaches are important to the field of neuromorphic computing, as both types of knowledge are required to advance AI.

Neuromorphic computing and artificial general intelligence (AGI)

The term artificial general intelligence (AGI) refers to AI that exhibits intelligence equal to that of humans. One could say it's the holy grail of all AI. Machines have not yet reached, and may never reach, that level of intelligence. However, neuromorphic computing offers new avenues for making progress toward it.

For example, the Human Brain Project -- which features the neuromorphic supercomputer SpiNNaker -- aims to produce a functioning simulation of the human brain and is one of many active research projects interested in AGI.

The criteria for determining whether a machine has achieved AGI are debated, but a few commonly included in the discussion are:

  • The machine can reason and make judgments under uncertainty.
  • The machine can plan.
  • The machine can learn.
  • The machine can communicate using natural language.
  • The machine can represent knowledge, including common-sense knowledge.
  • The machine can integrate these skills in the pursuit of a common goal.

Sometimes the capacity for imagination, subjective experience and self-awareness is also included. Other proposed methods of confirming AGI are the famous Turing Test and the Robot College Student Test, in which a machine enrolls in classes and obtains a degree the way a human would.

If a machine ever did reach human intelligence, there are also debates about how it should be handled ethically and legally. Some argue that it should be treated as a nonhuman animal in the eyes of the law. These arguments have occurred for decades in part because consciousness in general is still not completely understood.

History of neuromorphic computing

The predecessor to the artificial neurons used in neural networks today can be traced back to 1958 and the invention of the perceptron by Frank Rosenblatt. The perceptron was a crude attempt at imitating elements of biological neural networks using the limited knowledge of the brain's inner workings available at the time. It was intended to be a custom-built machine for image-recognition tasks for the U.S. Navy. The technology received significant hype before it became clear that it could not deliver the promised capabilities.
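For readers unfamiliar with it, the perceptron reduces to a few lines of modern Python: a weighted sum, a hard threshold and a simple error-driven weight update. The toy training data and learning rate below are illustrative choices, not historical details.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic single-layer perceptron learning rule: a weighted sum followed
    by a hard threshold, with weights nudged whenever a prediction is wrong."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred
            w += lr * error * xi     # strengthen or weaken each "synapse"
            b += lr * error
    return w, b

# Learns a linearly separable toy problem (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])   # -> [0, 1, 1, 1]
```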

Neuromorphic computing was first proposed by Caltech professor Carver Mead in the 1980s. Mead described the first analog silicon retina, which foreshadowed a new type of physical computation inspired by the neural paradigm.

Mead is also quoted in a publication about neural computation in analog VLSI as saying he believed there was nothing done by the human nervous system that couldn't be done by computers if there was a complete understanding of how the nervous system worked.

However, the recent investment and hype around neuromorphic research can be attributed in part to the widespread and increasing use of AI, machine learning and neural networks in consumer and enterprise technology. It can also be attributed largely to the perceived end of Moore's Law among many IT experts. Moore's Law states that the number of components that can be placed on a chip doubles roughly every two years, with the cost staying about the same.

Because neuromorphic computing promises to circumvent traditional architectures and achieve dramatic new levels of efficiency, it has gained much attention from major chip manufacturers such as IBM and Intel -- which launched Loihi in 2017 -- as the end of Moore's Law approaches.

Mead, who distilled Gordon Moore's insights and coined the term Moore's Law, was also quoted in 2013 as saying, "Going to multicore chips helped, but now we're up to eight cores and it doesn't look like we can go much further. People have to crash into the wall before they pay attention." His comment illustrates how popular discourse and hype around AI go through ebbs and flows: the lulls in interest are often referred to as AI winters, and periods of heightened interest often come in response to an immediate problem that needs solving, in this case the end of Moore's Law.

This was last updated in February 2020
