What is neuromorphic computing and why should you care about it?

Saurav Pawar
7 min read · Aug 28, 2022

In 1967, the American engineer and scientist Carver Mead met the biophysicist Max Delbrück. As they spent time together, Delbrück sparked Mead's interest in transducer physiology, the study of the transformations that turn a physical input into the start of a perceptual process. Fascinated, Mead began observing synaptic transmission in the eye's retina, which eventually led him to think of transistors as analog devices rather than digital switches. From this line of thinking, the concept of neuromorphic computing was born in the 1980s.

What is neuromorphic computing?

Neuromorphic computing is the use of very large-scale integration systems containing electronic analog circuits to mimic the neuro-biological architectures present in the nervous system. It combines fields like biology (neuroscience), physics, mathematics, electrical and electronics engineering, and computer science, and focuses on developing brain-inspired computer architectures to build computers as efficient as the biological brain. In simple words, neuromorphic computing aims at building brain-inspired analog machines.

How does a neuromorphic computer work?

As neuromorphic computing focuses on creating physical neural networks with a close resemblance to biological neural networks, the basic building blocks are the physical artificial neurons that mimic their biological counterparts.

Unlike in the conventional Von-Neumann architecture, these artificial neurons communicate using analog electrical charges and encode data in the properties of the signal, such as its amplitude and timing. Each neuron has an associated threshold for the electrical charge it accumulates. When the threshold is reached, the neuron fires an electrical pulse known as an action potential (or spike), which travels through its output synapses and delivers the encoded message to thousands of neighboring neurons via their dendrites, which act as the neurons' input terminals.

Von-Neumann architecture (Image by author)

Neurons also have an associated leakage rate, which determines how quickly their accumulated electrical charge dissipates over time.

Over time, activity on the network tweaks parameters like the threshold values of the neurons involved in a particular task. This tweaking allows the system to learn and to handle the task more efficiently as it gains exposure over time. Neuromorphic architecture also differs in one very important respect: it replaces the separate processing and memory units that are a distinctive feature of the Von-Neumann architecture with collocated devices that handle both processing and memory.
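The accumulate, leak, and fire behavior described above can be sketched in a few lines of Python. This is a toy leaky integrate-and-fire neuron; the threshold and leak values are illustrative assumptions, not parameters of any real chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: it accumulates input charge,
# leaks some of it each step, and emits a spike when the threshold is crossed.
# Parameter values are illustrative, not taken from any specific hardware.

def simulate_lif(inputs, threshold=1.0, leak=0.1):
    """Return the spike train (0 or 1 per step) for a stream of input charges."""
    potential = 0.0
    spikes = []
    for charge in inputs:
        potential = potential * (1.0 - leak) + charge  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0   # reset the membrane potential after firing
        else:
            spikes.append(0)
    return spikes

# A steady weak input: the neuron fires only when accumulation outpaces the leak.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note how the data is carried by the spike timing, not by a stored numeric value: a stronger or more frequent input simply makes the neuron fire sooner and more often.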

Neuromorphic computers are event-based processing systems: a component is activated only when a signal arrives, so even though these systems contain millions of neurons, only a small percentage of them are active for a given task while the rest of the system stays idle. A single neuromorphic system can learn to handle multiple complex tasks, depending on its size and algorithm; the more neurons it has, the more problems it can learn to handle. In this manner, neuromorphic computers are highly adaptable at handling complex, dynamic problems with a fraction of the computing power. Even though the concept has only recently started to catch attention, there are already some notable advances in this field.
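The event-based idea can be sketched as a small spike-propagation loop: work is done only for neurons that actually receive a spike, while untouched parts of the network consume nothing. The network topology and weights below are hypothetical:

```python
# Event-driven sketch: a neuron is updated only when a spike event reaches it,
# so most of a large network stays idle. Topology and weights are hypothetical.
from collections import defaultdict, deque

def run_events(synapses, initial_spikes, threshold=1.0, max_steps=100):
    """Propagate spikes through a weighted graph; return the ids that fired."""
    potential = defaultdict(float)
    queue = deque(initial_spikes)        # pending spike events
    fired = []
    steps = 0
    while queue and steps < max_steps:
        neuron = queue.popleft()
        fired.append(neuron)
        for target, weight in synapses.get(neuron, []):
            potential[target] += weight  # charge is delivered only to targets
            if potential[target] >= threshold:
                potential[target] = 0.0  # reset and schedule the new spike
                queue.append(target)
        steps += 1
    return fired

# Tiny 4-neuron example: neuron 0 spikes, neuron 2 is pushed over threshold,
# and neurons 1 and 3 accumulate charge but never fire.
net = {0: [(1, 0.6), (2, 1.0)], 2: [(3, 0.4)]}
print(run_events(net, [0]))  # → [0, 2]
```

This is the source of the power savings: in a conventional clocked design every unit is polled every cycle, whereas here the cost scales with the number of spike events, not the size of the network.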

In 2014, IBM introduced a brain-inspired chip known as TrueNorth. It has 1 million neurons and 256 million synapses, and is made up of 5.4 billion transistors. It works in real time and burns only 73 milliwatts of power, which means that if we connected the chip directly to an iPhone battery, we could run it at full power for a whole week!

A board incorporating 16 IBM TrueNorth chips (Source)

If this chip were scaled up to the size of the brain, it could become 10,000 times more powerful than a human brain.

In 2017, IBM demonstrated in-memory computing using 1 million phase-changing memory devices, thus breaking the Von-Neumann bottleneck.

In the same year, Intel introduced its neuromorphic chip, Loihi, a 128-core chip fabricated on a 14-nanometer process technology. The chip is built on a specialized architecture optimized for spiking neural network algorithms. Spiking neural networks, also known as third-generation neural networks, are a class of neural networks designed to run efficiently on emerging neuromorphic hardware. If you wish to learn more about spiking neural networks, please refer to this research paper.

Intel Nahuku board incorporating 16 Loihi neuromorphic research chips (Source)
Intel Nahuku board incorporating 16 Loihi neuromorphic research chips (close-up view) (Source)

The Loihi chip includes 130,000 neurons and 130 million synapses, and each of its cores has an embedded learning engine. The chip has already been successfully applied to a hazardous-chemical detection system that recognizes the odors of the chemicals. Yes, you read that right: Intel is teaching chips how to smell! In September 2021, Intel announced Loihi 2, a successor to Loihi.

There are also some supercomputers built around this concept:

SpiNNaker stands for Spiking Neural Network Architecture. It is a neuromorphic system located in Manchester, United Kingdom, built from chips that each contain 18 ARM cores and 128 MB of shared local RAM, interconnected by a packet-based network. The full system contains more than 1 million (1,036,800) cores and over 7 TB of RAM. It runs real-time simulations with considerably low power consumption for its size, and aims to simulate 1 billion neurons (roughly 1% of the human brain) in real time. Researchers in the UK have also claimed that SpiNNaker can be used to simulate the behavior of the human cortex, and SpiNNaker has achieved a major milestone for neuromorphic computing by matching the results of a traditional supercomputer!

One of the primary uses of SpiNNaker is to help neuroscientists understand how the human brain works by running large-scale simulations of its parts, such as the cortex (the outer layer of the brain, modeled with 80,000 neurons) and the basal ganglia, the area of the brain affected by Parkinson's disease.

SpiNNaker — 1 million core machine (Source)
SpiNNaker — 1 million core machine (front view) (Source)
A board incorporating 48 SpiNNaker chips (Source)

BrainScaleS (BSS) is a large-scale neuromorphic machine located in Heidelberg, Germany, which implements analog electronic models of neurons and synapses. BrainScaleS is reported to run 1,000 times faster than biological real time, which would make it the fastest neuromorphic computer in the world.

BrainScaleS (Source)

Neuromorphic computing is a relatively old concept that was revived by the successful development of memristive devices, known as memristors.

Memristors are unique devices that can both process data and store it in their resistance state. A memristor varies its conductance based on its past programming, and this learned state can be recalled even after a power loss!
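A toy software model makes the idea concrete: conductance acts as the stored "weight", positive and negative voltage pulses nudge it up or down, and the state simply persists between operations. All rates and bounds below are illustrative assumptions, not measured device parameters:

```python
# Toy memristor model: conductance drifts with the sign of the applied voltage
# and is clipped to a valid range. The state needs no power to be retained,
# which is what makes memristors attractive as artificial synapses.
# Rates and bounds are illustrative assumptions, not real device parameters.

class Memristor:
    def __init__(self, g=0.5, g_min=0.1, g_max=1.0, rate=0.05):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def apply_pulse(self, voltage):
        """A positive pulse potentiates (raises g); a negative one depresses."""
        self.g += self.rate * voltage
        self.g = min(self.g_max, max(self.g_min, self.g))

    def read_current(self, voltage):
        """Ohm's law: the stored conductance sets the current, i = g * v."""
        return self.g * voltage

m = Memristor()
for _ in range(4):
    m.apply_pulse(+1.0)   # repeated positive pulses "train" the device
print(round(m.g, 2))      # → 0.7, up from the initial 0.5
```

Because the read is just Ohm's law, a crossbar of such devices can compute a matrix-vector product in a single analog step, with processing and storage collocated in the same element.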

Memristor (Source)

Memristors are the devices that most closely mimic biological neurons, even though they are far less capable than their biological counterparts.

So far, we have gained enough intuition to understand what neuromorphic computing is and how it functions. Now let us look at some of its applications!

Medicine:

Neuromorphic devices can be used to improve drug delivery systems: their highly responsive nature allows them to release a drug upon sensing a change in body conditions. They are also a promising alternative for prosthetics, as they are readily compatible with the human body when integrated with biotic materials.

Artificial intelligence:

The current surge in AI development demands heavy computing power to train large neural networks, a demand met today by stacking thousands of CPUs and GPUs. This stacking is one sign that Moore's law is approaching its end, as manufacturers run into physical limits on power dissipation and consumption when scaling down chip components any further. Neuromorphic computing allows expansion beyond these limits, with higher potential performance and remarkable power efficiency. Its architecture by design inherits qualities conventional computers lack, and it also resolves the separation of processing and memory units (the Von-Neumann bottleneck).

Edge computing:

Autonomous, or self-driving, cars are based on neural networks. For a car to drive without human involvement, it needs to be connected to a data center that constantly analyzes the data it receives from the car and then returns the results over 4G/5G networks. This round trip produces latency, which can cost lives. Neuromorphic computing can solve this problem because all the processing can be done locally, inside the "brain" (neuromorphic chip) of the car. This not only reduces latency but also addresses a crucial cybersecurity problem: since most of the data is processed locally, confidential data is better protected from malicious attacks.

Similarly, various edge devices like smartphones, tablets, and various IoT sensors continuously transfer data back and forth to centralized cloud-based servers where the necessary computation is carried out, and then the result is fed back to the device. This activity subjects confidential data to various security and privacy risks.

That’s it for today! I hope you enjoyed the blog!🙂

If you have any questions regarding anything, please mention them in the comment section and I will be happy to answer them!

Thank you for your time!


Saurav Pawar

Machine Learning Research at Technology Innovation Institute