Concerned by recent AI news? Then get ready for computing that mimics the human brain.

John Edwards, Technology Journalist & Author

May 1, 2023

5 Min Read

As artificial intelligence technology continues evolving, attention is turning to neuromorphic computing and its potential to take AI to new levels of power and performance.

Neuromorphic computing is a type of computer engineering that mimics the human brain and nervous system. “It's a hardware and software computing element that combines several specializations, such as biology, mathematics, electronics, and physics,” explains Abhishek Khandelwal, vice president, life sciences, at engineering consulting firm Capgemini Engineering.

While current AI technology now rivals or outperforms human capabilities in some areas, such as Level 4 self-driving vehicles and generative models, it still offers only a crude approximation of human and biological capabilities and is useful in only a handful of fields. “The input and output are still tied to digital means,” says Shriram Natarajan, a director with technology research and advisory firm ISG. Neuromorphic approaches, on the other hand, attempt to replicate the actual underlying biological systems. This could lead to a better understanding of the physical processes involved and could potentially feel more natural for users to adopt.

Alternative Computational Architecture

Neuromorphic computing provides an alternative computational architecture that's fundamentally different from present computing platforms. “Current computing architectures are based on von Neumann principles, such as separate memory and processing and binary representation,” Khandelwal says. “Neuromorphic computing is modeled on brain concepts, like neurons and synapses.”
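
To make the contrast concrete, the sketch below shows a leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic designs typically build on. It is a minimal illustration in Python, not code from any vendor's platform; the function name and parameter values are assumptions chosen for readability.

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- an illustrative sketch only.
# Unlike a von Neumann program that shuttles data between separate memory and a
# CPU, a neuromorphic design keeps state (the membrane potential, the synaptic
# weights) at the processing element and communicates through discrete spikes.

def simulate_lif(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Return the time steps at which the neuron spikes (all units are arbitrary)."""
    v = 0.0                          # membrane potential, stored "at the neuron"
    spike_times = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in * dt     # integrate the input and leak a little
        if v >= threshold:           # threshold crossing emits a spike
            spike_times.append(t)
            v = 0.0                  # reset after firing
    return spike_times

# A small constant input accumulates until the neuron fires, then the cycle repeats.
print(simulate_lif([0.3] * 10))      # -> [3, 7]
```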

Current AI and machine learning (ML) technologies utilize a “network of neurons” at increasing depths to create a human-like understanding of spaces, visuals, or languages, Natarajan says. The technology aims to mirror human behavior and intuition. Neuromorphic technology has similar aims, but with more fidelity to the human brain's structure. “AI systems have been very successful by just adopting a few traits of the brain,” he observes. “The expectation of neuromorphic tech is that a deeper copy would be more effective, have wider applicability, and probably require lower power.”

Greater Intelligence, Lower Energy Consumption

Neuromorphic computing uses a distributed network of neurons to process information in parallel. “This parallel processing approach is critical, allowing the system to process information more quickly and efficiently than traditional computing and making it more resilient to errors and noise in the data,” Khandelwal says. “Unlike traditional computing, which must be trained on a large amount of data, neuromorphic computing learns and adapts in real time, just like the human brain, and consumes relatively less energy than traditional AI algorithms.”
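
A rough way to see where the efficiency claim comes from: in an event-driven, spiking system, work is done only when a spike arrives, so sparse activity translates directly into fewer operations. The comparison below is a back-of-the-envelope sketch; the network sizes and the 2% activity rate are assumptions, not figures from the article.

```python
import random

# Back-of-the-envelope comparison: dense updates touch every connection on every
# step, while event-driven (spike-triggered) updates only touch connections whose
# source neuron actually fired. Sizes and the 2% activity rate are assumptions.

n_pre, n_post = 1_000, 1_000
activity = [random.random() < 0.02 for _ in range(n_pre)]    # ~2% of neurons spike

dense_ops = n_pre * n_post                                   # work independent of activity
event_ops = sum(activity) * n_post                           # work scales with spike count

print(f"dense: {dense_ops:,} ops, event-driven: {event_ops:,} ops")
```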

Neuromorphic supporters believe the technology will lead to more intelligent systems. “Such systems could also learn automatically and self-regulate what to learn and where to learn from,” Natarajan says. Meanwhile, combining neuromorphic technology with neuro-prosthetics (such as Neuralink) could lead to breakthroughs in prosthetic limb control and various other types of human assistive and augmentative technologies.

Neuromorphic computing systems can learn and adapt in real time. “Compared to traditional AI algorithms, which require significant amounts of training before they can become effective, neuromorphic computing systems can learn and adapt on the fly,” Khandelwal says. “This means they can quickly respond to changing environments and situations, making them ideal for use in applications such as robotics and self-driving cars.”
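
As a loose illustration of learning “on the fly,” a synapse can be nudged on every spike event rather than retrained offline in batches. The toy update below is a heavily simplified, Hebbian-style sketch, not a description of any production neuromorphic learning rule; the function and its parameters are assumptions.

```python
# Toy online (Hebbian-style) weight update -- a simplified sketch, not a real
# neuromorphic learning rule. Each spike event nudges the synapse immediately,
# so there is no separate offline training phase.

def online_update(weight, pre_spiked, post_spiked, lr=0.01, decay=0.001):
    """Adjust one synaptic weight right after a spike event."""
    if pre_spiked and post_spiked:
        weight += lr                    # correlated activity strengthens the synapse
    else:
        weight -= decay                 # otherwise it slowly weakens
    return max(0.0, min(1.0, weight))   # keep the weight bounded

w = 0.5
for pre, post in [(True, True), (True, False), (False, True), (True, True)]:
    w = online_update(w, pre, post)
print(round(w, 3))                      # -> 0.518
```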

Complex and Challenging to Develop

Developing neuromorphic computing systems is a complex and challenging task, requiring significant expertise and innovation as well as a deep understanding of neuroscience, computer science, and engineering. “The primary technical challenge is designing and building systems that can accurately simulate the behavior of biological neurons and synapses,” Khandelwal says. “This requires a deep understanding of the underlying principles of neural networks and the ability to translate this knowledge into practical technological solutions.”

Neuromorphic computing is a relatively new field, and much work remains to be done before it can reach anything close to its full promise. “Because neuromorphic computing aims to replicate the structure and function of the human brain in hardware and software, it has the potential to revolutionize computing by enabling machines to process information more efficiently and accurately, and to learn and adapt like humans,” Khandelwal says. “The objective is to overcome the limitations of artificial intelligence and robotics, which still experience challenges with autonomy, creativity, and sociality, and be able to integrate aspects such as haptics and tactile perception into the overall analysis and decision-making processes.”

Neuromorphic Leaders and Evolution

Neuromorphic computing market leaders include Qualcomm, Intel, and IBM, Natarajan says. Yet before the technology can enter the commercial mainstream, visual, audio, and spatial sensors will have to be developed or improved. “Even with these advances, it's going to take a lot more fundamental research, compute, and simulation to get viable neuromorphic solutions off the ground,” he says.

Despite the challenges, Khandelwal believes that neuromorphic computing research is moving forward on several fronts. “Advancements in neural networks, like world models, or large language models (LLMs), such as GPT-4, are extending what will be possible for real-world use cases,” he says. “Computational psychology and other fields that mix neuroscience, computer science, and cognitive science are pushing the boundaries of what could be possible with neuromorphic computing.”


About the Author

John Edwards

Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
