UII UPDATE 423 | OCTOBER 2025
EMERGING TECHNOLOGY SERIES
This report is one of a series on emerging and potentially disruptive technologies that may be deployed in digital infrastructure. Here, Uptime Intelligence considers the use of neuromorphic computing to radically reduce the power demands of AI workloads.
AI workloads are responsible for a large and rapidly growing share of data center power consumption, increasing the strain on grid operators that cannot build new capacity quickly enough.
At the same time, the high power and cooling requirements of individual rack-scale GPU-based systems are forcing operators to make changes to established data center designs — increasing cost, complexity and risk.
Research into neuromorphic computing could lead to the creation of smaller, faster and orders of magnitude more power-efficient accelerators for AI, while also offering a path forward as traditional silicon scaling techniques reach their limits.
The future of data centers for AI is expected to be dominated by gigawatt-scale facilities that deliver hundreds of kilowatts to each rack. Neuromorphic computing proposes an alternative — where AI workloads are run in small and efficient facilities filled with systems that consume less power than traditional enterprise IT.
Neuromorphic computing is a research discipline, established in the late 1980s, that designs chips, systems and software based on the structure and operating principles of the human brain. These systems use silicon and software representations of artificial neurons and synapses to build spiking neural networks (SNNs), which abstract the characteristics of electrical signals in the brain into mathematical functions.
The human brain is an extremely capable, very efficient biological computer estimated to run on just 20 W of power. The goal of neuromorphic computing is not to fully replicate its functions, but to use what we know to create a practical computing system.
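To illustrate the kind of mathematical abstraction involved, the sketch below implements a leaky integrate-and-fire neuron, one of the simplest and most widely used neuron models in SNN research. The code and its parameter values are illustrative only and are not drawn from any specific neuromorphic platform.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: one common mathematical
# abstraction of a biological neuron used in spiking neural network research.
# Parameter values are illustrative, not taken from any real system.

def lif_neuron(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate incoming current each time step, let charge leak away,
    and emit a spike (1) whenever the membrane potential crosses the
    threshold, resetting the potential afterwards."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = leak * potential + current   # leaky integration
        if potential >= threshold:               # threshold crossing
            spikes.append(1)
            potential = reset                    # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Example: a steady, weak input produces sparse, event-like output spikes.
print(lif_neuron([0.3] * 10))
```

The key property is that the output is event-based: the neuron stays silent most of the time and emits a spike only when its accumulated input crosses the threshold.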
There are two broad approaches to neuromorphic systems:

- Analog or mixed-signal systems, which implement physical artificial neurons and synapses directly in silicon.
- Digital systems, which simulate large numbers of neurons and synapses in software running on many conventional processor cores.
In both types of systems, the computing architecture differs from the Von Neumann design principles that mainstream computers follow, in which the compute function is separate from an operating memory that stores all data, including instructions (software code). In a neuromorphic architecture, there is no such distinction: arithmetic operations and data share the same circuits, and data is operated on in situ, much as the organic brain works.
The architectural benefit of neuromorphic computing is that it removes both the movement of data to and from the processor’s arithmetic units and the complex control logic that directs these flows. A Von Neumann machine, even one fully integrated on a single piece of silicon (die), moves data from memory arrays through control logic to arithmetic units for processing, then stores the results. Inevitably, these movements add latency and consume power.
This so-called Von Neumann bottleneck creates inefficiencies and corresponding performance limits: computing resources often stall during execution while waiting for data. Much of the complexity of modern microprocessors and GPUs stems from ongoing design efforts to optimize data flows and work around the bottleneck. Likewise, much of the emphasis in building super-dense supercomputing clusters to train the largest generative AI models is on providing the low latency and extreme bandwidth needed to move data around.
Neuromorphic computing operates on a fundamentally different principle, in which data is stored and manipulated in the same place. This architectural style lends itself to processing massively parallel data streams for pattern recognition, control and event response. Examples of potential applications include image recognition and analysis, industrial machine vision and robotic controls, autonomous vehicles, natural language processing and anomaly detection.
For suitable workloads, the promised efficiency gains are two to three orders of magnitude: comparable work performed at a small fraction of the energy consumed by a Von Neumann machine.
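A rough sketch of where such gains could come from: a conventional dense layer touches every weight on every step, whereas an event-driven spiking layer only processes the connections of neurons that actually fired. The layer sizes and spike counts below are arbitrary, chosen purely to illustrate the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(1000, 1000))        # synaptic weight matrix

def dense_step(activations):
    # Conventional approach: a full matrix-vector product on every time
    # step, roughly 1,000,000 multiply-accumulate operations here.
    return weights @ activations

def event_driven_step(spike_indices):
    # Event-driven approach: accumulate only the weight columns of neurons
    # that spiked; with three spikes this touches about 3,000 weights.
    return weights[:, spike_indices].sum(axis=1)

dense_out = dense_step(rng.normal(size=1000))
sparse_out = event_driven_step([3, 42, 917])
```

On neuromorphic hardware the saving is compounded because, as described above, the weights never need to move out of the circuits that operate on them.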
Various chip technologies have been proposed to implement neurons in silicon, including transistors, memristors, spintronic memories and threshold switches.
Digital neuromorphic systems bypass many of the complexities of implementing physical neurons but are much larger and do not enjoy the full efficiency benefits of mixed-signal neuromorphic hardware. Notable examples of this approach are the SpiNNaker system at the University of Manchester, powered by one million CPU cores, and its successor, SpiNNaker 2 at the Technical University of Dresden, with five million cores.
Notable neuromorphic computing projects, listed in chronological order, are:
There are separate efforts to adopt the technology for low-power, on-device applications, including:
In 2025, much of the focus of neuromorphic research has shifted from discovering new compute paradigms to developing hardware platforms that can radically improve performance and decrease power consumption of existing deep learning-based AI applications, especially inference workloads.
Hybrid servers equipped with both traditional and neuromorphic processors could make today’s AI models faster, cheaper and easier to deploy, before eventually replacing them with “native” neuromorphic models. The likely initial use cases include speech and image recognition, natural language processing and brain-machine interfaces.
Separately, neuromorphic chips are being considered for edge processing and robotics applications, bringing advanced AI functionality directly to devices and reducing the need for cloud services.
Today, neuromorphic chips are used only for prototyping and demonstration, with commercial applications years, or possibly decades, away. Yet, if some of the early claims are verified, neuromorphic computing could contest a sizeable chunk (more than 20%) of the compute segment of the semiconductor market, estimated at $800 billion for 2025.
Much of the funding for neuromorphic research comes from the public sector and is focused on universities. Some of the programs started in academia have resulted in ambitious commercialization efforts — the most notable being the Technical University of Dresden’s SpiNNcloud and Zhejiang University’s Zhejiang Lab.
These academic institutions are competing against the research arms of industry giants Intel and IBM, although neither company appears enthusiastic about the commercial prospects of neuromorphic computing. IBM tests show that NorthPole, a prototype chip made using a vintage 12-nanometer fabrication process, delivered much lower latency and higher efficiency than Nvidia’s H100 in small language model inference. IBM has demonstrated a 2U server containing four of these chips, yet there are no plans to commercialize the system.
The dangers of being too early to market were illustrated in 2025 by the failure of San Francisco-based Rain AI, a neuromorphic hardware startup backed by Sam Altman. Altman invested $25 million in a seed round in 2022, and OpenAI committed to buying $51 million worth of hardware. However, the hardware never materialized, as the startup failed to secure the $150 million it required to continue development. It is now exploring a sale.
Renewed interest in neuromorphic computing comes at a time when the power consumption of existing approaches to AI is becoming increasingly problematic. A more efficient alternative would be welcomed by broad swaths of the industry, from small-scale data center developers to grid operators, policymakers and the public.
If demonstrated to be superior, the technology poses a risk to investment commitments in conventional AI infrastructure. However, the commercial success of neuromorphic computing appears remote and uncertain.
To take full advantage of a new architecture, researchers will need to reinvent or redevelop existing AI capabilities, for example, teaching SNN-based models to recognize handwriting or speech without relying on today’s artificial neural networks. New programming languages and software will be needed to operate neuromorphic hardware, and a sufficient pool of developers will need to be trained in their use.
The possibility of building systems with more neurons and synapses also depends on advances in nanotechnologies and materials science. Traditional computing architectures keep evolving rapidly to increase compute and data parallelism for AI training and inference workloads while suppressing power use, making “neuromorphic advantage” a fast-moving target.
Presently, the neuromorphic hardware and software market is fragmented, with no clear leader capable of producing a framework for the rest of the growing industry segment to follow.
At the same time, examples show that neuromorphic systems are scaling very quickly: from millions of neurons a decade ago to billions of neurons today. Interconnecting these systems is relatively straightforward, and researchers will soon be able to build machines with as many neurons as the human cerebral cortex. It will be the software that determines whether neuromorphic computing becomes a competitive alternative to today’s machine learning platforms.
Neuromorphic systems face a commercial challenge much like that of quantum computing: systems are available, but only to small user bases in universities and industrial research groups. Similarly, there is currently no compelling demonstration of a high-volume application in which neuromorphic hardware outperforms the alternatives.
To take the next step in development, researchers must demonstrate a “neuromorphic advantage” in performance and power over conventional GPUs. Such applications appear to be close, and indeed, several vendors already claim performance and efficiency leadership in a narrow and carefully constructed set of tasks.
Funding neuromorphic research carries little risk and could have transformative effects on many sectors. In the most optimistic commercialization scenario, this technology will change how most organizations approach IT — in a transition that will take many years.