Category Archives: neuromorphic engineering

Ultrafast neuromorphic (brainlike) computing at room temperature made possible by utilizing polariton nonlinearities

A June 6, 2025 news item on Nanowerk describes research into ultrafast neuromorphic (brainlike) computing, Note: A link has been removed,

Neuromorphic computing, inspired by the human brain, is considered the next-generation paradigm for artificial intelligence (AI), offering dramatically increased speed and lower energy consumption. While software-based artificial neural networks (ANNs) have made remarkable strides, unlocking their full potential calls for physical platforms that combine ultrafast operation, high computational density, energy efficiency, and scalability.

Among various physical systems, microcavity exciton polaritons have attracted attention for neuromorphic computing due to their ultrafast dynamics, strong nonlinearities, and light-based architecture, which naturally align with the requirements of brain-inspired computation. However, their practical use has been hampered by the need for cryogenic operation and intricate fabrication processes.

In a new paper published in eLight (“Ultrafast neuromorphic computing driven by polariton nonlinearities”), a team of scientists led by Professor Qihua Xiong from Tsinghua University and Beijing Academy of Quantum Information Sciences report a demonstration of neuromorphic computing utilizing perovskite microcavity exciton polaritons operating at room temperature. Their novel system achieves high-speed digit recognition with 92% accuracy using only single-step training and opens new opportunities for scalable, light-driven neural hardware.

A June 4, 2025 Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS (Chinese Academy of Sciences) press release on EurekAlert, which originated the news item, provides more technical details,

The core of their system is a planar FAPbBr3 perovskite microcavity which supports exciton-polariton condensation under non-resonant optical pumping. Input images from the MNIST dataset are optically encoded by a spatial light modulator (SLM) and projected onto the microcavity as spatially structured excitation beams. The resulting polariton emission patterns serve as the output of the ANN, which is then linearly processed using ridge regression. Remarkably, this scheme requires no predefined network structure—only the physical response of the polariton system—and achieves competitive accuracy using a lightweight training set of 900 images.
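For readers who want to see the shape of this training scheme, here is a minimal Python sketch. The polariton microcavity is replaced by a fixed random nonlinear map (purely an assumption for illustration, not the authors' physics), and the synthetic data stands in for MNIST; the only trained element is the ridge-regression readout, solved in a single step as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the physical layer. The paper uses the polariton emission
# pattern of a perovskite microcavity; here a fixed random projection
# plus a nonlinearity plays that role (an assumption for illustration).
n_pixels, n_features = 64, 256
W_phys = rng.normal(size=(n_pixels, n_features)) / np.sqrt(n_pixels)

def physical_response(images):
    # Fixed and untrained: only the linear readout on top is ever fitted.
    return np.tanh(images @ W_phys)

# Synthetic 3-class stand-in for the MNIST images (class c brightens pixel c).
n_train, n_classes = 900, 3
X = rng.normal(size=(n_train, n_pixels))
labels = rng.integers(0, n_classes, size=n_train)
X[np.arange(n_train), labels] += 3.0

H = physical_response(X)              # "measured emission patterns"
Y = np.eye(n_classes)[labels]         # one-hot targets

# Ridge regression: a single closed-form solve, i.e. single-step training.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_features), H.T @ Y)

accuracy = float((np.argmax(H @ W_out, axis=1) == labels).mean())  # training accuracy
```

Because the physical layer is never trained, all learning reduces to one linear solve, which is what makes the lightweight 900-image training set plausible.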

“Unlike conventional approaches that rely on prefabricated structures or predefined network nodes, our method employs a fully connected spatial mapping, utilizing the entire perovskite sample area without additional structural constraints,” said corresponding author Qihua Xiong. This not only improves the system’s scalability but also simplifies experimental realization.

What makes this system stand out is the intrinsic nonlinear and dynamical response of the polaritons. The researchers show that below the condensation threshold, the system behaves nearly linearly, while near and above threshold, nonlinearities emerge sharply, enhancing pattern discrimination. Moreover, by applying ultrafast Kerr-gated time-resolved photoluminescence, the team probes the temporal evolution of polariton responses. They find that polariton dynamics unfold on the picosecond scale and exhibit time-dependent nonlinear mappings, which significantly broaden the system’s capacity for processing complex and temporally varying inputs.
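The role of the condensation threshold can be illustrated with a toy input-output curve (an assumed functional form and assumed numbers, not the measured device response): below threshold the mapping is linear, so it adds nothing a linear readout could not already do; above threshold it becomes sharply superlinear.

```python
import numpy as np

def polariton_like_response(pump, p_th=1.0, alpha=5.0):
    """Toy emission-vs-pump curve: nearly linear below the condensation
    threshold p_th, sharply superlinear above it. The functional form
    and constants are illustrative assumptions, not device physics."""
    pump = np.asarray(pump, dtype=float)
    return np.where(pump < p_th, pump, pump + alpha * (pump - p_th) ** 2)

low = polariton_like_response([0.2, 0.4])    # below threshold: doubling input doubles output
high = polariton_like_response([1.2, 2.4])   # above threshold: doubling input more than doubles it
```

A cascade of linear maps collapses into a single linear map, so only the above-threshold regime lets the physical layer contribute pattern discrimination beyond what the readout alone provides.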

The researchers conclude that “perovskite microcavity exciton polaritons offer ultrafast processing speeds on the picosecond timescale and exhibit exceptionally strong nonlinear interactions, significantly surpassing those in traditional photonic systems.” These attributes make them powerful candidates for future physical neural networks capable of real-time, energy-efficient AI.

This work highlights the growing role of halide perovskites in next-generation photonic computing and marks an important step toward developing all-optical neuromorphic hardware—free from the energy and speed limitations of traditional electronics.

Here’s a link to and a citation for the paper,

Ultrafast neuromorphic computing driven by polariton nonlinearities by Yusong Gan, Ying Shi, Sanjib Ghosh, Haiyun Liu, Huawen Xu & Qihua Xiong. eLight volume 5, Article number: 9 (2025) DOI: https://doi.org/10.1186/s43593-025-00087-9 Published: 02 June 2025

This paper is open access.

Mimicking human color vision with self-powered artificial synapse

This June 3, 2025 news item on Nanowerk announces new research into machine vision, Note: A link has been removed,

As artificial intelligence and smart devices continue to evolve, machine vision is taking an increasingly pivotal role as a key enabler of modern technologies. Unfortunately, despite much progress, machine vision systems still face a major problem: processing the enormous amounts of visual data generated every second requires substantial power, storage, and computational resources. This limitation makes it difficult to deploy visual recognition capabilities in edge devices, such as smartphones, drones, or autonomous vehicles.

Interestingly, the human visual system offers a compelling alternative model. Unlike conventional machine vision systems that have to capture and process every detail, our eyes and brain selectively filter information, allowing for higher efficiency in visual processing while consuming minimal power. Neuromorphic computing, which mimics the structure and function of biological neural systems, has thus emerged as a promising approach to overcome existing hurdles in computer vision. However, two major challenges have persisted. The first is achieving color recognition comparable to human vision, whereas the second is eliminating the need for external power sources to minimize energy consumption.

Against this backdrop, a research team led by Associate Professor Takashi Ikuno from the School of Advanced Engineering, Department of Electronic Systems Engineering, Tokyo University of Science (TUS), Japan, has developed a groundbreaking solution. Their paper, published in Scientific Reports (“Polarity-tunable dye-sensitized optoelectronic artificial synapses for physical reservoir computing-based machine vision”), introduces a self-powered artificial synapse capable of distinguishing colors with remarkable precision. The study was co-authored by Mr. Hiroaki Komatsu and Ms. Norika Hosoda, also from TUS.

A June 2, 2025 Tokyo University of Science press release (also on EurekAlert), which originated the news item, provides more technical detail,

The researchers created their device by integrating two different dye-sensitized solar cells, which respond differently to various wavelengths of light. Unlike conventional optoelectronic artificial synapses that require external power sources, the proposed synapse generates its own electricity via solar energy conversion. This self-powering capability makes it particularly suitable for edge computing applications, where energy efficiency is crucial.

As evidenced through extensive experiments, the resulting system can distinguish between colors with a resolution of 10 nanometers across the visible spectrum—a level of discrimination approaching that of the human eye. Moreover, the device also exhibited bipolar responses, producing positive voltage under blue light and negative voltage under red light. This makes it possible to perform complex logic operations that would typically require multiple conventional devices. “The results show great potential for the application of this next-generation optoelectronic device, which enables high-resolution color discrimination and logical operations simultaneously, to low-power artificial intelligence (AI) systems with visual recognition,” notes Dr. Ikuno.
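The bipolar response makes single-device logic easy to picture. In the sketch below the voltage magnitudes are invented placeholders (the paper reports the polarity behaviour, not these numbers); thresholding the summed photovoltage then implements a simple logic function with one device:

```python
def device_voltage(blue_on, red_on, v_blue=0.3, v_red=-0.25):
    """Summed photovoltage of the two dye-sensitized cells: positive
    under blue light, negative under red. Magnitudes are assumed
    for illustration only."""
    return v_blue * blue_on + v_red * red_on

def blue_and_not_red(blue_on, red_on, threshold=0.1):
    # One device plus a comparator realizes "blue AND NOT red":
    # red light's negative contribution vetoes the blue response.
    return device_voltage(blue_on, red_on) > threshold
```

In a conventional setup this would take two photodiodes and separate logic circuitry; here the opposing polarities do the combining inside the device itself.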

To demonstrate a real-world application, the team used their device in a physical reservoir computing framework to recognize different human movements recorded in red, green, and blue. The system achieved an impressive 82% accuracy when classifying 18 different combinations of colors and movements using just a single device, rather than the multiple photodiodes needed in conventional systems.

The implications of this research extend across multiple industries. In autonomous vehicles, these devices could enable more efficient recognition of traffic lights, road signs, and obstacles. In healthcare, they could power wearable devices that monitor vital signs like blood oxygen levels with minimal battery drain. For consumer electronics, this technology could lead to smartphones and augmented/virtual reality headsets with dramatically improved battery life while maintaining sophisticated visual recognition capabilities. “We believe this technology will contribute to the realization of low-power machine vision systems with color discrimination capabilities close to those of the human eye, with applications in optical sensors for self-driving cars, low-power biometric sensors for medical use, and portable recognition devices,” remarks Dr. Ikuno.

Overall, this work represents a significant step toward bringing the wonders of computer vision to edge devices, enabling our everyday devices to see the world more like we do.

Here’s a link to and a citation for the paper,

Polarity-tunable dye-sensitized optoelectronic artificial synapses for physical reservoir computing-based machine vision by Hiroaki Komatsu, Norika Hosoda & Takashi Ikuno. Scientific Reports volume 15, Article number: 16488 (2025) DOI: https://doi.org/10.1038/s41598-025-00693-0 Published: 12 May 2025

This paper is open access.

It’s not all about simulating the synapse for neuromorphic (brainlike) computing: presenting dendritic integration

Michael Berger’s May 20, 2025 Nanowerk Spotlight article features a new (to me) aspect (or, if you prefer, challenge) to neuromorphic computing, Note: A link has been removed,

Efforts to design computing systems that operate more like the brain have pushed engineers to rethink how information is processed, transmitted, and stored. Biological neurons are not simple relays. Their ability to process input relies not just on synapses—the connections between neurons—but also on dendrites. These branching structures collect and integrate signals across both time and space, shaping how a neuron responds.

Most neuromorphic devices developed so far have focused on mimicking synaptic functions. Dendritic behavior, which governs how multiple inputs are combined and modulated, remains less explored. This gap limits the capacity of neuromorphic hardware to emulate the full computational complexity of biological neurons.

For anyone unfamiliar with dendrites, here’s a description from the Dendrite Wikipedia entry, which follows the image, Note: Links not included in the caption for the image have been removed,

Credit: Curtis Neveu – Own work. Caption: The neuron contains dendrites that receive information, a cell body called the soma, and an axon that sends information. Schwann cells make activity move faster down the axon. Synapses allow neurons to activate other neurons. The dendrites receive a signal, the axon hillock funnels the signal to the initial segment, and the initial segment triggers the activity (action potential) that is sent along the axon towards the synapse. Please see learnbio.org for interactive version. CC BY-SA 4.0

A dendrite (from Greek δένδρον déndron, “tree”) or dendron is a branched cytoplasmic process that extends from a nerve cell that propagates the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron from which the dendrites project. Electrical stimulation is transmitted onto dendrites by upstream neurons (usually via their axons) via synapses which are located at various points throughout the dendritic tree.

Dendrites play a critical role in integrating these synaptic inputs and in determining the extent to which action potentials are produced by the neuron.[1]

Berger’s May 20, 2025 article explains how scientists are attempting to create artificial dendrites, Note: Links have been removed,

Artificial dendrites are difficult to construct. Unlike synapses, which can often be replicated with resistive memory elements (memristors), dendrites require spatially distributed signal processing and sensitivity to the timing of input spikes. Biological dendrites perform this by managing ion flow across complex membrane structures, often with localized chemical and electrical variations. Traditional electronic systems, which rely on electrons in solid-state circuits, struggle to reproduce these dynamics.

Ionic devices offer a more faithful analogue. In particular, nanofluidic memristors—devices that transport ions through confined channels—can mimic how neurons regulate ionic currents. Prior work has shown that such systems can simulate synaptic plasticity and memory. Yet most rely on electrical stimulation, which adds complexity to control circuitry.

In contrast, light offers a clean, contactless way to manipulate ion behavior. Optogenetics, a biological technique that uses light to activate ion channels in neurons, has shown how effective this can be. Researchers have started applying similar principles to synthetic systems, but artificial dendrites with full spatiotemporal integration remain rare.

A study published in Advanced Materials (“Optogenetics‐Inspired Nanofluidic Artificial Dendrite with Spatiotemporal Integration Functions”) introduces a nanofluidic device that addresses this challenge. Developed by a team at Northeast Normal University [NENU], the system integrates layered graphene oxide (GO) into a flexible polydimethylsiloxane (PDMS) matrix. It uses light to control sodium ion (Na⁺) transport through nanochannels. This approach simulates how dendrites integrate signals from different spatial locations and over time. It also lays the groundwork for more advanced neuromorphic machines that include artificial sensory-motor reflexes.

This work shows how optical modulation of ionic pathways can be used to create functional artificial dendrites. It opens a path toward more realistic neural circuits in hardware, capable not just of memory and learning, but of the nuanced signal processing required for perception and motor control. As components like this are refined, they could play a central role in building autonomous systems that interact more naturally with their environment.

Here’s a link to and a citation for the paper,

Optogenetics-Inspired Nanofluidic Artificial Dendrite with Spatiotemporal Integration Functions by Zhuangzhuang Li, Ya Lin, Xuanyu Shan, Zhongqiang Wang, Xiaoning Zhao, Ye Tao, Haiyang Xu, Yichun Liu. Advanced Materials, Article number: 2502438. DOI: https://doi.org/10.1002/adma.202502438 First published: 16 May 2025 (Online Version of Record before inclusion in an issue)

This paper is behind a paywall.

If you have the time, Berger’s May 20, 2025 article provides more detail about the device.

A Multidisciplinary Centre for Neuromorphic (brainlike) Computing in the UK

A May 6, 2025 Aston University press release (also on EurekAlert but published May 7, 2025) announces a UK ‘neuromorphic initiative’, Note: Links have been removed,

  • Aston University to lead the UK’s new centre to pioneer brain-inspired, energy-efficient computing technologies 
  • The initiative will receive £5.6 million over four years from the Engineering and Physical Sciences Research Council [EPSRC]
  • The aim of the centre is to become a focal point for networking and collaboration on fundamental research and technology.

The UK will be getting a new centre to pioneer brain-inspired, energy-efficient computing technologies.

The UK Multidisciplinary Centre for Neuromorphic Computing is led by Aston University and will receive £5.6 million over four years from the UKRI [UK Research and Innovation] Engineering and Physical Sciences Research Council (EPSRC).

The aim of the centre is to become a focal point for networking and collaboration on fundamental research and technology of neuromorphic computing to address the sustainability challenges facing today’s digital infrastructure and artificial intelligence systems.

The centre will be led by the Aston Institute of Photonic Technologies (AIPT) and will include world-leading researchers from Aston University, the University of Oxford, the University of Cambridge, the University of Southampton, Queen Mary University of London, Loughborough University and the University of Strathclyde.

Neuromorphic computing seeks to replicate the brain’s structural and functional principles; however, scientists currently lack a deep, system-level understanding of how the human brain computes at cellular and network scales. The researchers aim to tackle that challenge directly, blending stem-cell-derived human neuron experiments with advanced computational models, low-power algorithms and novel photonic hardware.

The centre team includes world-leading researchers with broad and complementary expertise in neuroscience, non-conventional computing algorithms, photonics, opto- and nano-electronics and materials science. In collaboration with policymakers and industrial partners the scientists and engineers aim to demonstrate the capabilities of neuromorphic computing across a range of sectors and applications. The centre will be supported by a broad network of industry partners including Microsoft Research, Thales, BT, QinetiQ, Nokia Bell Labs, Hewlett Packard Labs, Leonardo, Northrop Grumman and a number of small to medium enterprises. Their contribution will focus on enhancing the centre’s impact on society.

Professor Rhein Parri, co-director and neurophysiologist at Aston University said: “For the first time, we can combine the study of living human neurons with that of advanced computing platforms to co-develop the future of computing. 

“This project is an exciting leap forward, learning from biology and technology in ways that were not previously possible.”

The experts aim to co-design brain-inspired neuromorphic systems by studying human neuronal function using the latest human induced pluripotent stem cell – or hiPSC technologies – and developing new computational paradigms and low-power AI algorithms. They also plan to create devices and hardware that are inspired by biological systems, like the human brain. These devices will use light – or photonic hardware – to process information. This approach will be the next big step in making computing more energy-efficient and capable of handling many tasks at the same time. They also aim to create a sustainable UK research ecosystem through training, road mapping, and international collaboration.

Professor Sergei K. Turitsyn, director of the centre and AIPT, said: “The project’s ambition is not only to develop future technologies, but also to create a new internationally known UK research brand in neuromorphic computing that will unite the UK’s best minds across disciplines and will lead to sustainable operation and a long-term impact. It’s a proud moment for AIPT and Aston University to lead this national effort.”

Professor Natalia Berloff, co-director of the centre who is based at the University of Cambridge said: “One of the most exciting aspects of neuromorphic computing is the potential of photonic hardware to deliver truly brain-like efficiency. 

“Light-based processors can exploit massive parallelism and ultrafast signal propagation to outperform conventional electronics on demanding AI workloads, while consuming far less power. By combining these photonic architectures with insights from living human neurons, we aim to co-design neuromorphic systems that move beyond incremental improvements and toward a genuinely transformative computing paradigm.”

In addition, the researchers aim to tackle the increasing global energy footprint of information and communication technologies which is developing at an unsustainable pace, driven partly by the explosive growth of artificial intelligence. Today’s AI systems are built on traditional computing hardware with increasingly high-power consumption (kW), posing a barrier to scalability and sustainability. In contrast, the human brain performs complex computation and communication tasks using just 20 watts.

Professor Dimitra Georgiadou, co-director of the centre who is based at the University of Southampton added: “To address the challenge of substantially lowering the power consumption in electronics, novel materials and device architectures are needed that can effectively emulate computation in the brain and cellular responses to certain stimuli.”

The centre’s ambition goes beyond technology development as it aims to serve as a foundation for a long-term, interdisciplinary research ecosystem – actively expanding its membership and reach over time. It aims to establish a sustainable centre that continues to be a focal point for the community and will thrive beyond the initial funding period, reinforcing innovation, partnership, and impact in the field of neuromorphic computing.

Good luck to this effort to lower power consumption.

Memristors could help AIs overcome ‘catastrophic forgetting’

A March 20, 2025 news item on ScienceDaily describes a ‘novel’ memristor,

They consume extremely little power and behave similarly to brain cells: so-called memristors. Researchers from Jülich [Forschungszentrum Juelich; Germany], led by Ilia Valov, have now introduced novel memristive components in Nature Communications that offer significant advantages over previous versions: they are more robust, function across a wider voltage range, and can operate in both analog and digital modes. These properties could help address the problem of “catastrophic forgetting,” where artificial neural networks abruptly forget previously learned information.

The problem of “catastrophic forgetting” occurs when deep neural networks are trained for a new task. This is because a new optimization simply overwrites a previous one. The brain does not have this problem because it can apparently adjust the degree of synaptic change; experts are now also talking about a so-called “metaplasticity”. They suspect that it is only through these different degrees of plasticity that our brain can permanently learn new tasks without forgetting old content. The new memristor accomplishes something similar.

“Its unique properties allow the use of different switching modes to control the modulation of the memristor in such a way that stored information is not lost,” says Ilia Valov from the Peter Grünberg Institute (PGI-7) at Forschungszentrum Jülich.

A March 20, 2025 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, provides context for the work along with more technical details,

Ideal candidates for neuro-inspired devices

Modern computer chips are evolving rapidly. Their development could receive a further boost from memristors—a term derived from memory and resistor. These components are essentially resistors with memory: their electrical resistance changes depending on the applied voltage, and unlike conventional switching elements, their resistance value remains even after the voltage is turned off. This is because memristors can undergo structural changes—for example, due to atoms depositing on the electrodes.
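The "resistor with memory" behaviour described above is easy to simulate. The sketch below uses the textbook linear ion-drift model (after the classic HP memristor work), not the FCM mechanism reported by the Jülich team; it shows resistance falling while a voltage is applied and then holding its value once the voltage is removed.

```python
import numpy as np

# Minimal linear ion-drift memristor model (a textbook sketch, not the
# mechanism reported in this paper).
R_ON, R_OFF = 100.0, 16e3      # resistance of fully doped / undoped device (ohms)
D, MU = 10e-9, 1e-14           # film thickness (m), ion mobility (m^2 V^-1 s^-1)

def simulate(voltages, dt=1e-3, w=0.1 * D):
    """Integrate the doped-region width w; return the resistance trace."""
    trace = []
    for v in voltages:
        x = w / D
        R = R_ON * x + R_OFF * (1 - x)   # two resistive regions in series
        trace.append(R)
        i = v / R
        w += MU * R_ON / D * i * dt      # state moves with the charge that flows
        w = min(max(w, 0.0), D)          # state bounded by the film thickness
    return np.array(trace)

# A train of positive pulses lowers the resistance; with the voltage
# removed (v = 0) no current flows, so the state and resistance persist.
trace = simulate([1.0] * 200 + [0.0] * 200)
```

The retained resistance after the pulse train is the nonvolatility that makes memristors attractive as synaptic weights.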

“Memristive elements are considered ideal candidates for learning-capable, neuro-inspired computer components modeled on the brain,” says Ilia Valov.

Despite considerable progress and efforts, the commercialization of the components is progressing slower than expected. This is due in particular to an often high failure rate in production and a short lifespan of the products. In addition, they are sensitive to heat generation or mechanical influences, which can lead to frequent malfunctions during operation. “Basic research is therefore essential to better control nanoscale processes,” says Valov, who has been working in this field of memristors for many years. ”We need new materials and switching mechanisms to reduce the complexity of the systems and increase the range of functionalities.”

It is precisely in this regard that the chemist and materials scientist, together with German and Chinese colleagues, has now been able to report an important success: “We have discovered a fundamentally new electrochemical memristive mechanism that is chemically and electrically more stable,” explains Valov. The development has now been presented in the journal Nature Communications.

A New Mechanism for Memristors

“So far, two main mechanisms have been identified for the functioning of so-called bipolar memristors: ECM and VCM,” explains Valov. ECM stands for ‘Electrochemical Metallization’ and VCM for ‘Valence Change Mechanism’.

  • ECM memristors form a metallic filament between the two electrodes—a tiny “conductive bridge” that alters electrical resistance and dissolves again when the voltage is reversed. The critical parameter here is the energy barrier (resistance) of the electrochemical reaction. This design allows for low switching voltages and fast switching times, but the generated states are variable and relatively short-lived.
     
  • VCM memristors, on the other hand, do not change resistance through the movement of metal ions but rather through the movement of oxygen ions at the interface between the electrode and electrolyte—by modifying the so-called Schottky barrier. This process is comparatively stable but requires high switching voltages.

Each type of memristor has its own advantages and disadvantages. “We therefore considered designing a memristor that combines the benefits of both types,” explains Ilia Valov. Among experts, this was previously thought to be impossible. “Our new memristor is based on a completely different principle: it utilizes a filament made of metal oxides rather than a purely metallic one like ECM,” Valov explains. This filament is formed by the movement of oxygen and tantalum ions and is highly stable—it never fully dissolves. “You can think of it as a filament that always exists to some extent and is only chemically modified,” says Valov.

The novel switching mechanism is therefore very robust. The scientists also refer to it as a filament conductivity modification mechanism (FCM). Components based on this mechanism have several advantages: they are chemically and electrically more stable, more resistant to high temperatures, have a wider voltage window and require lower voltages during fabrication. As a result, fewer components burn out during the manufacturing process, the reject rate is lower and their lifespan is longer.

Perspective solution for “catastrophic forgetting”

On top of that, the different oxidation states allow the memristor to be operated in a binary and/or analog mode. While binary signals are digital and can only output two states, analog signals are continuous and can take on any intermediate value. This combination of analog and digital behavior is particularly interesting for neuromorphic chips because it can help to overcome the problem of “catastrophic forgetting”: deep neural networks delete what they have learned when they are trained for a new task. This is because a new optimization simply overwrites a previous one.

The brain does not have this problem because it can apparently adjust the degree of synaptic change; experts are now also talking about a so-called “metaplasticity”. They suspect that it is only through these different degrees of plasticity that our brain can permanently learn new tasks without forgetting old content. The new ohmic memristor accomplishes something similar. “Its unique properties allow the use of different switching modes to control the modulation of the memristor in such a way that stored information is not lost,” says Valov.

The researchers have already implemented the new memristive component in a model of an artificial neural network in a simulation. In several image data sets, the system achieved a high level of accuracy in pattern recognition. In the future, the team wants to look for other materials for memristors that might work even better and more stably than the version presented here. “Our results will further advance the development of electronics for ‘computation-in-memory’ applications,” Valov is certain.
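Catastrophic forgetting, and the kind of remedy metaplasticity suggests, can be demonstrated with a toy numpy experiment (my own illustration, not the team's simulation): train a linear model on task A, then on task B, and compare plain sequential training with a version that penalizes drift away from the task-A weights.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=300, anchor=None, strength=0.0):
    """Full-batch gradient descent on squared error; the optional anchor
    term penalizes drift away from previously learned weights, a crude
    software stand-in for metaplasticity."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        if anchor is not None:
            grad = grad + strength * (w - anchor)
        w = w - lr * grad
    return w

# Two linear regression tasks with different ground-truth weights.
d = 10
wa_true, wb_true = rng.normal(size=d), rng.normal(size=d)
Xa, Xb = rng.normal(size=(200, d)), rng.normal(size=(200, d))
ya, yb = Xa @ wa_true, Xb @ wb_true

w_taskA = train(np.zeros(d), Xa, ya)                # learn task A

w_naive = train(w_taskA, Xb, yb)                    # task B naively: A is overwritten
w_anchored = train(w_taskA, Xb, yb, anchor=w_taskA, strength=1.0)

forgetting_naive = loss(w_naive, Xa, ya) - loss(w_taskA, Xa, ya)
forgetting_anchored = loss(w_anchored, Xa, ya) - loss(w_taskA, Xa, ya)
```

Anchoring here is done in software; the point of the new component is that the memristor's switching modes could modulate how easily a stored weight is overwritten directly in hardware.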

Here’s a link to and a citation for the paper,

Electrochemical ohmic memristors for continual learning by Shaochuan Chen, Zhen Yang, Heinrich Hartmann, Astrid Besmehn, Yuchao Yang & Ilia Valov. Nature Communications volume 16, Article number: 2348 (2025) DOI: https://doi.org/10.1038/s41467-025-57543-w Published: 08 March 2025

This paper is open access.

‘Super-Turing AI’ uses less energy to mimic brain

Neuromorphic (brainlike) engineering and neuromorphic computing being long-time interests here, this March 26, 2025 news item on ScienceDaily caught my eye,

Artificial Intelligence (AI) can perform complex calculations and analyze data faster than any human, but to do so requires enormous amounts of energy. The human brain is also an incredibly powerful computer, yet it consumes very little energy.

As technology companies increasingly expand, a new approach to AI’s “thinking,” developed by researchers including Texas A&M University engineers, mimics the human brain and has the potential to revolutionize the AI industry.

A March 25, 2025 Texas A&M University news release (also on EurekAlert) by Lesley Henton, which originated the news item, delves further into the creation of a “Super-Turing AI,” Note: Links have been removed,

As technology companies increasingly expand, a new approach to AI’s “thinking,” developed by researchers including Texas A&M University engineers, mimics the human brain and has the potential to revolutionize the AI industry.

Dr. Suin Yi, assistant professor of electrical and computer engineering at Texas A&M’s College of Engineering, is on a team of researchers that developed “Super-Turing AI,” which operates more like the human brain. This new AI integrates certain processes instead of separating them and then migrating huge amounts of data like current systems do.

The Energy Crisis In AI

Today’s AI systems, including large language models [LLM] such as OpenAI [a company not an LLM] and ChatGPT [an LLM produced by OpenAI], require immense computing power and are housed in expansive data centers that consume vast amounts of electricity.

“These data centers are consuming power in gigawatts, whereas our brain consumes 20 watts,” Yi explained. “That’s 1 billion watts compared to just 20. Data centers that are consuming this energy are not sustainable with current computing methods. So while AI’s abilities are remarkable, the hardware and power generation needed to sustain it still lag behind.”

The substantial energy demands not only escalate operational costs but also raise environmental concerns, given the carbon footprint associated with large-scale data centers. As AI becomes more integrated, addressing its sustainability becomes increasingly critical.

Emulating The Brain

Yi and team believe the key to solving this problem lies in nature — specifically, the human brain’s neural processes.

In the brain, the functions of learning and memory are not separated; they are integrated. Learning and memory rely on connections between neurons, called “synapses,” where signals are transmitted. Learning strengthens or weakens synaptic connections through a process called “synaptic plasticity,” forming new circuits and altering existing ones to store and retrieve information.

By contrast, in current computing systems, training (how the AI is taught) and memory (data storage) happen in two separate places within the computer hardware. Super-Turing AI is revolutionary because it bridges this efficiency gap, so the computer doesn’t have to migrate enormous amounts of data from one part of its hardware to another.

“Traditional AI models rely heavily on backpropagation — a method used to adjust neural networks during training,” Yi said. “While effective, backpropagation is not biologically plausible and is computationally intensive.

“What we did in that paper is troubleshoot the biological implausibility present in prevailing machine learning algorithms,” he said. “Our team explores mechanisms like Hebbian learning and spike-timing-dependent plasticity — processes that help neurons strengthen connections in a way that mimics how real brains learn.”

Hebbian learning principles are often summarized as “cells that fire together, wire together.” This approach aligns more closely with how neurons in the brain strengthen their connections based on activity patterns. By integrating such biologically inspired mechanisms, the team aims to develop AI systems that require less computational power without compromising performance.
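The “fire together, wire together” rule is simple enough to sketch in a few lines. The snippet below is a minimal, hypothetical illustration of a Hebbian weight update (all names and values are mine, not the team’s): each weight changes in proportion to the product of its pre- and post-synaptic activity.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """Hebbian rule: each weight grows in proportion to the product of
    its pre- and post-synaptic activity ("fire together, wire together")."""
    return weights + lr * np.outer(post, pre)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 4))   # 4 input neurons -> 3 output neurons
x = np.array([1.0, 0.0, 1.0, 0.0])       # pre-synaptic activity (inputs 1, 3 silent)
y = w @ x                                # post-synaptic activity
w_before = w.copy()
w = hebbian_update(w, x, y)              # silent inputs leave their weights untouched
```

Note that, unlike backpropagation, the update is purely local: each synapse needs only the activity of the two neurons it connects, which is what makes the rule biologically plausible and cheap to implement in hardware.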

In a test, a circuit using these components helped a drone navigate a complex environment — without prior training — learning and adapting on the fly. This approach was faster, more efficient and used less energy than traditional AI.

Why This Matters For The Future Of AI

This research could be a game-changer for the AI industry. Companies are racing to build larger and more powerful AI models, but their ability to scale is limited by hardware and energy constraints. In some cases, new AI applications require building entire new data centers, further increasing environmental and economic costs.

Yi emphasizes that innovation in hardware is just as crucial as advancements in AI systems themselves. “Many people say AI is just a software thing, but without computing hardware, AI cannot exist,” he said.

Looking Ahead: Sustainable AI Development

Super-Turing AI represents a pivotal step toward sustainable AI development. By reimagining AI architectures to mirror the efficiency of the human brain, the industry can address both economic and environmental challenges.

Yi and his team hope that their research will lead to a new generation of AI that is both smarter and more efficient.

“Modern AI like ChatGPT is awesome, but it’s too expensive. We’re going to make sustainable AI,” Yi said. “Super-Turing AI could reshape how AI is built and used, ensuring that as it continues to advance, it does so in a way that benefits both people and the planet.”

There’s no mention of a memristor but there is a ‘synaptic resistor’, which I find puzzling. Is a synaptic resistor something different? In a search with these search terms “synaptic resistor memristor” I found this,

The term “memristive synapses” signifies the amalgamation of memristor functionality with synaptic characteristics, resulting in a novel approach to neuromorphic computing.

I’m guessing memristive synapses can also be called synaptic resistors or, at the least, are related concepts.

I pulled the definition from,

Resistive Switching Properties in Memristors for Optoelectronic Synaptic Memristors: Deposition Techniques, Key Performance Parameters, and Applications by Rajwali Khan, Naveed Ur Rehman, Shahid Iqbal, Sherzod Abdullaev, and Haila M. Aldosari. ACS Applied Electronic Materials Vol. 6, Issue 1, pp. 73–119 DOI: https://doi.org/10.1021/acsaelm.3c01323 Published December 29, 2023 Copyright © 2023 The Authors. Published by American Chemical Society. This publication is licensed under CC-BY 4.0.

Getting back to this latest work from Texas A&M University, here’s a link to and a citation for Dr. Suin Yi and his team’s paper,

HfZrO-based synaptic resistor circuit for a Super-Turing intelligent system by Jungmin Lee, Rahul Shenoy, Atharva Deo, Suin Yi, Dawei Gao, David Qiao, Mingjie Xu, Shiva Asapu, Zixuan Rong, Dhruva Nathan, Yong Hei, Dharma Paladugu, Jian-Guo Zheng, J. Joshua Yang, R. Stanley Williams, Qing Wu, and Yong Chen. Science Advances 28 Feb 2025 Vol 11, Issue 9 DOI: 10.1126/sciadv.adr2082

This paper is open access.

Notice that one of the Super-Turing paper’s authors is R. Stanley Williams, who ‘discovered’ the memristor in 2008. You can read his November 28, 2008 article “How We Found the Missing Memristor; The memristor—the functional equivalent of a synapse—could revolutionize circuit design” in IEEE Spectrum online,

It’s time to stop shrinking. Moore’s Law, the semiconductor industry’s obsession with the shrinking of transistors and their commensurate steady doubling on a chip about every two years, has been the source of a 50-year technical and economic revolution. Whether this scaling paradigm lasts for five more years or 15, it will eventually come to an end. The emphasis in electronics design will have to shift to devices that are not just increasingly infinitesimal but increasingly capable.

Earlier this year, I and my colleagues at Hewlett-Packard Labs, in Palo Alto, Calif., surprised the electronics community with a fascinating candidate for such a device: the memristor. It had been theorized nearly 40 years ago, but because no one had managed to build one, it had long since become an esoteric curiosity. That all changed on 1 May [2008], when my group published the details of the memristor in Nature.

For anyone interested in a trip down memory road, I have a few comments from the theorist (Leon Chua) mentioned in his 2008 article in this April 13, 2010 posting (scroll down to the ‘More on memristors’ subhead).

New magnetic state, ‘Vortion,’ able to mimic neuronal synapses

A March 3, 2025 news item on phys.org announces a new magnetic state,

Researchers from the Department of Physics [at the Autonomous University of Barcelona] have managed to experimentally develop a new magnetic state: a magneto-ionic vortex or “vortion.” The research, published in Nature Communications, allows for an unprecedented level of control of magnetic properties at the nanoscale and at room temperature, and opens new horizons for the development of advanced magnetic devices.

A March 3, 2025 Universitat Autonoma de Barcelona [Autonomous University of Barcelona] press release on EurekAlert, which originated the news item, describes the impetus for this research,

The use of Big Data has multiplied the energy demand in information technologies. Generally, to store information, systems utilize electric currents to write data, which dissipates power by heating the devices. Controlling magnetic memories with voltage, instead of electric currents, can minimise this energy expenditure. One way to achieve this is by using magneto-ionic materials, which allow for the manipulation of their magnetic properties by adding or removing ions through changes in the polarity of the applied voltage. So far, most studies in this area have focused on continuous films, rather than on controlling properties at the nanometric scale in discrete “bits”, essential for high-density data storage. Moreover, it is known that new magnetic phenomena can emerge at the sub-micrometre scale, that do not exist at the macroscopic level, such as magnetic vortices – small swirl-like magnetic structures. These vortices have applications in the way magnetic data are currently recorded and read, as well as in biomedicine. Nevertheless, changing the vortex state in already prepared materials is often impossible or requires large amounts of energy.

Researchers from the UAB Department of Physics, in collaboration with scientists from the ICMAB-CSIC, the ALBA Synchrotron and research institutions in Italy and the United States, propose a new solution that combines magneto-ionics and magnetic vortices. Researchers experimentally developed a new magnetic state that they have named magneto-ionic vortex, or “vortion”. This new object allows “on-demand” control of the magnetic properties of a nanodot (a dot of nanometric dimensions) with high precision. This is achieved by extracting nitrogen ions through the application of voltage, thus allowing for efficient control with very low energy consumption.

“This is a so far unexplored object at the nanoscale,” explains ICREA [Catalan Institution for Research and Advanced Studies] researcher in the UAB Department of Physics Jordi Sort, director of the research. “There is a great demand for controlling magnetic states at the nanoscale but, surprisingly, most of the research in magneto-ionics has so far focused on the study of films of continuous materials. If we look at the effects of ion displacement in discrete structures of nanometre dimensions, the ‘nanodots’ we have analysed, we see that very interesting dynamically evolving spin configurations appear, which are unique to these types of structures”. These spin configurations and the magnetic properties of the vortices vary as a function of the duration of the applied voltage. Thus, different magnetic states (e.g., vortices with different properties or states with uniform magnetic orientation) can be generated from nanodots of an initially non-magnetic material by the gradual extraction of ions through the application of voltage.

“With the ‘vortions’ we developed, we can have unprecedented control of magnetic properties such as magnetisation, coercivity, remanence, anisotropy or the critical fields at which vortions are formed or annihilated. These are fundamental properties for storing information in magnetic memories, which we are now able to control and tune in an analogue and reversible manner by a voltage-activated process with very low energy consumption,” explains Irena Spasojević, postdoctoral researcher in the UAB Department of Physics and first author of the paper. “The voltage actuation procedure, instead of using electric current, prevents heating in devices such as laptops, servers and data centres, and it drastically reduces energy loss.”

Researchers have shown that by precisely controlling the thickness of the voltage-generated magnetic layer, the magnetic state of the material can be varied at will, in a controlled and reversible manner, between a non-magnetic state, a state with a uniform magnetic orientation (such as that found in a magnet), and the new magneto-ionic vortex state.

Ability to mimic the behaviour of neuronal synapses

This unprecedented level of control of magnetic properties at the nanoscale and at room temperature opens new horizons for the development of advanced magnetic devices with functionalities that can be tailored once the material has been synthesised. This provides greater flexibility which is needed to meet specific technological demands. “We envision, for example, the integration of reconfigurable magneto-ionic vortices in neural networks as dynamic synapses, capable of mimicking the behaviour of biological synapses”, says Jordi Sort. In the brain, the connections between neurons, the synapses, have different weights (intensities) that adapt dynamically according to the activity and learning process. Similarly, “vortions” could provide tuneable neuronal synaptic weights, reflected in reconfigurable magnetisation or anisotropy values, for neuromorphic (brain-inspired) spintronic devices. In fact, “the activity of biological neurons and synapses is also controlled by electrical signals and ion migration, analogous to our magneto-ionic units,” comments Irena Spasojević.

Researchers believe that, besides their impact in brain-inspired devices, analogue computing or multi-state data storage systems, vortions may have other potential applications, including medical therapy techniques such as theragnostics, data security, magnetic spin computing devices (spin logics), and the generation of spin waves (magnonics).

The research, led by ICREA professor of the UAB Department of Physics Jordi Sort, and postdoctoral researcher of the UAB Department of Physics Irena Spasojević as the first author of the publication, also included Zheng Ma, from the same department, Aleix Barrera and Anna Palau, from the Institute of Materials Science of Barcelona (ICMAB-CSIC), and researchers from the ALBA Synchrotron, the Istituto Nazionale di Ricerca Metrologica (INRiM) of Turin, Italy, and Colorado State University, USA. The study was published in the latest issue of the journal Nature Communications. This study was financed by the REMINDS project from the European Research Council.

Here’s a link to and a citation for the paper,

Magneto-ionic vortices: voltage-reconfigurable swirling-spin analog-memory nanomagnets by Irena Spasojevic, Zheng Ma, Aleix Barrera, Federica Celegato, Alessandro Magni, Sandra Ruiz-Gómez, Michael Foerster, Anna Palau, Paola Tiberto, Kristen S. Buchanan & Jordi Sort. Nature Communications volume 16, Article number: 1990 (2025) DOI: https://doi.org/10.1038/s41467-025-57321-8 Published: 26 February 2025

This paper is open access.

Memristor-based brain-computer interfaces (BCIs)

Brief digression: For anyone unfamiliar with memristors, they are, for want of better terms, devices or elements that have memory in addition to their resistive properties. (For more see: R Jagan Mohan Rao’s undated article ‘What is a Memristor? Principle, Advantages, Applications’ on InsstrumentalTools.com)

A March 27, 2025 news item on ScienceDaily announces a memristor-enhanced brain-computer interface (BCI),

Summary: Researchers have conducted groundbreaking research on memristor-based brain-computer interfaces (BCIs). This research presents an innovative approach for implementing energy-efficient adaptive neuromorphic decoders in BCIs that can effectively co-evolve [emphasis mine] with changing brain signals.

So, the decoder in the BCI will ‘co-evolve’ with your brain? hmmm Also, where is this ‘memristor chip’? The video demo (https://assets-eu.researchsquare.com/files/rs-3966063/v1/7a84dc7037b11bad96ae0378.mp4) shows a volunteer wearing a cap attached by a cable to an intermediary device (an enlarged chip with a brain on it?), which is in turn attached to a screen. I believe some artistic licence has been taken with regard to the brain on the chip.

Caption: Researchers propose an adaptive neuromorphic decoder supporting brain-machine co-evolution. Credit: The University of Hong Kong

A March 25, 2025 University of Hong Kong (HKU) press release (also on EurekAlert but published on March 26, 2025), which originated the news item, explains more about memristors, BCIs, and co-evolution,

Professor Ngai Wong and Dr Zhengwu Liu from the Department of Electrical and Electronic Engineering at the Faculty of Engineering at the University of Hong Kong (HKU), in collaboration with research teams at Tsinghua University and Tianjin University, have conducted groundbreaking research on memristor-based brain-computer interfaces (BCIs). Published in Nature Electronics, this research presents an innovative approach for implementing energy-efficient adaptive neuromorphic decoders in BCIs that can effectively co-evolve with changing brain signals.

A brain-computer interface (BCI) is a computer-based system that creates a direct communication pathway between the brain and external devices, such as computers, allowing individuals to control these devices or applications purely through brain activity, bypassing the need for traditional muscle movements or the nervous system. This technology holds immense potential across a wide range of fields, from assistive technologies to neurological rehabilitation. However, traditional BCIs still face challenges.

“The brain is a complex dynamic system with signals that constantly evolve and fluctuate. This poses significant challenges for BCIs to maintain stable performance over time,” said Professor Wong and Dr Liu. “Additionally, as brain-machine links grow in complexity, traditional computing architectures struggle with real-time processing demands.”

The collaborative research addressed these challenges by developing a 128K-cell memristor chip that serves as an adaptive brain signal decoder. The team introduced a hardware-efficient one-step memristor decoding strategy that significantly reduces computational complexity while maintaining high accuracy. Dr Liu, a Research Assistant Professor in the Department of Electrical and Electronic Engineering at HKU, contributed as a co-first author to this groundbreaking work.

In real-world testing, the system demonstrated impressive capabilities in a four-degree-of-freedom drone flight control task, achieving 85.17% decoding accuracy—equivalent to software-based methods—while consuming 1,643 times less energy and offering 216 times higher normalised speed than conventional CPU-based systems.

Most significantly, the researchers developed an interactive update framework that enables the memristor decoder and brain signals to adapt to each other naturally. This co-evolution, demonstrated in experiments involving ten participants over six-hour sessions, resulted in approximately 20% higher accuracy compared to systems without co-evolution capability.

“Our work on optimising the computational models and error mitigation techniques was crucial to ensure that the theoretical advantages of memristor technology could be realised in practical BCI applications,” explained Dr Liu. “The one-step decoding approach we developed together significantly reduces both computational complexity and hardware costs, making the technology more accessible for a wide range of practical scenarios.”

Professor Wong further emphasised, “More importantly, our interactive updating framework enables co-evolution between the memristor decoder and brain signals, addressing the long-term stability issues faced by traditional BCIs. This co-evolution mechanism allows the system to adapt to natural changes in brain signals over time, greatly enhancing decoding stability and accuracy during prolonged use.”

Building on the success of this research, the team is now expanding their work through a new collaboration with HKU Li Ka Shing Faculty of Medicine and Queen Mary Hospital to develop a multimodal large language model for epilepsy data analysis.

“This new collaboration aims to extend our work on brain signal processing to the critical area of epilepsy diagnosis and treatment,” said Professor Wong and Dr Liu. “By combining our expertise in advanced algorithms and neuromorphic computing with clinical data and expertise, we hope to develop more accurate and efficient models to assist epilepsy patients.”

The research represents a significant step forward in human-centred hybrid intelligence, which combines biological brains with neuromorphic computing systems, opening new possibilities for medical applications, rehabilitation technologies, and human-machine interaction.

The project received support from the RGC Theme-based Research Scheme (TRS) project T45-701/22-R, the STI 2030-Major Projects, the National Natural Science Foundation of China, and the XPLORER Prize.

Here’s a link to and a citation for the paper,

A memristor-based adaptive neuromorphic decoder for brain–computer interfaces by Zhengwu Liu, Jie Mei, Jianshi Tang, Minpeng Xu, Bin Gao, Kun Wang, Sanchuang Ding, Qi Liu, Qi Qin, Weize Chen, Yue Xi, Yijun Li, Peng Yao, Han Zhao, Ngai Wong, He Qian, Bo Hong, Tzyy-Ping Jung, Dong Ming & Huaqiang Wu. Nature Electronics volume 8, pages 362–372 (2025) DOI: https://doi.org/10.1038/s41928-025-01340-2 Published online: 17 February 2025 Issue Date: April 2025

This paper is behind a paywall.

Words from the press release like “… human-centred hybrid intelligence, which combines biological brains with neuromorphic computing systems …” put me in mind of cyborgs.

Pioneering bionic hand achieves human-like grip on plush toys, water bottles, and other everyday objects

This is not a biohybrid hand incorporating ‘living’ and nonliving materials but a hybrid hand incorporating soft and rigid robotics.

A March 5, 2025 news item on ScienceDaily announces work from Johns Hopkins University (JHU; Maryland, US),

Johns Hopkins University engineers have developed a pioneering prosthetic hand that can grip plush toys, water bottles, and other everyday objects like a human, carefully conforming and adjusting its grasp to avoid damaging or mishandling whatever it holds.

The system’s hybrid design is a first for robotic hands, which have typically been too rigid or too soft to replicate a human’s touch when handling objects of varying textures and materials. The innovation offers a promising solution for people with hand loss and could improve how robotic arms interact with their environment.

A March 5, 2025 Johns Hopkins University (JHU) news release (also on EurekAlert), which originated the news item, provides more details, Note: Links have been removed,

“The goal from the beginning has been to create a prosthetic hand that we model based on the human hand’s physical and sensing capabilities—a more natural prosthetic that functions and feels like a lost limb,” said Sriramana Sankar, a Johns Hopkins biomedical engineer who led the work. “We want to give people with upper-limb loss the ability to safely and freely interact with their environment, to feel and hold their loved ones without concern of hurting them.”

The device, developed by the same Neuroengineering and Biomedical Instrumentations Lab that in 2018 created the world’s first electronic “skin” with a humanlike sense of pain [mentioned here in a December 14, 2018 posting], features a multifinger system with rubberlike polymers and a rigid 3D-printed internal skeleton. Its three layers of tactile sensors, inspired by the layers of human skin, allow it to grasp and distinguish objects of various shapes and surface textures, rather than just detect touch. Each of its soft air-filled finger joints can be controlled with the forearm’s muscles, and machine learning algorithms focus the signals from the artificial touch receptors to create a realistic sense of touch, Sankar said. “The sensory information from its fingers is translated into the language of nerves to provide naturalistic sensory feedback through electrical nerve stimulation.”

In the lab, the hand identified and manipulated 15 everyday objects, including delicate stuffed toys, dish sponges, and cardboard boxes, as well as pineapples, metal water bottles, and other sturdier items. In the experiments, the device outperformed the alternatives, successfully handling objects with 99.69% accuracy and adjusting its grip as needed to prevent mishaps. The best example came when it nimbly picked up a thin, fragile plastic cup filled with water, using only three fingers, without denting it.

“We’re combining the strengths of both rigid and soft robotics to mimic the human hand,” Sankar said. “The human hand isn’t completely rigid or purely soft—it’s a hybrid system, with bones, soft joints, and tissue working together. That’s what we want our prosthetic hand to achieve. This is new territory for robotics and prosthetics, which haven’t fully embraced this hybrid technology before. It’s being able to give a firm handshake or pick up a soft object without fear of crushing it.”

To help amputees regain the ability to feel objects while grasping, prostheses will need three key components: sensors to detect the environment, a system to translate that data into nerve-like signals, and a way to stimulate nerves so the person can feel the sensation, said Nitish Thakor, a Johns Hopkins biomedical engineering professor who directed the work.

The bioinspired technology allows the hand to function this way, using muscle signals from the forearm, like most hand prostheses. These signals bridge the brain and nerves, allowing the hand to flex, release, or react based on its sense of touch. The result is a robotic hand that intuitively “knows” what it’s touching, much like the nervous system does, Thakor said.

“If you’re holding a cup of coffee, how do you know you’re about to drop it? Your palm and fingertips send signals to your brain that the cup is slipping,” Thakor said. “Our system is neurally inspired—it models the hand’s touch receptors to produce nervelike messages so the prosthetics’ ‘brain,’ or its computer, understands if something is hot or cold, soft or hard, or slipping from the grip.”

While the research is an early breakthrough for hybrid robotic technology that could transform both prosthetics and robotics, more work is needed to refine the system, Thakor said. Future improvements could include stronger grip forces, additional sensors, and industrial-grade materials.

“This hybrid dexterity isn’t just essential for next-generation prostheses,” Thakor said. “It’s what the robotic hands of the future need because they won’t just be handling large, heavy objects. They’ll need to work with delicate materials such as glass, fabric, or soft toys. That’s why a hybrid robot, designed like the human hand, is so valuable—it combines soft and rigid structures, just like our skin, tissue, and bones.” 

Other authors include Wen-Yu Cheng of Florida Atlantic University; Jinghua Zhang, Ariel Slepyan, Mark M. Iskarous, Rebecca J. Greene, Rene DeBrabander, and Junjun Chen of Johns Hopkins; and Arnav Gupta of the University of Illinois Chicago.

Here’s a link to and a citation for the paper,

A natural biomimetic prosthetic hand with neuromorphic tactile sensing for precise and compliant grasping by Sriramana Sankar, Wen-Yu Cheng, Jinghua Zhang, Ariel Slepyan, Mark M. Iskarous, Rebecca J. Greene, Rene DeBrabander, Junjun Chen, Arnav Gupta, and Nitish V. Thakor. Science Advances 5 Mar 2025 Vol 11, Issue 10 DOI: 10.1126/sciadv.adr9300

This paper is open access.

Next-generation neuromorphic, semiconductor-based, ultra-small computing chip learns and corrects itself

This is yet another of my memristor posts. Researchers from Korea Advanced Institute of Science and Technology (KAIST) have some exciting news according to a January 21, 2025 news item on ScienceDaily,

Existing computer systems have separate data processing and storage devices, making them inefficient for processing complex data like AI. A KAIST research team has developed a memristor-based integrated system similar to the way our brain processes information. It is now ready for application in various devices including smart security cameras, allowing them to recognize suspicious activity immediately without having to rely on remote cloud servers, and medical devices with which it can help analyze health data in real time.

KAIST (President Kwang Hyung Lee) announced on the 17th of January [2025] that the joint research team of Professor Shinhyun Choi and Professor Young-Gyu Yoon of the School of Electrical Engineering has developed a next-generation neuromorphic semiconductor-based ultra-small computing chip that can learn and correct errors on its own.

A January 17, 2025 KAIST press release (also on EurekAlert but published January 20, 2025), which originated the news item, provides more information,

What is special about this computing chip is that it can learn and correct errors that occur due to non-ideal characteristics that were difficult to solve in existing neuromorphic devices. For example, when processing a video stream, the chip learns to automatically separate a moving object from the background, and it becomes better at this task over time.

This self-learning ability has been proven by achieving accuracy comparable to ideal computer simulations in real-time image processing. The research team’s main achievement is that it has completed a system that is both reliable and practical, beyond the development of brain-like components.
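The release doesn’t say how the chip separates a moving object from the background, but the simplest self-calibrating version of that task is an exponential running average of the scene: the system slowly ‘learns’ the static background and flags large deviations as moving objects. The sketch below is purely illustrative (my parameters and names, not KAIST’s):

```python
import numpy as np

def update_background(bg, frame, rate=0.05):
    """Exponential running average: the background estimate slowly
    adapts to the static scene; large deviations flag moving objects."""
    return (1 - rate) * bg + rate * frame

rng = np.random.default_rng(2)
bg = np.zeros((8, 8))
for _ in range(200):                       # static scene with sensor noise
    bg = update_background(bg, 10 + rng.normal(scale=0.1, size=(8, 8)))

frame = 10 + rng.normal(scale=0.1, size=(8, 8))
frame[2:4, 2:4] += 5                       # a bright moving object appears
mask = np.abs(frame - bg) > 2.0            # foreground (moving-object) mask
```

On a memristor array, the per-pixel averaging and thresholding can be done in analog, in place, which is what allows this kind of chip to adapt continuously without a separate training pass.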

The research team has developed the world’s first memristor-based integrated system that can adapt to immediate environmental changes, and has presented an innovative solution that overcomes the limitations of existing technology.

At the heart of this innovation is a next-generation semiconductor device called a memristor*. The variable resistance characteristics of this device can replace the role of synapses in neural networks, and by utilizing it, data storage and computation can be performed simultaneously, just like our brain cells.

*Memristor: a compound of “memory” and “resistor”; a next-generation electrical device whose resistance is determined by the amount and direction of charge that has flowed between its two terminals in the past.
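That charge dependence can be made concrete with a toy version of the linear-drift model HP Labs used to describe its 2008 device; the parameter values below are illustrative only, not taken from the KAIST paper.

```python
def memristance(q, r_on=100.0, r_off=16000.0, q_max=1e-4):
    """Toy linear-drift memristor: the net charge q that has flowed through
    the device sets an internal state w in [0, 1], which interpolates the
    resistance between a low (r_on) and a high (r_off) limit.
    Parameter values are illustrative, not from any specific device."""
    w = min(max(q / q_max, 0.0), 1.0)      # clamp the internal state
    return r_off + (r_on - r_off) * w

# Pushing charge one way lowers the resistance; reversing the current
# (negative q) drives it back toward the high-resistance state.
high = memristance(0.0)     # pristine device, high-resistance state
low = memristance(1e-4)     # fully switched, low-resistance state
```

Because the internal state persists when power is removed, the same element can store a synaptic weight and perform the multiplication on it, which is why the KAIST team can compute and store “at one spot.”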

The research team designed a highly reliable memristor that can precisely control resistance changes and developed an efficient system that excludes complex compensation processes through self-learning. This study is significant in that it experimentally verified the commercialization possibility of a next-generation neuromorphic semiconductor-based integrated system that supports real-time learning and inference.

This technology will revolutionize the way artificial intelligence is used in everyday devices, allowing AI tasks to be processed locally without relying on remote cloud servers, making them faster, more privacy-protected, and more energy-efficient.

“This system is like a smart workspace where everything is within arm’s reach instead of having to go back and forth between desks and file cabinets,” explained KAIST researchers Hakcheon Jeong and Seungjae Han, who led the development of this technology. “This is similar to the way our brain processes information, where everything is processed efficiently at once at one spot.”

The research was conducted with Hakcheon Jeong and Seungjae Han, students in the Integrated Master’s and Doctoral Program at the KAIST School of Electrical Engineering, as co-first authors; the results were published online in the international academic journal Nature Electronics on January 8, 2025.

Here’s a link to and a citation for the paper,

Self-supervised video processing with self-calibration on an analogue computing platform based on a selector-less memristor array by Hakcheon Jeong, Seungjae Han, See-On Park, Tae Ryong Kim, Jongmin Bae, Taehwan Jang, Yoonho Cho, Seokho Seo, Hyun-Jun Jeong, Seungwoo Park, Taehoon Park, Juyoung Oh, Jeongwoo Park, Kwangwon Koh, Kang-Ho Kim, Dongsuk Jeon, Inyong Kwon, Young-Gyu Yoon & Shinhyun Choi. Nature Electronics volume 8, pages 168–178 (2025) DOI: https://doi.org/10.1038/s41928-024-01318-6 Published: 08 January 2025 Issue Date: February 2025

This paper is behind a paywall.