Tag Archives: neuromorphic computing

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications, all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches and VR headsets to smart earbuds, smart sensors in factories, and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego. 

Chip performance

Researchers measured the chip’s energy efficiency using a metric known as energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips. 
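For readers who want to see how the metric behaves, here is a minimal sketch of the EDP calculation; the energy and delay numbers below are illustrative placeholders of my own, not figures from the paper.

```python
# Energy-delay product (EDP): energy per operation multiplied by the time
# the operation takes; lower is better. Numbers are illustrative only.

def energy_delay_product(energy_joules: float, delay_seconds: float) -> float:
    """EDP combines energy per operation with the time to complete it."""
    return energy_joules * delay_seconds

# Two hypothetical chips: the second uses less energy AND finishes sooner.
edp_baseline = energy_delay_product(2e-12, 1e-7)    # baseline chip
edp_improved = energy_delay_product(1.5e-12, 7e-8)  # lower energy and delay

print(f"baseline EDP: {edp_baseline:.2e} J*s")
print(f"improved EDP: {edp_improved:.2e} J*s")
print(f"improvement factor: {edp_baseline / edp_improved:.1f}x")
```

Because EDP multiplies the two quantities, a chip that modestly improves both energy and latency improves its EDP by more than either factor alone.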

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. The chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation at the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured were obtained directly on the hardware. In many previous works on compute-in-memory chips, AI benchmark results were often obtained partially by software simulation. 

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor for the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of the RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure. 

To make sure that accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines. 

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
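The two mapping strategies can be sketched in a few lines of illustrative Python. This is my own toy illustration of the general idea, not the team’s software toolchain; the function names and the core count layout are assumptions made for clarity.

```python
# Two ways to map a neural network onto a 48-core chip:
# data-parallel (replicate one layer) and model-parallel (pipeline layers).

NUM_CORES = 48

def data_parallel_mapping(layer: str, num_replicas: int) -> dict:
    """Replicate one layer across several cores so each core runs the
    same weights on a different slice of the input batch."""
    assert num_replicas <= NUM_CORES
    return {core: layer for core in range(num_replicas)}

def model_parallel_mapping(layers: list) -> dict:
    """Place consecutive layers on consecutive cores; inputs then stream
    through the cores in a pipelined fashion."""
    assert len(layers) <= NUM_CORES
    return {core: layer for core, layer in enumerate(layers)}

# Cores 0-3 each hold a copy of conv1 and process different inputs:
print(data_parallel_mapping("conv1", 4))
# Core 0 runs conv1, core 1 runs conv2, core 2 runs fc, as a pipeline:
print(model_parallel_mapping(["conv1", "conv2", "fc"]))
```

The trade-off the press release gestures at: replication raises throughput on one layer, while pipelining keeps all cores busy on different layers at once.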

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The Team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [US Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation. 

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Synaptic transistors for brainlike computers based on (more environmentally friendly) graphene

An August 9, 2022 news item on ScienceDaily describes research investigating materials other than silicon for neuromorphic (brainlike) computing purposes,

Computers that think more like human brains are inching closer to mainstream adoption. But many unanswered questions remain. Among the most pressing: what types of materials can serve as the best building blocks to unlock the potential of this new style of computing?

For most traditional computing devices, silicon remains the gold standard. However, there is a movement to use more flexible, efficient and environmentally friendly materials for these brain-like devices.

In a new paper, researchers from The University of Texas at Austin developed synaptic transistors for brain-like computers using the thin, flexible material graphene. These transistors are similar to the synapses in the brain that connect neurons to each other.

An August 8, 2022 University of Texas at Austin news release (also on EurekAlert but published August 9, 2022), which originated the news item, provides more detail about the research,

“Computers that think like brains can do so much more than today’s devices,” said Jean Anne Incorvia, an assistant professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the lead author on the paper published today in Nature Communications. “And by mimicking synapses, we can teach these devices to learn on the fly, without requiring huge training methods that take up so much power.”

The Research: A combination of graphene and Nafion, a polymer membrane material, makes up the backbone of the synaptic transistor. Together, these materials demonstrate key synaptic-like behaviors — most importantly, the ability for the pathways to strengthen over time as they are used more often, a type of neural muscle memory. In computing, this means that devices will be able to get better at tasks like recognizing and interpreting images over time and do it faster.
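The “strengthen with use” behaviour can be caricatured with a tiny saturating-update model. To be clear, this is my own toy illustration of synaptic potentiation in general, not the device physics reported in the paper; the rate and maximum-weight values are arbitrary.

```python
# Toy potentiation model: each activation nudges a synaptic weight upward,
# saturating at a maximum conductance w_max (illustrative values only).

def potentiate(weight: float, pulses: int,
               rate: float = 0.2, w_max: float = 1.0) -> float:
    """Apply `pulses` activations; each moves the weight a fraction of the
    remaining distance to w_max, a common saturating-update form."""
    for _ in range(pulses):
        weight += rate * (w_max - weight)
    return weight

w_start = 0.1
for used in (1, 5, 20):
    print(f"after {used:2d} pulses: {potentiate(w_start, used):.3f}")
# the more often the pathway is used, the stronger (closer to w_max) it gets
```

The saturating form matters: early uses strengthen the pathway quickly, while a well-worn pathway changes little, which is one loose sense in which such devices show a “muscle memory.”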

Another important finding is that these transistors are biocompatible, which means they can interact with living cells and tissue. That is key for potential applications in medical devices that come into contact with the human body. Most materials used for these early brain-like devices are toxic, so they would not be able to contact living cells in any way.

Why It Matters: With new high-tech concepts like self-driving cars, drones and robots, we are reaching the limits of what silicon chips can efficiently do in terms of data processing and storage. For these next-generation technologies, a new computing paradigm is needed. Neuromorphic devices mimic processing capabilities of the brain, a powerful computer for immersive tasks.

“Biocompatibility, flexibility, and softness of our artificial synapses is essential,” said Dmitry Kireev, a post-doctoral researcher who co-led the project. “In the future, we envision their direct integration with the human brain, paving the way for futuristic brain prosthesis.”

Will It Really Happen: Neuromorphic platforms are starting to become more common. Leading chipmakers such as Intel and Samsung have either produced neuromorphic chips already or are in the process of developing them. However, current chip materials place limitations on what neuromorphic devices can do, so academic researchers are working hard to find the perfect materials for soft brain-like computers.

“It’s still a big open space when it comes to materials; it hasn’t been narrowed down to the next big solution to try,” Incorvia said. “And it might not be narrowed down to just one solution, with different materials making more sense for different applications.”

The Team: The research was led by Incorvia and Deji Akinwande, professor in the Department of Electrical and Computer Engineering. The two have collaborated many times in the past, and Akinwande is a leading expert in graphene, using it in multiple research breakthroughs, most recently as part of a wearable electronic tattoo for blood pressure monitoring.

The idea for the project was conceived by Samuel Liu, a Ph.D. student and first author on the paper, in a class taught by Akinwande. Kireev then suggested the specific project. Harrison Jin, an undergraduate electrical and computer engineering student, measured the devices and analyzed data.

The team collaborated with T. Patrick Xiao and Christopher Bennett of Sandia National Laboratories, who ran neural network simulations and analyzed the resulting data.

Here’s a link to and a citation for the ‘graphene transistor’ paper,

Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing by Dmitry Kireev, Samuel Liu, Harrison Jin, T. Patrick Xiao, Christopher H. Bennett, Deji Akinwande & Jean Anne C. Incorvia. Nature Communications volume 13, Article number: 4386 (2022) DOI: https://doi.org/10.1038/s41467-022-32078-6 Published: 28 July 2022

This paper is open access.

Neuromorphic computing and liquid-light interaction

Simulation result of light affecting liquid geometry, which in turn affects reflection and transmission properties of the optical mode, thus constituting a two-way light–liquid interaction mechanism. The degree of deformation serves as an optical memory, allowing it to store the power magnitude of the previous optical pulse and use fluid dynamics to affect the subsequent optical pulse at the same actuation region, thus constituting an architecture where memory is part of the computation process. Credit: Gao et al., doi 10.1117/1.AP.4.4.046005

This is a fascinating approach to neuromorphic (brainlike) computing and given my recent post (August 29, 2022) about human cells being incorporated into computer chips, it’s part of my recent spate of posts about neuromorphic computing. From a July 25, 2022 news item on phys.org,

Sunlight sparkling on water evokes the rich phenomena of liquid-light interaction, spanning spatial and temporal scales. While the dynamics of liquids have fascinated researchers for decades, the rise of neuromorphic computing has sparked significant efforts to develop new, unconventional computational schemes based on recurrent neural networks, crucial to supporting a wide range of modern technological applications, such as pattern recognition and autonomous driving. As biological neurons also rely on a liquid environment, a convergence may be attained by bringing nanoscale nonlinear fluid dynamics to neuromorphic computing.

A July 25, 2022 SPIE (International Society for Optics and Photonics) press release (also on EurekAlert), which originated the news item, provides more detail,

Researchers from University of California San Diego recently proposed a novel paradigm where liquids, which usually do not strongly interact with light on a micro- or nanoscale, support significant nonlinear response to optical fields. As reported in Advanced Photonics, the researchers predict a substantial light–liquid interaction effect through a proposed nanoscale gold patch operating as an optical heater and generating thickness changes in a liquid film covering the waveguide.

The liquid film functions as an optical memory. Here’s how it works: Light in the waveguide affects the geometry of the liquid surface, while changes in the shape of the liquid surface affect the properties of the optical mode in the waveguide, thus constituting a mutual coupling between the optical mode and the liquid film. Importantly, as the liquid geometry changes, the properties of the optical mode undergo a nonlinear response; after the optical pulse stops, the magnitude of liquid film’s deformation indicates the power of the previous optical pulse.

Remarkably, unlike traditional computational approaches, the nonlinear response and the memory reside at the same spatial region, thus suggesting realization of a compact (beyond von-Neumann) architecture where memory and computational unit occupy the same space. The researchers demonstrate that the combination of memory and nonlinearity allows the possibility of “reservoir computing” capable of performing digital and analog tasks, such as nonlinear logic gates and handwritten image recognition.
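The core reservoir-computing recipe, a fixed nonlinear system whose output is read by a simple trained layer, can be sketched generically. This toy is emphatically not the optofluidic simulation from the paper (it omits the memory dimension entirely and uses a random tanh “reservoir”); it only shows why a nonlinear substrate plus a linear readout can solve tasks, like XOR, that a linear readout alone cannot.

```python
import numpy as np

# Generic reservoir-computing recipe: a FIXED random nonlinear expansion
# plays the reservoir's role; only the linear readout is trained.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR: not linearly separable

# Fixed random projection + tanh nonlinearity ("reservoir states").
W_in = rng.normal(size=(2, 50))
b = rng.normal(size=50)
states = np.tanh(X @ W_in + b)

# Train only the linear readout, by least squares.
w_out, *_ = np.linalg.lstsq(states, y, rcond=None)

pred = (states @ w_out > 0.5).astype(int)
print(pred)  # expected: [0 1 1 0]
```

In the paper’s scheme, the liquid film’s deformation supplies both the nonlinearity and the memory; here the random tanh layer stands in for the nonlinearity only.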

Their model also exploits another significant liquid feature: nonlocality. This enables them to predict computation enhancement that is simply not possible in solid state material platforms with limited nonlocal spatial scale. Despite nonlocality, the model does not quite achieve the levels of modern solid-state optics-based reservoir computing systems, yet the work nonetheless presents a clear roadmap for future experimental works aiming to validate the predicted effects and explore intricate coupling mechanisms of various physical processes in a liquid environment for computation.

Using multiphysics simulations to investigate coupling between light, fluid dynamics, heat transport, and surface tension effects, the researchers predict a family of novel nonlinear and nonlocal optical effects. They go a step further by indicating how these can be used to realize versatile, nonconventional computational platforms. Taking advantage of a mature silicon photonics platform, they suggest improvements to state-of-the-art liquid-assisted computation platforms by around five orders of magnitude in space and at least two orders of magnitude in speed.

Here’s a link to and a citation for the paper,

Thin liquid film as an optical nonlinear-nonlocal medium and memory element in integrated optofluidic reservoir computer by Chengkuan Gao, Prabhav Gaur, Shimon Rubin, Yeshaiahu Fainman. Advanced Photonics, 4(4), 046005 (2022). https://doi.org/10.1117/1.AP.4.4.046005 Published: 1 July 2022

This paper is open access.

Guide for memristive hardware design

An August 15, 2022 news item on ScienceDaily announces a type of guide for memristive hardware design,

They are many times faster than flash memory and require significantly less energy: memristive memory cells could revolutionize the energy efficiency of neuromorphic [brainlike] computers. In these computers, which are modeled on the way the human brain works, memristive cells function like artificial synapses. Numerous groups around the world are working on the use of corresponding neuromorphic circuits — but often with a lack of understanding of how they work and with faulty models. Jülich researchers have now summarized the physical principles and models in a comprehensive review article in the renowned journal Advances in Physics.

An August 15, 2022 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, describes two papers designed to help researchers better understand and design memristive hardware,

Certain tasks – such as recognizing patterns and language – are performed highly efficiently by a human brain, requiring only about one ten-thousandth of the energy of a conventional, so-called “von Neumann” computer. One of the reasons lies in the structural differences: In a von Neumann architecture, there is a clear separation between memory and processor, which requires constant moving of large amounts of data. This is time and energy consuming – the so-called von Neumann bottleneck. In the brain, the computational operation takes place directly in the data memory and the biological synapses perform the tasks of memory and processor at the same time.

In Jülich, scientists have been working for more than 15 years on special data storage devices and components that can have similar properties to the synapses in the human brain. So-called memristive memory devices, also known as memristors, are considered to be extremely fast, energy-saving and can be miniaturized very well down to the nanometer range. The functioning of memristive cells is based on a very special effect: Their electrical resistance is not constant, but can be changed and reset again by applying an external voltage, theoretically continuously. The change in resistance is controlled by the movement of oxygen ions. If these move out of the semiconducting metal oxide layer, the material becomes more conductive and the electrical resistance drops. This change in resistance can be used to store information.
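The resistance-switching behaviour described above can be caricatured in code with the textbook linear ion-drift memristor model. This is a deliberately simplified toy (it is not the valence-change model the Jülich review covers), and the constants are illustrative, not fitted values.

```python
# Textbook linear ion-drift memristor toy: applied voltage moves an
# internal state x (0..1, the doped-region fraction), which sets the
# resistance between R_ON and R_OFF; the state persists at zero bias.

R_ON, R_OFF = 100.0, 16_000.0  # fully conductive / fully resistive (ohms)
MU = 1e3                       # lumped ion-mobility constant (illustrative)

def step(x: float, voltage: float, dt: float = 1e-3) -> float:
    """Advance the state x under an applied voltage for one time step."""
    resistance = R_ON * x + R_OFF * (1.0 - x)
    current = voltage / resistance
    x += MU * current * dt               # ion drift moves the boundary
    return min(max(x, 0.0), 1.0)         # state stays in [0, 1]

x = 0.5
for _ in range(1000):
    x = step(x, 1.0)   # positive bias: resistance drops (SET)
x_set = x
for _ in range(1000):
    x = step(x, 0.0)   # zero bias: no current, state is retained (memory)
x_idle = x
print(f"state after SET pulses: {x_set:.3f}")
print(f"state after idle:       {x_idle:.3f}")
```

The review’s point is precisely that real oxide cells are far richer than this: ionic motion couples to Joule heating, which is why naive models produce abrupt resistance jumps rather than the analog behaviour synapses need.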

The processes that can occur in cells are very complex and vary depending on the material system. Three researchers from the Jülich Peter Grünberg Institute – Prof. Regina Dittmann, Dr. Stephan Menzel, and Prof. Rainer Waser – have therefore compiled their research results in a detailed review article, “Nanoionic memristive phenomena in metal oxides: the valence change mechanism”. They explain in detail the various physical and chemical effects in memristors and shed light on the influence of these effects on the switching properties of memristive cells and their reliability.

“If you look at current research activities in the field of neuromorphic memristor circuits, they are often based on empirical approaches to material optimization,” said Rainer Waser, director at the Peter Grünberg Institute. “Our goal with our review article is to give researchers something to work with in order to enable insight-driven material optimization.” The team of authors worked on the approximately 200-page article for ten years and naturally had to keep incorporating advances in knowledge.

“The analog functioning of memristive cells required for their use as artificial synapses is not the normal case. Usually, there are sudden jumps in resistance, generated by the mutual amplification of ionic motion and Joule heat,” explains Regina Dittmann of the Peter Grünberg Institute. “In our review article, we provide researchers with the necessary understanding of how to change the dynamics of the cells to enable an analog operating mode.”

“You see time and again that groups simulate their memristor circuits with models that don’t take into account high dynamics of the cells at all. These circuits will never work.” said Stephan Menzel, who leads modeling activities at the Peter Grünberg Institute and has developed powerful compact models that are now in the public domain (www.emrl.de/jart.html). “In our review article, we provide the basics that are extremely helpful for a correct use of our compact models.”

Roadmap for neuromorphic computing

The “Roadmap of Neuromorphic Computing and Engineering”, which was published in May 2022, shows how neuromorphic computing can help to reduce the enormous energy consumption of IT globally. In it, researchers from the Peter Grünberg Institute (PGI-7), together with leading experts in the field, have compiled the various technological possibilities, computational approaches, learning algorithms and fields of application. 

According to the study, applications in the field of artificial intelligence, such as pattern recognition or speech recognition, are likely to benefit in a special way from the use of neuromorphic hardware. This is because they are based – much more so than classical numerical computing operations – on the shifting of large amounts of data. Memristive cells make it possible to process these gigantic data sets directly in memory without transporting them back and forth between processor and memory. This could improve the energy efficiency of artificial neural networks by orders of magnitude.

Memristive cells can also be interconnected to form high-density matrices that enable neural networks to learn locally. This so-called edge computing thus shifts computations from the data center to the factory floor, the vehicle, or the home of people in need of care. Thus, monitoring and controlling processes or initiating rescue measures can be done without sending data via a cloud. “This achieves two things at the same time: you save energy, and at the same time, personal data and data relevant to security remain on site,” says Prof. Dittmann, who played a key role in creating the roadmap as editor.

Here’s a link to and a citation for the ‘roadmap’,

2022 roadmap on neuromorphic computing and engineering by Dennis V Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano, Carlo Ricciardi, Shi-Jun Liang, Feng Miao, Mario Lanza, Tyler J Quill, Scott T Keene, Alberto Salleo, Julie Grollier, Danijela Marković, Alice Mizrahi, Peng Yao, J Joshua Yang, Giacomo Indiveri, John Paul Strachan, Suman Datta, Elisa Vianello, Alexandre Valentian, Johannes Feldmann, Xuan Li, Wolfram H P Pernice, Harish Bhaskaran, Steve Furber, Emre Neftci, Franz Scherr, Wolfgang Maass, Srikanth Ramaswamy, Jonathan Tapson, Priyadarshini Panda, Youngeun Kim, Gouhei Tanaka, Simon Thorpe, Chiara Bartolozzi, Thomas A Cleland, Christoph Posch, ShihChii Liu, Gabriella Panuccio, Mufti Mahmud, Arnab Neelim Mazumder, Morteza Hosseini, Tinoosh Mohsenin, Elisa Donati, Silvia Tolu, Roberto Galeazzi, Martin Ejsing Christensen, Sune Holm, Daniele Ielmini and N Pryds. Neuromorphic Computing and Engineering, Volume 2, Number 2 DOI: 10.1088/2634-4386/ac4a83 20 May 2022 • © 2022 The Author(s)

This paper is open access.

Here’s the most recent paper,

Nanoionic memristive phenomena in metal oxides: the valence change mechanism by Regina Dittmann, Stephan Menzel & Rainer Waser. Advances in Physics, Volume 70, 2021, Issue 2, Pages 155-349 DOI: https://doi.org/10.1080/00018732.2022.2084006 Published online: 06 Aug 2022

This paper is behind a paywall.

Quantum memristors

This March 24, 2022 news item on Nanowerk announcing work on a quantum memristor seems to have had a rough translation from German to English,

In recent years, artificial intelligence has become ubiquitous, with applications such as speech interpretation, image recognition, medical diagnosis, and many more. At the same time, quantum technology has been proven capable of computational power well beyond the reach of even the world’s largest supercomputer.

Physicists at the University of Vienna have now demonstrated a new device, called a quantum memristor, which may make it possible to combine these two worlds, thus unlocking unprecedented capabilities. The experiment, carried out in collaboration with the National Research Council (CNR) and the Politecnico di Milano in Italy, has been realized on an integrated quantum processor operating on single photons.

Caption: Abstract representation of a neural network which is made of photons and has memory capability potentially related to artificial intelligence. Credit: © Equinox Graphics, University of Vienna

A March 24, 2022 University of Vienna (Universität Wien) press release (also on EurekAlert), which originated the news item, explains why this work has an impact on artificial intelligence,

At the heart of all artificial intelligence applications are mathematical models called neural networks. These models are inspired by the biological structure of the human brain, made of interconnected nodes. Just like our brain learns by constantly rearranging the connections between neurons, neural networks can be mathematically trained by tuning their internal structure until they become capable of human-level tasks: recognizing our face, interpreting medical images for diagnosis, even driving our cars. Having integrated devices capable of performing the computations involved in neural networks quickly and efficiently has thus become a major research focus, both academic and industrial.

One of the major game changers in the field was the discovery of the memristor, made in 2008. This device changes its resistance depending on a memory of the past current, hence the name memory-resistor, or memristor. Immediately after its discovery, scientists realized that (among many other applications) the peculiar behavior of memristors was surprisingly similar to that of neural synapses. The memristor has thus become a fundamental building block of neuromorphic architectures.

A group of experimental physicists from the University of Vienna, the National Research Council (CNR) and the Politecnico di Milano, led by Prof. Philip Walther and Dr. Roberto Osellame, have now demonstrated that it is possible to engineer a device that has the same behavior as a memristor, while acting on quantum states and being able to encode and transmit quantum information. In other words, a quantum memristor. Realizing such a device is challenging because the dynamics of a memristor tends to contradict the typical quantum behavior. 

By using single photons, i.e. single quantum particles of light, and exploiting their unique ability to propagate simultaneously in a superposition of two or more paths, the physicists have overcome the challenge. In their experiment, single photons propagate along waveguides laser-written on a glass substrate and are guided on a superposition of several paths. One of these paths is used to measure the flux of photons going through the device and this quantity, through a complex electronic feedback scheme, modulates the transmission on the other output, thus achieving the desired memristive behavior. Besides demonstrating the quantum memristor, the researchers have provided simulations showing that optical networks with quantum memristors can be used to learn on both classical and quantum tasks, hinting at the fact that the quantum memristor may be the missing link between artificial intelligence and quantum computing.
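The feedback scheme, measure the flux on one output and use it to set the transmission for the next input, can be sketched as a purely classical toy. This is my own analogy and is far from the actual quantum-optical implementation; all variable names and update rules are invented for illustration.

```python
# Classical toy analogue of the memristive feedback loop: the measured
# flux on a tap port updates an internal memory, and that memory sets
# the transmission applied to the NEXT input (state depends on history).

def run(fluxes, memory=0.5, rate=0.3):
    """Propagate a sequence of input fluxes; return transmitted fluxes."""
    outputs = []
    for flux in fluxes:
        transmitted = memory * flux        # transmission set by memory state
        measured = (1.0 - memory) * flux   # tap-port measurement of the rest
        outputs.append(transmitted)
        # feedback: the measurement nudges the memory, so the response
        # to a given input depends on what came before it
        memory += rate * (measured - memory)
    return outputs

print(run([1.0, 1.0]))        # steady input: transmission settles
print(run([1.0, 0.2, 1.0]))   # the same 1.0 input transmits differently
                              # after an intervening low-flux pulse
```

The history dependence in the last line is the memristive signature: resistance (here, transmission) is a function of past throughput, not just the present input.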

“Unlocking the full potential of quantum resources within artificial intelligence is one of the greatest challenges of the current research in quantum physics and computer science”, says Michele Spagnolo, who is first author of the publication in the journal “Nature Photonics”. The group of Philip Walther of the University of Vienna has also recently demonstrated that robots can learn faster when using quantum resources and borrowing schemes from quantum computation. This new achievement represents one more step towards a future where quantum artificial intelligence becomes a reality.

Here’s a link to and a citation for the paper,

Experimental photonic quantum memristor by Michele Spagnolo, Joshua Morris, Simone Piacentini, Michael Antesberger, Francesco Massa, Andrea Crespi, Francesco Ceccarelli, Roberto Osellame & Philip Walther. Nature Photonics volume 16, pages 318–323 (2022) DOI: https://doi.org/10.1038/s41566-022-00973-5 Published 24 March 2022 Issue Date April 2022

This paper is open access.

Simulating neurons and synapses with memristive devices

I’ve been meaning to get to this research on ‘neuromorphic memory’ for a while. From a May 20, 2022 news item on Nanowerk,

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward the goal of neuromorphic computing: rigorously mimicking the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated.

However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and implementing neurons and synapses concomitantly remains a challenge.

A May 20, 2022 Korea Advanced Institute of Science and Technology (KAIST) press release (also on EurekAlert), which originated the news item, delves further into the research,

To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

Like commercial graphics cards, the artificial synaptic devices studied previously were often used to accelerate parallel computations, which differs clearly from the operational mechanisms of the human brain. The research team implemented synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
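Here is a toy model of that pairing (the class, its names and its dynamics are my illustrative assumptions, not KAIST's device physics): a volatile "neuron" state that leaks away between stimuli, alongside a non-volatile "synapse" weight that accumulates and persists.

```python
# Illustrative sketch (an assumption, not KAIST's model): a volatile
# "neuron" potential that decays between stimuli (threshold switch),
# paired with a non-volatile "synapse" weight that persists (phase-change).
class NeuronSynapseCell:
    def __init__(self, threshold=1.0, decay=0.5, lr=0.1):
        self.potential = 0.0   # volatile: short-term memory, leaks away
        self.weight = 0.0      # non-volatile: long-term memory, persists
        self.threshold, self.decay, self.lr = threshold, decay, lr

    def stimulate(self, amplitude):
        """Integrate one stimulus; fire if the leaky potential crosses threshold."""
        self.potential = self.potential * self.decay + amplitude
        fired = self.potential >= self.threshold
        if fired:
            self.potential = 0.0          # reset after firing
            self.weight += self.lr        # firing strengthens the synapse
        return fired

cell = NeuronSynapseCell()
spikes = [cell.stimulate(0.6) for _ in range(5)]
# only closely spaced stimuli accumulate enough potential to fire,
# and each firing leaves a lasting trace in the weight
```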

Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”

Here’s a link to and a citation for the paper,

Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse by Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im & Keon Jae Lee. Nature Communications volume 13, Article number: 2811 (2022) DOI https://doi.org/10.1038/s41467-022-30432-2 Published 19 May 2022

This paper is open access.

An ‘artificial brain’ and life-long learning

Talk of artificial brains (also known as, brainlike computing or neuromorphic computing) usually turns to memory fairly quickly. This February 3, 2022 news item on ScienceDaily does too although the focus is on how memory and forgetting affect the ability to learn,

When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned.

As companies use more and more data to improve how AI recognizes images, learns languages and carries out other complex tasks, a paper publishing in Science this week shows a way that computer chips could dynamically rewire themselves to take in new data like the brain does, helping AI to keep learning over time.

“The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan,” said Shriram Ramanathan, a professor in Purdue University’s [Indiana, US] School of Materials Engineering who specializes in discovering how materials could mimic the brain to improve computing.

Unlike the brain, which constantly forms new connections between neurons to enable learning, the circuits on a computer chip don’t change. A circuit that a machine has been using for years isn’t any different than the circuit that was originally built for the machine in a factory.

This is a problem for making AI more portable, such as for autonomous vehicles or robots in space that would have to make decisions on their own in isolated environments. If AI could be embedded directly into hardware rather than just running on software as AI typically does, these machines would be able to operate more efficiently.

A February 3, 2022 Purdue University news release (also on EurekAlert), which originated the news item, provides more technical detail about the work (Note: Links have been removed),

In this study, Ramanathan and his team built a new piece of hardware that can be reprogrammed on demand through electrical pulses. Ramanathan believes that this adaptability would allow the device to take on all of the functions that are necessary to build a brain-inspired computer.

“If we want to build a computer or a machine that is inspired by the brain, then correspondingly, we want to have the ability to continuously program, reprogram and change the chip,” Ramanathan said.

Toward building a brain in chip form

The hardware is a small, rectangular device made of a material called perovskite nickelate, which is very sensitive to hydrogen. Applying electrical pulses at different voltages allows the device to shuffle a concentration of hydrogen ions in a matter of nanoseconds, creating states that the researchers found could be mapped out to corresponding functions in the brain.

When the device has more hydrogen near its center, for example, it can act as a neuron, a single nerve cell. With less hydrogen at that location, the device serves as a synapse, a connection between neurons, which is what the brain uses to store memory in complex neural circuits.

Through simulations of the experimental data, the Purdue team’s collaborators at Santa Clara University and Portland State University showed that the internal physics of this device creates a dynamic structure for an artificial neural network that recognizes electrocardiogram patterns and digits more efficiently than static networks. The neural network uses “reservoir computing,” an approach in which inputs drive a fixed dynamical system (the reservoir) and only a simple readout layer is trained.
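Reservoir computing itself is easy to sketch in software. The following echo-state-network toy (a generic illustration, not the Purdue device) drives a fixed random recurrent network with an input sequence and trains only a linear readout, here on a one-step-delay task that requires the reservoir's memory:

```python
# Echo-state-network sketch of reservoir computing (generic illustration):
# the recurrent reservoir is fixed and random; only the readout is trained.
import numpy as np

rng = np.random.default_rng(0)
n_res = 50
W_in = rng.uniform(-0.5, 0.5, size=n_res)            # fixed input weights
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))      # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # scale for stable echoes

def run_reservoir(inputs):
    """Collect the reservoir state after each input in a 1-D sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in * u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Task: output the input delayed by one step -- needs the reservoir's memory.
u = rng.uniform(-1, 1, 300)
target = np.roll(u, 1)
S = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(S[10:], target[10:], rcond=None)  # train readout only
err = np.mean((S[10:] @ W_out - target[10:]) ** 2)
```

The appeal for hardware is that the reservoir never needs to be trained, so its physics can be as messy as a real material's.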

Researchers from The Pennsylvania State University also demonstrated in this study that as new problems are presented, a dynamic network can “pick and choose” which circuits are the best fit for addressing those problems.

Since the team was able to build the device using standard semiconductor-compatible fabrication techniques and operate the device at room temperature, Ramanathan believes that this technique can be readily adopted by the semiconductor industry.

“We demonstrated that this device is very robust,” said Michael Park, a Purdue Ph.D. student in materials engineering. “After programming the device over a million cycles, the reconfiguration of all functions is remarkably reproducible.”

The researchers are working to demonstrate these concepts on large-scale test chips that would be used to build a brain-inspired computer.

Experiments at Purdue were conducted at the FLEX Lab and Birck Nanotechnology Center of Purdue’s Discovery Park. The team’s collaborators at Argonne National Laboratory, the University of Illinois, Brookhaven National Laboratory and the University of Georgia conducted measurements of the device’s properties.

Here’s a link to and a citation for the paper,

Reconfigurable perovskite nickelate electronics for artificial intelligence by Hai-Tian Zhang, Tae Joon Park, A. N. M. Nafiul Islam, Dat S. J. Tran, Sukriti Manna, Qi Wang, Sandip Mondal, Haoming Yu, Suvo Banik, Shaobo Cheng, Hua Zhou, Sampath Gamage, Sayantan Mahapatra, Yimei Zhu, Yohannes Abate, Nan Jiang, Subramanian K. R. S. Sankaranarayanan, Abhronil Sengupta, Christof Teuscher, Shriram Ramanathan. Science • 3 Feb 2022 • Vol 375, Issue 6580 • pp. 533-539 • DOI: 10.1126/science.abj7943

This paper is behind a paywall.

Memristive spintronic neurons

A December 6, 2021 news item on Nanowerk on memristive spintronic neurons (Note: A link has been removed),

Researchers at Tohoku University and the University of Gothenburg have established a new spintronic technology for brain-inspired computing.

Their achievement was published in the journal Nature Materials (“Memristive control of mutual SHNO synchronization for neuromorphic computing”).

Sophisticated cognitive tasks, such as image and speech recognition, have seen recent breakthroughs thanks to deep learning. Even so, the human brain still executes these tasks without exerting much energy and with greater efficiency than any computer. The development of energy-efficient artificial neurons capable of emulating brain-inspired processes has therefore been a major research goal for decades.

A November 29, 2021 Tohoku University press release (also on EurekAlert but published November 30, 2021), which originated the news item, provides more technical detail,

Researchers demonstrated the first integration of a cognitive computing nano-element – the memristor – into another – a spintronic oscillator. Arrays of these memristor-controlled oscillators combine the non-volatile local storage of the memristor function with the microwave frequency computation of the nano-oscillator networks and can closely imitate the non-linear oscillatory neural networks of the human brain.

The resistance of the memristor changed with the voltage hysteresis applied to the top Ti/Cu electrode. In the high-resistance state, an applied voltage produced an electric field across the device, whereas in the low-resistance state it drove an electric current. The field and the current affected the oscillator differently, offering various ways of controlling its oscillation and synchronization properties.

Professor Johan Åkerman of the University of Gothenburg and leader of the study expressed his hopes for the future and the significance of the finding. “We are particularly interested in emerging quantum-inspired computing schemes, such as Ising Machines. The results also highlight the productive collaboration that we have established in neuromorphic spintronics between the University of Gothenburg and Tohoku University, something that is also part of the Sweden-Japan collaborative network MIRAI 2.0.”

“So far, artificial neurons and synapses have been developed separately in many fields; this work marks an important milestone: two functional elements have been combined into one,” said professor Shunsuke Fukami, who led the project on the Tohoku University side. Dr. Mohammad Zahedinejad of the University of Gothenburg and first author of the study adds, “Using the memristor-controlled spintronic oscillator arrays, we could tune the synaptic interactions between adjacent neurons and program them into mutually different and partially synchronized states.”

To put their discovery into practice, the researchers examined the operation of a test device comprising one oscillator and one memristor. The constricted region of the W/CoFeB stack served as the oscillator, i.e., the neuron, whereas the MgO/AlOx/SiNx stack acted as the memristor, i.e., the synapse.

Here’s a link to and a citation for the paper,

Memristive control of mutual spin Hall nano-oscillator synchronization for neuromorphic computing by Mohammad Zahedinejad, Himanshu Fulara, Roman Khymyn, Afshin Houshang, Mykola Dvornik, Shunsuke Fukami, Shun Kanai, Hideo Ohno & Johan Åkerman. Nature Materials (2021) DOI: https://doi.org/10.1038/s41563-021-01153-6 Published 29 November 2021

This paper is behind a paywall.

Artificial ionic neuron for electronic memories

This venture into brain-like (neuromorphic) computing comes from France according to an August 17, 2021 news item on Nanowerk (Note: A link has been removed),

Brain-inspired electronics are the subject of intense research. Scientists from CNRS (Centre national de la recherche scientifique; French National Centre for Scientific Research) and the Ecole Normale Supérieure – PSL have theorized how to develop artificial neurons that use ions, as nerve cells do, to carry information.

Their work, published in Science (“Modeling of emergent memory and voltage spiking in ionic transport through angstrom-scale slits”), reports that devices made of a single layer of water transporting ions within graphene nanoslits have the same transmission capacity as a neuron.

Caption: Artificial neuron prototype: nanofluidic slits can play the role of ion channels and allow neurons to communicate. Ion clusters achieve the ion transport that causes this communication. Credit: © Paul Robin, ENS Laboratoire de Physique (CNRS/ENS-PSL/Sorbonne Université/Université de Paris).

An August 16, 2021 CNRS press release (also on EurekAlert but published August 6, 2021), which originated the news item, provides insight into the international interest in neuromorphic computing along with a few technical details about this latest research,

With an energy consumption equivalent to two bananas per day, the human brain can perform many complex tasks. Its high energy efficiency depends in particular on its base unit, the neuron, which has a membrane with nanometric pores called ion channels, which open and close according to the stimuli received. The resulting ion flows create an electric current responsible for the emission of action potentials, signals that allow neurons to communicate with each other.

Artificial intelligence can do all of these tasks but only at the cost of energy consumption tens of thousands of times that of the human brain. So the entire research challenge today is to design electronic systems that are as energy efficient as the human brain, for example, by using ions, not electrons, to carry the information. For this, nanofluidics, the study of how fluids behave in channels less than 100 nanometers wide, offers many perspectives.

In a new study, a team from the ENS Laboratoire de Physique (CNRS/ENS-PSL/Sorbonne Université/Université de Paris) shows how to construct a prototype of an artificial neuron formed of extremely thin graphene slits containing a single layer of water molecules1. The scientists have shown that, under the effect of an electric field, the ions from this layer of water assemble into elongated clusters and develop a property known as the memristor effect: these clusters retain some of the stimuli that have been received in the past. To repeat the comparison with the brain, the graphene slits reproduce the ion channels, clusters and ion flows. And, using theoretical and numerical tools, the scientists have shown how to assemble these clusters to reproduce the physical mechanism of emission of action potentials, and thus the transmission of information.

This theoretical work continues experimentally within the French team, in collaboration with scientists from the University of Manchester (UK). The goal now is to prove experimentally that such systems can implement simple learning algorithms that can serve as the basis for tomorrow’s electronic memories.

1 Recently invented in Manchester by the group of André Geim (Nobel Prize in Physics 2010)

Here’s a link to and a citation for the paper,

Modeling of emergent memory and voltage spiking in ionic transport through angstrom-scale slits by Paul Robin, Nikita Kavokine, Lydéric Bocquet. Science 06 Aug 2021: Vol. 373, Issue 6555, pp. 687-691 DOI: 10.1126/science.abf7923

This paper is behind a paywall.

Memristors, it’s all about the oxides

I have one research announcement from China and another from the Netherlands, both of which concern memristors and oxides.

China

A May 17, 2021 news item on Nanowerk announces a review suggesting that oxide memristors can be made more stable with hybrid structures and even controlled with light,

Scientists are getting better at making neuron-like junctions for computers that mimic the human brain’s random information processing, storage and recall. Fei Zhuge of the Chinese Academy of Sciences and colleagues reviewed the latest developments in the design of these ‘memristors’ for the journal Science and Technology of Advanced Materials …

Computers apply artificial intelligence programs to recall previously learned information and make predictions. These programs are extremely energy- and time-intensive: typically, vast volumes of data must be transferred between separate memory and processing units. To solve this issue, researchers have been developing computer hardware that allows for more random and simultaneous information transfer and storage, much like the human brain.

Electronic circuits in these ‘neuromorphic’ computers include memristors that resemble the junctions between neurons called synapses. Energy flows through a material from one electrode to another, much like a neuron firing a signal across the synapse to the next neuron. Scientists are now finding ways to better tune this intermediate material so the information flow is more stable and reliable.
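The reason memristive synapses eliminate so much data shuttling is worth spelling out: arranged in a crossbar, they perform a matrix-vector multiply "in place" by physics, with Ohm's law doing the multiplications and current summation doing the additions. A minimal software sketch of that idea (generic, not any particular chip):

```python
# Compute-in-memory sketch (generic): in a memristor crossbar, applying
# row voltages to an array of conductances yields column currents
# I_j = sum_i G[i][j] * V[i] -- a matrix-vector multiply carried out by
# Ohm's law and current summation, with no data movement to a processor.
def crossbar_multiply(conductances, voltages):
    """conductances[i][j]: device at row i, column j; voltages[i]: row input."""
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for i, v in enumerate(voltages):
        for j in range(n_cols):
            currents[j] += conductances[i][j] * v   # Ohm's law, summed per column
    return currents

G = [[1.0, 0.5],
     [2.0, 1.5]]
I = crossbar_multiply(G, [0.1, 0.2])   # column currents: [0.5, 0.35]
```

In hardware all the row-column products happen simultaneously, which is where the speed and energy savings come from.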

I had no success locating the original news release, which originated the news item, but have found this May 17, 2021 news item on eedesignit.com, which provides the remaining portion of the news release.

“Oxides are the most widely used materials in memristors,” said Zhuge. “But oxide memristors have unsatisfactory stability and reliability. Oxide-based hybrid structures can effectively improve this.”

Memristors are usually made of an oxide-based material sandwiched between two electrodes. Researchers are getting better results when they combine two or more layers of different oxide-based materials between the electrodes. When an electrical current flows through the network, it induces ions to drift within the layers. The ions’ movements ultimately change the memristor’s resistance, which is necessary to send or stop a signal through the junction.

Memristors can be tuned further by changing the compounds used for electrodes or by adjusting the intermediate oxide-based materials. Zhuge and his team are currently developing optoelectronic neuromorphic computers based on optically-controlled oxide memristors. Compared to electronic memristors, photonic ones are expected to have higher operation speeds and lower energy consumption. They could be used to construct next generation artificial visual systems with high computing efficiency.

Now for a picture that accompanied the news release, which follows,

Fig. The all-optically controlled memristor developed for optoelectronic neuromorphic computing (Image by NIMTE)

Here’s the February 7, 2021 Ningbo Institute of Materials Technology and Engineering (NIMTE) press release featuring this work and a more technical description,

A research group led by Prof. ZHUGE Fei at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences (CAS) developed an all-optically controlled (AOC) analog memristor, whose memconductance can be reversibly tuned by varying only the wavelength of the controlling light.

As the next generation of artificial intelligence (AI), neuromorphic computing (NC) emulates the neural structure and operation of the human brain at the physical level, and thus can efficiently perform multiple advanced computing tasks such as learning, recognition and cognition.

Memristors are promising candidates for NC thanks to the feasibility of high-density 3D integration and low energy consumption. Among them, the emerging optoelectronic memristors are competitive by virtue of combining the advantages of both photonics and electronics. However, the reversible tuning of memconductance has depended heavily on electrical excitation, which has severely limited the development and application of optoelectronic NC.

To address this issue, researchers at NIMTE proposed a bilayered oxide AOC memristor, based on the relatively mature semiconductor material InGaZnO and a memconductance tuning mechanism of light-induced electron trapping and detrapping.

The traditional electrical memristors require strong electrical stimuli to tune their memconductance, leading to high power consumption, a large amount of Joule heat, microstructural change triggered by the Joule heat, and even high crosstalk in memristor crossbars.

On the contrary, the developed AOC memristor does not involve microstructural changes and can operate under weak light irradiation with a power density of only 20 μW cm⁻², which provides a new approach to overcoming the instability of the memristor.

Specifically, the AOC memristor can serve as an excellent synaptic emulator and thus mimic spike-timing-dependent plasticity (STDP) which is an important learning rule in the brain, indicating its potential applications in AOC spiking neural networks for high-efficiency optoelectronic NC.
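STDP has a standard pair-based textbook form, sketched below (this generic rule is my illustration; the device-level implementation in the paper may differ): the sign and size of the weight change depend on the relative timing of the pre- and post-synaptic spikes.

```python
# Standard pair-based STDP rule (textbook form; the device-level
# implementation may differ): the weight change depends on the relative
# timing of pre- and post-synaptic spikes.
import math

def stdp_dw(pre_time, post_time, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one spike pair (times in ms, tau in ms)."""
    dt = post_time - pre_time
    if dt > 0:   # pre fires before post: potentiation (strengthen)
        return a_plus * math.exp(-dt / tau)
    else:        # post fires before (or with) pre: depression (weaken)
        return -a_minus * math.exp(dt / tau)

ltp = stdp_dw(pre_time=0.0, post_time=10.0)   # positive weight change
ltd = stdp_dw(pre_time=10.0, post_time=0.0)   # negative weight change
```

A synapse that helped cause a neuron's firing is strengthened; one that fired too late is weakened, which is the learning rule the AOC memristor mimics optically.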

Moreover, compared to purely optical computing, the optoelectronic computing using our AOC memristor showed higher practical feasibility, on account of the simple structure and fabrication process of the device.

The study may shed light on the in-depth research and practical application of optoelectronic NC, and thus promote the development of the new generation of AI.

This work was supported by the National Natural Science Foundation of China (No. 61674156 and 61874125), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32050204), and the Zhejiang Provincial Natural Science Foundation of China (No. LD19E020001).

Here’s a link to and a citation for the paper,

Hybrid oxide brain-inspired neuromorphic devices for hardware implementation of artificial intelligence by Jingrui Wang, Xia Zhuge & Fei Zhuge. Science and Technology of Advanced Materials Volume 22, 2021 – Issue 1 Pages 326-344 DOI: https://doi.org/10.1080/14686996.2021.1911277 Published online:14 May 2021

This paper appears to be open access.

Netherlands

In this case, a May 18, 2021 news item on Nanowerk marries oxides to spintronics,

Classic computers use binary values (0/1) to perform computations. By contrast, our brain cells can operate over a range of values, making them more energy-efficient than computers. This is why scientists are interested in neuromorphic (brain-like) computing.

Physicists from the University of Groningen (the Netherlands) have used a complex oxide to create elements comparable to the neurons and synapses in the brain using spins, a magnetic property of electrons.

The press release, which follows, was accompanied by this image illustrating the work,

Caption: Schematic of the proposed device structure for neuromorphic spintronic memristors. The write path is between the terminals through the top layer (black dotted line), the read path goes through the device stack (red dotted line). The right side of the figure indicates how the choice of substrate dictates whether the device will show deterministic or probabilistic behaviour. Credit: Banerjee group, University of Groningen

A May 18, 2021 University of Groningen press release (also on EurekAlert), which originated the news item, adds more ‘spin’ to the story,

Although computers can do straightforward calculations much faster than humans, our brains outperform silicon machines in tasks like object recognition. Furthermore, our brain uses less energy than computers. Part of this can be explained by the way our brain operates: whereas a computer uses a binary system (with values 0 or 1), brain cells can provide more analogue signals with a range of values.

Thin films

The operation of our brains can be simulated in computers, but the basic architecture still relies on a binary system. That is why scientists look for ways to expand this, creating hardware that is more brain-like but will also interface with normal computers. ‘One idea is to create magnetic bits that can have intermediate states’, says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen. She works on spintronics, which uses a magnetic property of electrons called ‘spin’ to transport, manipulate and store information.

In this study, her PhD student Anouk Goossens, first author of the paper, created thin films of a ferromagnetic metal (strontium-ruthenate oxide, SRO) grown on a substrate of strontium titanate oxide. The resulting thin film contained magnetic domains that were perpendicular to the plane of the film. ‘These can be switched more efficiently than in-plane magnetic domains’, explains Goossens. By adapting the growth conditions, it is possible to control the crystal orientation in the SRO. Previously, out-of-plane magnetic domains have been made using other techniques, but these typically require complex layer structures.

Magnetic anisotropy

The magnetic domains can be switched using a current through a platinum electrode on top of the SRO. Goossens: ‘When the magnetic domains are oriented perfectly perpendicular to the film, this switching is deterministic: the entire domain will switch.’ However, when the magnetic domains are slightly tilted, the response is probabilistic: not all the domains are the same, and intermediate values occur when only part of the crystals in the domain have switched.

By choosing variants of the substrate on which the SRO is grown, the scientists can control its magnetic anisotropy. This allows them to produce two different spintronic devices. ‘This magnetic anisotropy is exactly what we wanted’, says Goossens. ‘Probabilistic switching compares to how neurons function, while the deterministic switching is more like a synapse.’

The scientists expect that in the future, brain-like computer hardware can be created by combining these different domains in a spintronic device that can be connected to standard silicon-based circuits. Furthermore, probabilistic switching would also allow for stochastic computing, a promising technology which represents continuous values by streams of random bits. Banerjee: ‘We have found a way to control intermediate states, not just for memory but also for computing.’
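Stochastic computing is simple to demonstrate in miniature (my illustration, not the Groningen group's scheme): encode a value in [0, 1] as the probability that each bit of a stream is 1, and multiplication reduces to AND-ing two streams, a natural fit for probabilistically switching magnetic domains.

```python
# Stochastic computing in miniature (generic illustration): a value in
# [0, 1] is the probability that any given bit of its stream is 1, so
# multiplying two values reduces to AND-ing their independent bit streams.
import random

def encode(value, n_bits, rng):
    """Bernoulli bit stream whose fraction of 1s approximates `value`."""
    return [1 if rng.random() < value else 0 for _ in range(n_bits)]

def decode(bits):
    return sum(bits) / len(bits)

rng = random.Random(42)
a, b = 0.8, 0.5
stream_a = encode(a, 100_000, rng)
stream_b = encode(b, 100_000, rng)
product = decode([x & y for x, y in zip(stream_a, stream_b)])
# product is close to a * b = 0.4, up to sampling noise
```

The arithmetic hardware is a single AND gate per multiply; the price is noise that shrinks only as the streams get longer.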

Here’s a link to and a citation for the paper,

Anisotropy and Current Control of Magnetization in SrRuO3/SrTiO3 Heterostructures for Spin-Memristors by A.S. Goossens, M.A.T. Leiviskä and T. Banerjee. Frontiers in Nanotechnology DOI: https://doi.org/10.3389/fnano.2021.680468 Published: 18 May 2021

This appears to be open access.