Tag Archives: neuromorphic computing

Light-based neural networks

It’s unusual to see the same headline used to highlight research from two different teams released in such close proximity, in February 2024 and July 2024, respectively. Both of these are neuromorphic (brainlike) computing stories.

February 2024: Neural networks made of light

The first team’s work was announced in a February 21, 2024 Friedrich Schiller University press release, Note: A link has been removed,

Researchers from the Leibniz Institute of Photonic Technology (Leibniz IPHT) and the Friedrich Schiller University in Jena, along with an international team, have developed a new technology that could significantly reduce the high energy demands of future AI systems. This innovation utilizes light for neuronal computing, inspired by the neural networks of the human brain. It promises not only more efficient data processing but also speeds many times faster than current methods, all while consuming considerably less energy. Published in the prestigious journal “Advanced Science,” their work introduces new avenues for environmentally friendly AI applications, as well as advancements in computerless diagnostics and intelligent microscopy.

Artificial intelligence (AI) is pivotal in advancing biotechnology and medical procedures, ranging from cancer diagnostics to the creation of new antibiotics. However, the ecological footprint of large-scale AI systems is substantial. For instance, training extensive language models like ChatGPT-3 requires several gigawatt-hours of energy—enough to power an average nuclear power plant at full capacity for several hours.

Prof. Mario Chemnitz, new Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena, and Dr Bennet Fischer from Leibniz IPHT in Jena, in collaboration with their international team, have devised an innovative method to develop potentially energy-efficient computing systems that forego the need for extensive electronic infrastructure. They harness the unique interactions of light waves within optical fibers to forge an advanced artificial learning system.

A single fiber instead of thousands of components

Unlike traditional systems that rely on computer chips containing thousands of electronic components, their system uses a single optical fiber. This fiber is capable of performing the tasks of various neural networks—at the speed of light. “We utilize a single optical fiber to mimic the computational power of numerous neural networks,” Mario Chemnitz, who is also leader of the “Smart Photonics” junior research group at Leibniz IPHT, explains. “By leveraging the unique physical properties of light, this system will enable the rapid and efficient processing of vast amounts of data in the future.”

Delving into the mechanics reveals how information transmission occurs through the mixing of light frequencies: Data—whether pixel values from images or frequency components of an audio track—are encoded onto the color channels of ultrashort light pulses. These pulses carry the information through the fiber, undergoing various combinations, amplifications, or attenuations. The emergence of new color combinations at the fiber’s output enables the prediction of data types or contexts. For example, specific color channels can indicate visible objects in images or signs of illness in a voice.

A prime example of machine learning is identifying different numbers from thousands of handwritten characters. Mario Chemnitz, Bennet Fischer, and their colleagues from the Institut National de la Recherche Scientifique (INRS) in Québec utilized their technique to encode images of handwritten digits onto light signals and classify them via the optical fiber. The alteration in color composition at the fiber’s end forms a unique color spectrum—a “fingerprint” for each digit. Following training, the system can analyze and recognize new handwritten digits with significantly reduced energy consumption.
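
For readers who like to tinker, here’s a minimal Python sketch of the scheme as I understand it: a fixed, untrained nonlinear “mixing” stage stands in for the frequency mixing inside the fiber, and only a cheap linear readout is trained on the resulting “spectra.” The random map, the scikit-learn digits dataset, and all parameters are my own illustrative choices, not the team’s actual optical setup.

```python
# Minimal sketch: a fixed, untrained nonlinear "mixing" stage (standing in for the
# frequency mixing inside the optical fiber) followed by a trained linear readout.
# Assumes numpy and scikit-learn are installed; the random map is purely illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)          # 8x8 handwritten digits, flattened to 64 values
X = X / 16.0                                  # normalize pixel "intensities" (color-channel encoding)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Fixed random mixing + nonlinearity: the "physics," which is never trained.
W_mix = rng.normal(scale=1.0, size=(64, 512))
def fiber(x):
    return np.cos(x @ W_mix)                  # output "spectrum" = the digit's fingerprint

# Only the cheap digital readout is trained (ridge-regularized least squares).
H = fiber(Xtr)
T = np.eye(10)[ytr]                           # one-hot targets
W_out = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ T)

pred = fiber(Xte) @ W_out
print("test accuracy:", (pred.argmax(axis=1) == yte).mean())
```

The point of the sketch is the division of labor: the expensive transformation happens in fixed physics (here faked with a random matrix and a cosine), while training only touches the final linear layer.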

System recognizes COVID-19 from voice samples

“In simpler terms, pixel values are converted into varying intensities of primary colors—more red or less blue, for instance,” Mario Chemnitz details. “Within the fiber, these primary colors blend to create the full spectrum of the rainbow. The shade of our mixed purple, for example, reveals much about the data processed by our system.”

The team has also successfully applied this method in a pilot study to diagnose COVID-19 infections using voice samples, achieving a detection rate that surpasses the best digital systems to date.

“We are the first to demonstrate that such a vibrant interplay of light waves in optical fibers can directly classify complex information without any additional intelligent software,” Mario Chemnitz states.

Since December 2023, Mario Chemnitz has held the position of Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena. Following his return from INRS in Canada in 2022, where he served as a postdoc, Chemnitz has been leading an international team at Leibniz IPHT in Jena. With Nexus funding support from the Carl Zeiss Foundation, their research focuses on exploring the potentials of non-linear optics. Their goal is to develop computer-free intelligent sensor systems and microscopes, as well as techniques for green computing.

Here’s a link to and a citation for the paper,

Neuromorphic Computing via Fission-based Broadband Frequency Generation by Bennet Fischer, Mario Chemnitz, Yi Zhu, Nicolas Perron, Piotr Roztocki, Benjamin MacLellan, Luigi Di Lauro, A. Aadhi, Cristina Rimoldi, Tiago H. Falk, Roberto Morandotti. Advanced Science Volume 10, Issue 35 December 15, 2023 2303835 DOI: https://doi.org/10.1002/advs.202303835. First published: 02 October 2023

This paper is open access.

July 2024: Neural networks made of light

A July 12, 2024 news item on ScienceDaily announces research from another German team,

Scientists propose a new way of implementing a neural network with an optical system which could make machine learning more sustainable in the future. The researchers at the Max Planck Institute for the Science of Light have published their new method in Nature Physics, demonstrating a method much simpler than previous approaches.

A July 12, 2024 Max Planck Institute for the Science of Light press release (also on EurekAlert), which originated the news item, provides more detail about their approach to neuromorphic computing,

Machine learning and artificial intelligence are becoming increasingly widespread with applications ranging from computer vision to text generation, as demonstrated by ChatGPT. However, these complex tasks require increasingly complex neural networks; some with many billion parameters. This rapid growth of neural network size has put the technologies on an unsustainable path due to their exponentially growing energy consumption and training times. For instance, it is estimated that training GPT-3 consumed more than 1,000 MWh of energy, which amounts to the daily electrical energy consumption of a small town. This trend has created a need for faster, more energy- and cost-efficient alternatives, sparking the rapidly developing field of neuromorphic computing. The aim of this field is to replace the neural networks on our digital computers with physical neural networks. These are engineered to perform the required mathematical operations physically in a potentially faster and more energy-efficient way.

Optics and photonics are particularly promising platforms for neuromorphic computing since energy consumption can be kept to a minimum. Computations can be performed in parallel at very high speeds only limited by the speed of light. However, so far, there have been two significant challenges: firstly, realizing the necessary complex mathematical computations requires high laser powers; secondly, there has been no efficient general training method for such physical neural networks.

Both challenges can be overcome with the new method proposed by Clara Wanjura and Florian Marquardt from the Max Planck Institute for the Science of Light in their new article in Nature Physics. “Normally, the data input is imprinted on the light field. However, in our new methods we propose to imprint the input by changing the light transmission,” explains Florian Marquardt, Director at the Institute. In this way, the input signal can be processed in an arbitrary fashion. This is true even though the light field itself behaves in the simplest way possible in which waves interfere without otherwise influencing each other. Therefore, their approach allows one to avoid complicated physical interactions to realize the required mathematical functions which would otherwise require high-power light fields. Evaluating and training this physical neural network would then become very straightforward: “It would really be as simple as sending light through the system and observing the transmitted light. This lets us evaluate the output of the network. At the same time, this allows one to measure all relevant information for the training”, says Clara Wanjura, the first author of the study. The authors demonstrated in simulations that their approach can be used to perform image classification tasks with the same accuracy as digital neural networks.
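
Here’s a rough numerical sketch of the core idea as I read it, with the caveat that the layered two-port scatterer below is my own toy geometry, not the authors’ model: the data are imprinted on the transmission of a linear scattering system rather than on the light field, and the detected intensity ends up being a nonlinear function of the data even though the waves themselves only interfere linearly.

```python
# Minimal sketch of the idea (not the authors' exact model): the data are imprinted on the
# *transmission* of a linear scattering system rather than on the input light field. The field
# obeys purely linear wave physics, yet the detected intensity is a nonlinear function of the data.
# Assumes numpy; the layered-scatterer geometry and parameter choices are illustrative only.
import numpy as np

def scattering_matrix(theta):
    """A lossless 2-port scatterer (beam-splitter-like) whose mixing angle encodes one input value."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 1j * s],
                     [1j * s, c]])

def optical_output(x):
    """Send a fixed probe field through a cascade of scatterers parameterized by the data x."""
    field = np.array([1.0 + 0j, 0.0 + 0j])       # fixed probe light, independent of the data
    for xi in x:
        field = scattering_matrix(xi) @ field     # linear wave propagation at every step
    return np.abs(field) ** 2                     # detected intensities (what the camera sees)

x = np.array([0.3, 1.1, 0.7])
print(optical_output(x))            # nonlinear in x, although each step is linear in the field
print(optical_output(2 * x))        # doubling the data does not double the output: nonlinearity
```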

In the future, the authors are planning to collaborate with experimental groups to explore the implementation of their method. Since their proposal significantly relaxes the experimental requirements, it can be applied to many physically very different systems. This opens up new possibilities for neuromorphic devices allowing physical training over a broad range of platforms.

Here’s a link to and a citation for the paper,

Fully nonlinear neuromorphic computing with linear wave scattering by Clara C. Wanjura & Florian Marquardt. Nature Physics (2024) DOI: https://doi.org/10.1038/s41567-024-02534-9 Published: 09 July 2024

This paper is open access.

Dual functions—neuromorphic (brainlike) and security—with papertronic devices

Michael Berger’s June 27, 2024 Nanowerk Spotlight article describes some of the latest work on developing electronic paper devices (yes, paper), Note 1: Links have been removed, Note 2: If you do check out Berger’s article, you will need to click a box confirming you are human,

Paper-based electronic devices have long been an intriguing prospect for researchers, offering potential advantages in sustainability, cost-effectiveness, and flexibility. However, translating the unique properties of paper into functional electronic components has presented significant challenges. Traditional semiconductor manufacturing processes are incompatible with paper’s thermal sensitivity and porous structure. Previous attempts to create paper-based electronics often resulted in devices with limited functionality or poor durability.

Recent advances in materials science and nanofabrication techniques have opened new avenues for realizing sophisticated electronic devices on paper substrates. Researchers have made progress in developing conductive inks, flexible electrodes, and solution-processable semiconductors that can be applied to paper without compromising its inherent properties. These developments have paved the way for creating paper-based sensors, energy storage devices, and simple circuits.

Despite these advancements, achieving complex electronic functionalities on paper, particularly in areas like neuromorphic computing and security applications, has remained elusive. Neuromorphic devices, which mimic the behavior of biological synapses, typically require precise control of charge transport and storage mechanisms.

Similarly, physically unclonable functions (PUFs) used in security applications depend on the ability to generate random, unique patterns at the nanoscale level. Implementing these sophisticated functionalities on paper substrates has been a persistent challenge due to the material’s inherent variability and limited compatibility with advanced fabrication techniques.

A research team in Korea has now made significant strides in addressing these challenges, developing a versatile paper-based electronic device that demonstrates both neuromorphic and security capabilities. Their work, published in Advanced Materials (“Versatile Papertronics: Photo-Induced Synapse and Security Applications on Papers”), describes a novel approach to creating multifunctional “papertronics” using a combination of solution-processable materials and innovative device architectures.

The team showcased the potential of their device by simulating a facial recognition task. Using a simple neural network architecture and the light-responsive properties of their paper-based device, they achieved a recognition accuracy of 91.7% on a standard face database. This impressive performance was achieved with a remarkably low voltage bias of -0.01 V, demonstrating the energy efficiency of the approach. The ability to operate at such low voltages is particularly advantageous for portable and low-power applications.

In addition to its neuromorphic capabilities, the device also showed promise as a physically unclonable function (PUF) for security applications. The researchers leveraged the inherent randomness in the deposition of SnO2 nanoparticles [tin oxide nanoparticles] to create unique electrical characteristics in each device. By fabricating arrays of these devices on paper, they generated security keys that exhibited high levels of randomness and uniqueness.
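
Here’s a small Python sketch of how device-to-device randomness becomes a security key and how such keys are typically scored; the log-normal spread standing in for the random SnO2 deposition, and the uniformity/uniqueness metrics, are my own illustrative choices rather than the paper’s measured data.

```python
# Minimal sketch of turning device-to-device randomness into PUF keys and scoring them.
# Assumes numpy; the log-normal "conductance" spread standing in for random SnO2 deposition
# is an illustrative assumption, not the paper's measured distribution.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_chips, n_cells = 20, 128                      # 20 paper chips, 128 devices per key
conductance = rng.lognormal(mean=0.0, sigma=0.5, size=(n_chips, n_cells))

# One bit per device: compare each cell to the median of its own chip.
keys = (conductance > np.median(conductance, axis=1, keepdims=True)).astype(np.uint8)

uniformity = keys.mean()                        # ideal: 0.5 (as many 1s as 0s)
hd = [np.mean(a != b) for a, b in combinations(keys, 2)]
uniqueness = float(np.mean(hd))                 # ideal: 0.5 (keys from different chips uncorrelated)
print(f"uniformity={uniformity:.3f}  uniqueness={uniqueness:.3f}")
```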

One of the most intriguing aspects of this research is the dual functionality achieved with a single device structure. The ability to serve as both a neuromorphic component and a security element could lead to the development of highly integrated, secure edge computing devices on paper substrates. This convergence of functionalities addresses growing concerns about data privacy and security in Internet of Things (IoT) applications.

Berger’s June 27, 2024 Nanowerk Spotlight article offers more detail about the work and it’s written in an accessible fashion. Berger also notes at the end that there are still a lot of challenges before this work leaves the laboratory.

Here’s a link to and a citation for the paper,

Versatile Papertronics: Photo-Induced Synapse and Security Applications on Papers by Wangmyung Choi, Jihyun Shin, Yeong Jae Kim, Jaehyun Hur, Byung Chul Jang, Hocheon Yoo. Advanced Materials DOI: https://doi.org/10.1002/adma.202312831 First published: 13 June 2024

This paper is behind a paywall.

Proposed platform for brain-inspired computing

Researchers at the University of California at Santa Barbara (UCSB) have proposed a more energy-efficient architecture for neuromorphic (brainlike or brain-inspired) computing according to a June 25, 2024 news item on ScienceDaily,

Computers have come so far in terms of their power and potential, rivaling and even eclipsing human brains in their ability to store and crunch data, make predictions and communicate. But there is one domain where human brains continue to dominate: energy efficiency.

“The most efficient computers are still approximately four orders of magnitude — that’s 10,000 times — higher in energy requirements compared to the human brain for specific tasks such as image processing and recognition, although they outperform the brain in tasks like mathematical calculations,” said UC Santa Barbara electrical and computer engineering Professor Kaustav Banerjee, a world expert in the realm of nanoelectronics. “Making computers more energy efficient is crucial because the worldwide energy consumption by on-chip electronics stands at #4 in the global rankings of nation-wise energy consumption, and it is increasing exponentially each year, fueled by applications such as artificial intelligence.” Additionally, he said, the problem of energy inefficient computing is particularly pressing in the context of global warming, “highlighting the urgent need to develop more energy-efficient computing technologies.”

….

A June 24, 2024 UCSB news release (also on EurekAlert), which originated the news item, delves further into the subject,

Neuromorphic (NM) computing has emerged as a promising way to bridge the energy efficiency gap. By mimicking the structure and operations of the human brain, where processing occurs in parallel across an array of low power-consuming neurons, it may be possible to approach brain-like energy efficiency. In a paper published in the journal Nature Communications, Banerjee and co-workers Arnab Pal, Zichun Chai, Junkai Jiang and Wei Cao, in collaboration with researchers Vivek De and Mike Davies from Intel Labs propose such an ultra-energy efficient platform, using 2D transition metal dichalcogenide (TMD)-based tunnel-field-effect transistors (TFETs). Their platform, the researchers say, can bring the energy requirements to within two orders of magnitude (about 100 times) with respect to the human brain.

Leakage currents and subthreshold swing

The concept of neuromorphic computing has been around for decades, though the research around it has intensified only relatively recently. Advances in circuitry that enable smaller, denser arrays of transistors, and therefore more processing and functionality for less power consumption are just scratching the surface of what can be done to enable brain-inspired computing. Add to that an appetite generated by its many potential applications, such as AI and the Internet-of-Things, and it’s clear that expanding the options for a hardware platform for neuromorphic computing must be addressed in order to move forward.

Enter the team’s 2D tunnel-transistors. Emerging out of Banerjee’s longstanding research efforts to develop high-performance, low-power consumption transistors to meet the growing hunger for processing without a matching increase in power requirement, these atomically thin, nanoscale transistors are responsive at low voltages, and as the foundation of the researchers’ NM platform, can mimic the highly energy efficient operations of the human brain. In addition to lower off-state currents, the 2D TFETs also have a low subthreshold swing (SS), a parameter that describes how effectively a transistor can switch from off to on. According to Banerjee, a lower SS means a lower operating voltage, and faster and more efficient switching.
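
For anyone wondering what “subthreshold swing” actually measures, here’s a short sketch: it’s the gate-voltage increase needed to raise the drain current tenfold, usually quoted in millivolts per decade (conventional transistors are limited to roughly 60 mV/decade at room temperature, while tunnel FETs can go lower). The two current-voltage curves below are idealized stand-ins I made up, not measured device data.

```python
# Minimal sketch of what "subthreshold swing" (SS) means: the gate-voltage increase needed for a
# tenfold increase in drain current. Assumes numpy; the two exponential I-V curves below are
# idealized stand-ins, not measured MOSFET/TFET data.
import numpy as np

vg = np.linspace(0.0, 0.4, 200)                          # gate voltage sweep (V)
i_mosfet = 1e-12 * 10 ** (vg / 0.070)                    # ~70 mV/decade (thermionic, >= ~60 mV/dec at 300 K)
i_tfet   = 1e-14 * 10 ** (vg / 0.030)                    # ~30 mV/decade (sub-thermionic, tunneling)

def subthreshold_swing(vg, i_d):
    """SS in mV/decade, averaged over the sweep."""
    return 1e3 * np.mean(np.gradient(vg) / np.gradient(np.log10(i_d)))

print("MOSFET-like SS:", round(subthreshold_swing(vg, i_mosfet), 1), "mV/dec")
print("TFET-like SS:  ", round(subthreshold_swing(vg, i_tfet), 1), "mV/dec")
```

A lower swing means the device can switch between its off and on states with a smaller gate-voltage excursion, which is why it translates into lower operating voltages and lower switching energy.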

“Neuromorphic computing architectures are designed to operate with very sparse firing circuits,” said lead author Arnab Pal, “meaning they mimic how neurons in the brain fire only when necessary.” In contrast to the more conventional von Neumann architecture of today’s computers, in which data is processed sequentially, memory and processing components are separated and which continuously draw power throughout the entire operation, an event-driven system such as a NM computer fires up only when there is input to process, and memory and processing are distributed across an array of transistors. Companies like Intel and IBM have developed brain-inspired platforms, deploying billions of interconnected transistors and generating significant energy savings.

However, there’s still room for energy efficiency improvement, according to the researchers.

“In these systems, most of the energy is lost through leakage currents when the transistors are off, rather than during their active state,” Banerjee explained. A ubiquitous phenomenon in the world of electronics, leakage currents are small amounts of electricity that flow through a circuit even when it is in the off state (but still connected to power). According to the paper, current NM chips use traditional metal-oxide-semiconductor field-effect transistors (MOSFETs) which have a high on-state current, but also high off-state leakage. “Since the power efficiency of these chips is constrained by the off-state leakage, our approach — using tunneling transistors with much lower off-state current — can greatly improve power efficiency,” Banerjee said.

When integrated into a neuromorphic circuit, which emulates the firing and reset of neurons, the TFETs proved themselves more energy efficient than state-of-the-art MOSFETs, particularly the FinFETs (a MOSFET design that incorporates vertical “fins” as a way to provide better control of switching and leakage). TFETs are still in the experimental stage; however, the performance and energy efficiency of neuromorphic circuits based on them make them a promising candidate for the next generation of brain-inspired computing.

According to co-authors Vivek De (Intel Fellow) and Mike Davies (Director of Intel’s Neuromorphic Computing Lab), “Once realized, this platform can bring the energy consumption in chips to within two orders of magnitude with respect to the human brain — not accounting for the interface circuitry and memory storage elements. This represents a significant improvement from what is achievable today.”

Eventually, one can realize three-dimensional versions of these 2D-TFET based neuromorphic circuits to provide even closer emulation of the human brain, added Banerjee, widely recognized as one of the key visionaries behind 3D integrated circuits that are now witnessing wide scale commercial proliferation.

Here’s a link to and a citation for the latest paper,

An ultra energy-efficient hardware platform for neuromorphic computing enabled by 2D-TMD tunnel-FETs by Arnab Pal, Zichun Chai, Junkai Jiang, Wei Cao, Mike Davies, Vivek De & Kaustav Banerjee. Nature Communications volume 15, Article number: 3392 (2024) DOI: https://doi.org/10.1038/s41467-024-46397-3 Published: 22 April 2024

This paper is open access.

New approach to brain-inspired (neuromorphic) computing: measuring information transfer

An April 8, 2024 news item on Nanowerk announces a new approach to neuromorphic computing that involves measurement, Note: Links have been removed,

The biological brain, especially the human brain, is a desirable computing system that consumes little energy and runs at high efficiency. To build a computing system just as good, many neuromorphic scientists focus on designing hardware components intended to mimic the elusive learning mechanism of the brain. Recently, a research team has approached the goal from a different angle, focusing on measuring information transfer instead.

Their method went through biological and simulation experiments and then proved effective in an electronic neuromorphic system. It was published in Intelligent Computing (“Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics”).

An April 8, 2024 Intelligent Computing news release on EurekAlert delves further into the topic,

Although electronic systems have not fully replicated the complex information transfer between synapses and neurons, the team has demonstrated that it is possible to transform biological circuits into electronic circuits while maintaining the amount of information transferred. “This represents a key step toward brain-inspired low-power artificial systems,” the authors note.

To evaluate the efficiency of information transfer, the team drew inspiration from information theory. They quantified the amount of information conveyed by synapses in single neurons, then measured the quantity using mutual information, the analysis of which reveals the relationship between input stimuli and neuron responses.
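
In case the measurement is unfamiliar, here’s a minimal sketch of the quantity involved: the mutual information between a discrete stimulus and a discrete response (say, a spike count), estimated from a joint histogram. The synthetic stimulus/response pairs are mine, not the recorded cerebellar data.

```python
# Minimal sketch of the quantity being used: the mutual information I(S;R) between a discrete
# stimulus and a discrete response, estimated from a joint histogram. Assumes numpy; the synthetic
# stimulus/response pairs are illustrative, not the recorded cerebellar data.
import numpy as np

def mutual_information(stim, resp):
    """I(S;R) in bits from paired samples of two discrete variables."""
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    np.add.at(joint, (s_idx, r_idx), 1)                    # joint histogram of (stimulus, response)
    p_sr = joint / joint.sum()
    p_s = p_sr.sum(axis=1, keepdims=True)
    p_r = p_sr.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_sr * np.log2(p_sr / (p_s * p_r))
    return np.nansum(terms)                                # zero-probability cells contribute nothing

rng = np.random.default_rng(0)
stim = rng.integers(0, 4, size=2000)                       # four stimulus classes
resp = stim + rng.integers(0, 3, size=2000)                # noisy spike counts that depend on the stimulus
print("I(S;R) =", round(mutual_information(stim, resp), 3), "bits")
```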

First, the team conducted experiments with biological neurons. They used brain slices from rats, recording and analyzing the biological circuits in cerebellar granule cells. Then they evaluated the information transmitted at the synapses from mossy fiber neurons to the cerebellar granule cells. The mossy fibers were periodically stimulated with electrical spikes to induce synaptic plasticity, a fundamental biological feature where the information transfer at the synapses is constantly strengthened or weakened with repeated neuronal activity.

The results show that the changes in mutual information values are largely consistent with the changes in biological information transfer induced by synaptic plasticity. The findings from simulation and electronic neuromorphic experiments mirrored the biological results.

Second, the team conducted experiments with simulated neurons. They applied a spiking neural network model, which was developed by the same research group. Spiking neural networks were inspired by the functioning of biological neurons and are considered a promising approach for achieving efficient neuromorphic computing.

In the model, four mossy fibers are connected to one cerebellar granule cell, and each connection is given a random weight, which affects the information transfer efficiency like synaptic plasticity does in biological circuits. In the experiments, the team applied eight stimulation patterns to all mossy fibers and recorded the responses to evaluate the information transfer in the artificial neural network.
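
Here’s a toy version of that setup in Python: four “mossy fiber” spike trains drive one leaky integrate-and-fire “granule cell” through random weights, and the response to eight input patterns is recorded (the spike counts could then be fed to a mutual-information estimator like the one sketched earlier). All parameters are invented for illustration; the published spiking network model is considerably more detailed.

```python
# Minimal sketch of the simulation setup described above: four "mossy fiber" spike trains drive one
# leaky integrate-and-fire (LIF) "granule cell" through random synaptic weights, and the response to
# eight stimulation patterns is recorded. Assumes numpy; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
weights = rng.uniform(0.5, 1.5, size=4)            # random synaptic weights, one per mossy fiber
patterns = [np.array([(p >> i) & 1 for i in range(4)]) for p in range(8)]   # 8 binary input patterns

def granule_response(pattern, steps=200, dt=1.0, tau=20.0, v_th=1.0, rate=0.05):
    """Spike count of a LIF neuron driven by Poisson-like spikes on the active mossy fibers."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        inputs = (rng.random(4) < rate * pattern)           # random spikes only on active fibers
        v += dt * (-v / tau) + weights @ inputs              # leaky integration of weighted inputs
        if v >= v_th:
            spikes += 1
            v = 0.0                                           # reset after firing
    return spikes

for pat in patterns:
    print(f"pattern {pat} -> {granule_response(pat)} spikes")
```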

Third, the team conducted experiments with electronic neurons. A setup similar to those in the biological and simulation experiments was used. A previously developed semiconductor device functioned as a neuron, and four specialized memristors functioned as synapses. The team applied 20 spike sequences to decrease resistance values, then applied another 20 to increase them. The changes in resistance values were investigated to assess the information transfer efficiency within the neuromorphic system.

In addition to verifying the quantity of information transferred in biological, simulated and electronic neurons, the team also highlighted the importance of spike timing, which as they observed is closely related to information transfer. This observation could influence the development of neuromorphic computing, given that most devices are designed with spike-frequency-based algorithms.

Here’s a link to and a citation for the paper,

Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics by Daniela Gandolfi, Lorenzo Benatti, Tommaso Zanotti, Giulia M. Boiani, Albertino Bigiani, Francesco M. Puglisi, and Jonathan Mapelli. Intelligent Computing 1 Feb 2024 Vol 3 Article ID: 0059 DOI: 10.34133/icomputing.0059

This paper is open access.

Butterfly mating inspires neuromorphic (brainlike) computing

Michael Berger writes about a multisensory approach to neuromorphic computing inspired by butterflies in his February 2, 2024 Nanowerk Spotlight article, Note: Links have been removed,

Artificial intelligence systems have historically struggled to integrate and interpret information from multiple senses the way animals intuitively do. Humans and other species rely on combining sight, sound, touch, taste and smell to better understand their surroundings and make decisions. However, the field of neuromorphic computing has largely focused on processing data from individual senses separately.

This unisensory approach stems in part from the lack of miniaturized hardware able to co-locate different sensing modules and enable in-sensor and near-sensor processing. Recent efforts have targeted fusing visual and tactile data. However, visuochemical integration, which merges visual and chemical information to emulate complex sensory processing such as that seen in nature—for instance, butterflies integrating visual signals with chemical cues for mating decisions—remains relatively unexplored. Smell can potentially alter visual perception, yet current AI leans heavily on visual inputs alone, missing a key aspect of biological cognition.

Now, researchers at Penn State University have developed bio-inspired hardware that embraces heterogeneous integration of nanomaterials to allow the co-location of chemical and visual sensors along with computing elements. This facilitates efficient visuochemical information processing and decision-making, taking cues from the courtship behaviors of a species of tropical butterfly.

In the paper published in Advanced Materials (“A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues”), the researchers describe creating their visuochemical integration platform inspired by Heliconius butterflies. During mating, female butterflies rely on integrating visual signals like wing color from males along with chemical pheromones to select partners. Specialized neurons combine these visual and chemical cues to enable informed mate choice.

To emulate this capability, the team constructed hardware encompassing monolayer molybdenum disulfide (MoS2) memtransistors serving as visual capture and processing components. Meanwhile, graphene chemitransistors functioned as artificial olfactory receptors. Together, these nanomaterials provided the sensing, memory and computing elements necessary for visuochemical integration in a compact architecture.
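
A crude way to picture the integration step is a single decision unit that only fires when the visual and chemical cues agree, loosely mirroring how the MoS2 “visual” and graphene “chemical” front-ends feed a shared mating circuit. The weights and threshold below are invented for illustration; the actual hardware implements this with memtransistor circuits rather than a formula.

```python
# Minimal sketch of cross-modal (visuo-chemical) integration: a decision unit that fires only when
# the visual cue and the chemical cue are jointly strong enough. Assumes numpy-free plain Python;
# the weights and threshold are invented, not taken from the paper's circuits.
def mating_decision(visual_score, pheromone_level,
                    w_visual=0.5, w_chemical=0.5, threshold=0.7):
    """Weighted multisensory integration with a hard firing threshold."""
    drive = w_visual * visual_score + w_chemical * pheromone_level
    return drive >= threshold

cases = [(0.9, 0.8), (0.9, 0.0), (0.1, 0.9), (0.0, 0.0)]   # (visual, chemical) cue strengths in [0, 1]
for v, c in cases:
    print(f"visual={v:.1f} chemical={c:.1f} -> mate: {mating_decision(v, c)}")
```

With these example numbers, only the case where both cues are strong crosses the threshold, which is the behavioral point of combining modalities rather than relying on vision alone.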

While mating butterflies served as inspiration, the developed technology has much wider relevance. It represents a significant step toward overcoming the reliance of artificial intelligence on single data modalities. Enabling integration of multiple senses can greatly improve situational understanding and decision-making for autonomous robots, vehicles, monitoring devices and other systems interacting with complex environments.

The work also helps progress neuromorphic computing approaches seeking to emulate biological brains for next-generation ML acceleration, edge deployment and reduced power consumption. In nature, cross-modal learning underpins animals’ adaptable behavior and intelligence emerging from brains organizing sensory inputs into unified percepts. This research provides a blueprint for hardware co-locating sensors and processors to more closely replicate such capabilities.

It’s fascinating to me how many times butterflies inspire science,

Butterfly-inspired visuo-chemical integration. a) A simplified abstraction of visual and chemical stimuli from male butterflies and visuo-chemical integration pathway in female butterflies. b) Butterfly-inspired neuromorphic hardware comprising monolayer MoS2 memtransistor-based visual afferent neuron, graphene-based chemoreceptor neuron, and MoS2 memtransistor-based neuro-mimetic mating circuits. Courtesy: Wiley/Penn State University Researchers

Here’s a link to and a citation for the paper,

A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues by Yikai Zheng, Subir Ghosh, Saptarshi Das. Advanced Materials DOI: https://doi.org/10.1002/adma.202307380 First published: 09 December 2023

This paper is open access.

Brain-inspired (neuromorphic) computing with twisted magnets and a patent for manufacturing permanent magnets without rare earths

I have two news bits, both of them concerned with magnets.

Patent for magnets that can be made without rare earths

I’m starting with the patent news first since this is (as the company notes in its news release) a “Landmark Patent Issued for Technology Critically Needed to Combat Chinese Monopoly.”

For those who don’t know, China supplies most of the rare earths used in computers, smart phones, and other devices. On general principles, having a single supplier dominate production of and access to a necessary material for devices that most of us rely on can raise tensions. Plus, you can’t mine for resources forever.

This December 19, 2023 Nanocrystal Technology LP news release heralds an exciting development (for the impatient, further down the page I have highlighted the salient sections),

Nanotechnology Discovery by 2023 Nobel Prize Winner Became Launch Pad to Create Permanent Magnets without Rare Earths from China

NEW YORK, NY, UNITED STATES, December 19, 2023 /EINPresswire.com/ — Integrated Nano-Magnetics Corp, a wholly owned subsidiary of Nanocrystal Technology LP, was awarded a patent for technology built upon a fundamental nanoscience discovery made by Aleksey Yekimov, its former Chief Scientific Officer.

This patent will enable the creation of strong permanent magnets which are critically needed for both industrial and military applications but cannot be manufactured without certain “rare earth” elements available mostly from China.

At a glittering awards ceremony held in Stockholm on December 10, 2023, three scientists, Aleksey Yekimov, Louis Brus (Professor at Columbia University) and Moungi Bawendi (Professor at MIT) were honored with the Nobel Prize in Chemistry for their discovery of the “quantum dot” which is now fueling practical applications in tuning the colors of LEDs, increasing the resolution of TV screens, and improving MRI imaging.

As stated by the Royal Swedish Academy of Sciences, “Quantum dots are … bringing the greatest benefits to humankind. Researchers believe that in the future they could contribute to flexible electronics, tiny sensors, thinner solar cells, and encrypted quantum communications – so we have just started exploring the potential of these tiny particles.”

Aleksey Yekimov worked for over 19 years until his retirement as Chief Scientific Officer of Nanocrystals Technology LP, an R & D company in New York founded by two Indian-American entrepreneurs, Rameshwar Bhargava and Rajan Pillai.

Yekimov, who was born in Russia, had already received the highest scientific honors for his work before he immigrated to USA in 1999. Yekimov was greatly intrigued by Nanocrystal Technology’s research project and chose to join the company as its Chief Scientific Officer.

During its early years, the company worked on efficient light generation by doping host nanoparticles about the same size as a quantum dot with an additional impurity atom. Bhargava came up with the novel idea of incorporating a single impurity atom, a dopant, into a quantum dot sized host, and thus achieve an extraordinary change in the host material’s properties such as inducing strong permanent magnetism in weak, readily available paramagnetic materials. To get a sense of the scale at which nanotechnology works, and as vividly illustrated by the Nobel Foundation, the difference in size between a quantum dot and a soccer ball is about the same as the difference between a soccer ball and planet Earth.
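
As a quick sanity check on that size analogy (quantum dot : soccer ball ≈ soccer ball : Earth), here’s the back-of-the-envelope arithmetic with rounded textbook figures; the implied quantum-dot diameter comes out at a few nanometres, which is indeed the typical size.

```python
# A quick back-of-the-envelope check of the Nobel Foundation's size analogy quoted above
# (quantum dot : soccer ball ~ soccer ball : Earth). Values are rounded textbook figures.
earth_diameter = 1.27e7        # metres
ball_diameter = 0.22           # metres (a regulation soccer ball)

ratio = earth_diameter / ball_diameter            # how much bigger Earth is than the ball
implied_dot = ball_diameter / ratio               # shrink the ball by the same factor
print(f"Earth/ball ratio ~ {ratio:.1e}")
print(f"implied quantum-dot size ~ {implied_dot * 1e9:.1f} nm")   # typical quantum dots are ~2-10 nm
```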

Currently, strong permanent magnets are manufactured from “rare earths” available mostly in China which has established a near monopoly on the supply of rare-earth based strong permanent magnets. Permanent magnets are a fundamental building block for electro-mechanical devices such as motors found in all automobiles including electric vehicles, trucks and tractors, military tanks, wind turbines, aircraft engines, missiles, etc. They are also required for the efficient functioning of audio equipment such as speakers and cell phones as well as certain magnetic storage media.

The existing market for permanent magnets is $28 billion and is projected to reach $50 billion by 2030 in view of the huge increase in usage of electric vehicles. China’s overwhelming dominance in this field has become a matter of great concern to governments of all Western and other industrialized nations. As the Wall St. Journal put it, China now has a “stranglehold” on the economies and security of other countries.

The possibility of making permanent magnets without the use of any rare earths mined in China has intrigued leading physicists and chemists for nearly 30 years. On December 19, 2023, a U.S. patent with the title “Strong Non Rare Earth Permanent Magnets from Double Doped Magnetic Nanoparticles” was granted to Integrated Nano-Magnetics Corp. [emphasis mine] Referring to this major accomplishment, Bhargava said, “The pioneering work done by Yekimov, Brus and Bawendi has provided the foundation for us to make other discoveries in nanotechnology which will be of great benefit to the world.”

I was not able to find any company websites. The best I could find is a Nanocrystals Technology LinkedIn webpage and some limited corporate data for Integrated Nano-Magnetics on opencorporates.com.

Twisted magnets and brain-inspired computing

This research offers a pathway to neuromorphic (brainlike) computing with chiral (or twisted) magnets, which, as best as I understand it, do not require rare earths. From a November 13, 2023 news item on ScienceDaily,

A form of brain-inspired computing that exploits the intrinsic physical properties of a material to dramatically reduce energy use is now a step closer to reality, thanks to a new study led by UCL [University College London] and Imperial College London [ICL] researchers.

In the new study, published in the journal Nature Materials, an international team of researchers used chiral (twisted) magnets as their computational medium and found that, by applying an external magnetic field and changing temperature, the physical properties of these materials could be adapted to suit different machine-learning tasks.

A November 9, 2023 UCL press release (also on EurekAlert but published November 13, 2023), which originated the news item, fills in a few more details about the research,

Dr Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), the lead author of the paper, said: “This work brings us a step closer to realising the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains.

“The next step is to identify materials and device architectures that are commercially viable and scalable.”

Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tonnes of carbon dioxide.

Physical reservoir computing is one of several neuromorphic (or brain inspired) approaches that aims to remove the need for distinct memory and processing units, facilitating more efficient ways to process data. In addition to being a more sustainable alternative to conventional computing, physical reservoir computing could be integrated into existing circuitry to provide additional capabilities that are also energy efficient.

In the study, involving researchers in Japan and Germany, the team used a vector network analyser to determine the energy absorption of chiral magnets at different magnetic field strengths and temperatures ranging from -269 °C to room temperature.

They found that different magnetic phases of chiral magnets excelled at different types of computing task. The skyrmion phase, where magnetised particles are swirling in a vortex-like pattern, had a potent memory capacity apt for forecasting tasks. The conical phase, meanwhile, had little memory, but its non-linearity was ideal for transformation tasks and classification – for instance, identifying if an animal is a cat or dog.
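
Here’s a minimal sketch of what physical reservoir computing asks of a material like these chiral magnets: drive a fixed dynamical system with an input signal, record its state, and train only a linear readout. The random recurrent network below is a generic stand-in (an echo state network), not a model of skyrmions; the “memory” attributed to the skyrmion phase corresponds to how well a past input can be reconstructed from the present state, which is what the little memory task at the end measures.

```python
# Minimal sketch of physical reservoir computing: a fixed dynamical system (here a random recurrent
# network standing in for the magnet) is driven by an input, and only a linear readout is trained.
# Assumes numpy; the reservoir and the delay-recall "memory" task are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_states, T, washout, delay = 100, 2000, 200, 5

W_in = rng.uniform(-0.5, 0.5, size=n_states)
W = rng.normal(size=(n_states, n_states))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))          # scale for stable (echo-state) dynamics

u = rng.uniform(-1, 1, size=T)                            # random input drive
x = np.zeros(n_states)
states = np.zeros((T, n_states))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])                      # the "physics": fixed, untrained dynamics
    states[t] = x

# Train a linear readout to reproduce the input from `delay` steps ago (a memory task).
X, target = states[washout:], u[washout - delay: T - delay]
w_out, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ w_out
corr = np.corrcoef(pred, target)[0, 1]
print(f"delay-{delay} memory reconstruction r^2 ~ {corr**2:.3f}")
```

In the experiments described above, the magnet plays the role of the fixed dynamics: its phase (skyrmion, conical, and so on) sets how much memory and how much nonlinearity the reservoir offers, and the external field and temperature are the knobs that retune it for a given task.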

Co-author Dr Jack Gartside, of Imperial College London, said: “Our collaborators at UCL in the group of Professor Hidekazu Kurebayashi recently identified a promising set of materials for powering unconventional computing. These materials are special as they can support an especially rich and varied range of magnetic textures. Working with the lead author Dr Oscar Lee, the Imperial College London group [led by Dr Gartside, Kilian Stenning and Professor Will Branford] designed a neuromorphic computing architecture to leverage the complex material properties to match the demands of a diverse set of challenging tasks. This gave great results, and showed how reconfiguring physical phases can directly tailor neuromorphic computing performance.”

The work also involved researchers at the University of Tokyo and Technische Universität München and was supported by the Leverhulme Trust, Engineering and Physical Sciences Research Council (EPSRC), Imperial College London President’s Excellence Fund for Frontier Research, Royal Academy of Engineering, the Japan Science and Technology Agency, Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).

Here’s a link to and a citation for the paper,

Task-adaptive physical reservoir computing by Oscar Lee, Tianyi Wei, Kilian D. Stenning, Jack C. Gartside, Dan Prestwood, Shinichiro Seki, Aisha Aqeel, Kosuke Karube, Naoya Kanazawa, Yasujiro Taguchi, Christian Back, Yoshinori Tokura, Will R. Branford & Hidekazu Kurebayashi. Nature Materials volume 23, pages 79–87 (2024) DOI: https://doi.org/10.1038/s41563-023-01698-8 Published online: 13 November 2023 Issue Date: January 2024

This paper is open access.

A formal theory for neuromorphic (brainlike) computing hardware needed

This is one of my older pieces, as the information dates back to October 2023, but neuromorphic computing is one of my key interests and I’m particularly interested to see the upsurge in the discussion of hardware, so here goes. From an October 17, 2023 news item on Nanowerk,

There is an intense, worldwide search for novel materials to build computer microchips with that are not based on classic transistors but on much more energy-saving, brain-like components. However, whereas the theoretical basis for classic transistor-based digital computers is solid, there are no real theoretical guidelines for the creation of brain-like computers.

Such a theory would be absolutely necessary to put the efforts that go into engineering new kinds of microchips on solid ground, argues Herbert Jaeger, Professor of Computing in Cognitive Materials at the University of Groningen [Netherlands].

Key Takeaways
Scientists worldwide are searching for new materials to build energy-saving, brain-like computer microchips as classic transistor miniaturization reaches its physical limit.

Theoretical guidelines for brain-like computers are lacking, making it crucial for advancements in the field.

The brain’s versatility and robustness serve as an inspiration, despite limited knowledge about its exact workings.

A recent paper suggests that a theory for non-digital computers should focus on continuous, analogue signals and consider the characteristics of new materials.

Bridging gaps between diverse scientific fields is vital for developing a foundational theory for neuromorphic computing.

An October 17, 2023 University of Groningen press release (also on EurekAlert), which originated the news item, provides more context for this proposal,

Computers have, so far, relied on stable switches that can be off or on, usually transistors. These digital computers are logical machines and their programming is also based on logical reasoning. For decades, computers have become more powerful by further miniaturization of the transistors, but this process is now approaching a physical limit. That is why scientists are working to find new materials to make more versatile switches, which could use more values than just the digital 0 or 1.

Dangerous pitfall

Jaeger is part of the Groningen Cognitive Systems and Materials Center (CogniGron), which aims to develop neuromorphic (i.e. brain-like) computers. CogniGron is bringing together scientists who have very different approaches: experimental materials scientists and theoretical modelers from fields as diverse as mathematics, computer science, and AI. Working closely with materials scientists has given Jaeger a good idea of the challenges that they face when trying to come up with new computational materials, while it has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain is not a logical system. We can reason logically, but that is only a small part of what our brain does. Most of the time, it must work out how to bring a hand to a teacup or wave to a colleague on passing them in a corridor. ‘A lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer,’ explains Jaeger. Furthermore, our brains keep working despite fluctuations in blood pressure, external temperature, or hormone balance, and so on. How is it possible to create a computer that is as versatile and robust? Jaeger is optimistic: ‘The simple answer is: the brain is proof of principle that it can be done.’

Neurons

The brain is, therefore, an inspiration for materials scientists. Jaeger: ‘They might produce something that is made from a few hundred atoms and that will oscillate, or something that will show bursts of activity. And they will say: “That looks like how neurons work, so let’s build a neural network”.’ But they are missing a vital bit of knowledge here. ‘Even neuroscientists don’t know exactly how the brain works. This is where the lack of a theory for neuromorphic computers is problematic. Yet, the field doesn’t appear to see this.’

In a paper published in Nature Communications on 16 August, Jaeger and his colleagues Beatriz Noheda (scientific director of CogniGron) and Wilfred G. van der Wiel (University of Twente) present a sketch of what a theory for non-digital computers might look like. They propose that instead of stable 0/1 switches, the theory should work with continuous, analogue signals. It should also accommodate the wealth of non-standard nanoscale physical effects that the materials scientists are investigating.

Sub-theories

Something else that Jaeger has learned from listening to materials scientists is that devices from these new materials are difficult to construct. Jaeger: ‘If you make a hundred of them, they will not all be identical.’ This is actually very brain-like, as our neurons are not all exactly identical either. Another possible issue is that the devices are often brittle and temperature-sensitive, continues Jaeger. ‘Any theory for neuromorphic computing should take such characteristics into account.’

Importantly, a theory underpinning neuromorphic computing will not be a single theory but will be constructed from many sub-theories (see image below). Jaeger: ‘This is in fact how digital computer theory works as well, it is a layered system of connected sub-theories.’ Creating such a theoretical description of neuromorphic computers will require close collaboration of experimental materials scientists and formal theoretical modellers. Jaeger: ‘Computer scientists must be aware of the physics of all these new materials [emphasis mine] and materials scientists should be aware of the fundamental concepts in computing.’

Blind spots

Bridging this divide between materials science, neuroscience, computing science, and engineering is exactly why CogniGron was founded at the University of Groningen: it brings these different groups together. ‘We all have our blind spots,’ concludes Jaeger. ‘And the biggest gap in our knowledge is a foundational theory for neuromorphic computing. Our paper is a first attempt at pointing out how such a theory could be constructed and how we can create a common language.’

Here’s a link to and a citation for the paper,

Toward a formal theory for computing machines made out of whatever physics offers by Herbert Jaeger, Beatriz Noheda & Wilfred G. van der Wiel. Nature Communications volume 14, Article number: 4911 (2023) DOI: https://doi.org/10.1038/s41467-023-40533-1 Published: 16 August 2023

This paper is open access and there’s a 76 pp. version, “Toward a formal theory for computing machines made out of whatever physics offers: extended version” (emphasis mine) available on arXiv.

Caption: A general theory of physical computing systems would comprise existing theories as special cases. Figure taken from an extended version of the Nature Comm paper on arXiv. Credit: Jaeger et al. / University of Groningen

With regard to new materials for neuromorphic computing, my January 4, 2024 posting highlights a proposed quantum material for this purpose.

A hardware (neuromorphic and quantum) proposal for handling increased AI workload

It’s been a while since I’ve featured anything from Purdue University (Indiana, US). From a November 7, 2023 news item on Nanowerk, Note: Links have been removed,

Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?

Key Takeaways
Current AI technologies are strained by the limitations of silicon-based computing hardware, necessitating new solutions.

Research led by Erica Carlson [Purdue University] suggests that neuromorphic [brainlike] architectures, which replicate the brain’s neurons and synapses, could revolutionize computing efficiency and power.

Vanadium oxides have been identified as a promising material for creating artificial neurons and synapses, crucial for neuromorphic computing.

Innovative non-volatile memory, observed in vanadium oxides, could be the key to more energy-efficient and capable AI hardware.

Future research will explore how to optimize the synaptic behavior of neuromorphic materials by controlling their memory properties.

The colored landscape above shows a transition temperature map of VO2 (pink surface) as measured by optical microscopy. This reveals the unique way that this neuromorphic quantum material [emphasis mine] stores memory like a synapse. Image credit: Erica Carlson, Alexandre Zimmers, and Adobe Stock

An October 13, 2023 Purdue University news release (also on EurekAlert but published November 6, 2023) by Cheryl Pierce, which originated the news item, provides more detail about the work, Note: A link has been removed,

“The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.

A joint effort between physicists from Purdue University, University of California San Diego (UCSD) and École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believe they may have discovered a way to rework the hardware…. [sic] By mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2” in Advanced Electronic Materials which is featured on the back cover of the October 2023 edition.

New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”

Neuromorphic architecture basically boils down to computer chips mimicking brain behavior.  Neurons are cells in the brain that transmit information. Neurons have small gaps at their ends that allow signals to pass from one neuron to the next which are called synapses. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.

“The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon – ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”

The team relied on a recently discovered type of non-volatile memory which is driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

Alexandre Zimmers, lead experimental scientist from Sorbonne University and École Supérieure de Physique et de Chimie Industrielles, Paris, explains, “Only a few quantum materials are good candidates for future neuromorphic devices, i.e., mimicking artificial synapses and neurons. For the first time, in one of them, vanadium dioxide, we can see optically what is changing in the material as it operates as an artificial synapse. We find that memory accumulates throughout the entirety of the sample, opening new opportunities on how and where to control this property.”

“The microscopic videos show that, surprisingly, the repeated advance and retreat of metal and insulator domains causes memory to be accumulated throughout the entirety of the sample, rather than only at the boundaries of domains,” explains Carlson. “The memory appears as shifts in the local temperature at which the material transitions from insulator to metal upon heating, or from metal to insulator upon cooling. We propose that these changes in the local transition temperature accumulate due to the preferential diffusion of point defects into the metallic domains that are interwoven through the insulator as the material is cycled partway through the transition.”
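
To make the mechanism a bit more concrete, here’s a toy model I put together of that “ramp reversal” memory: every pixel of a simulated sample gets its own local transition temperature, and each partial temperature cycle nudges the transition temperature of whichever regions turned metallic during that cycle (standing in for defect diffusion into metallic domains). The shift rule and all the numbers are invented; the point is only that memory accumulates across the whole sample rather than only at domain boundaries.

```python
# Toy model of "ramp reversal" memory: each pixel has a local insulator->metal transition
# temperature, and every partial temperature cycle slightly shifts the local transition temperature
# of regions that became metallic during the cycle. Assumes numpy; shift rule and values are invented.
import numpy as np

rng = np.random.default_rng(4)
t_c = rng.normal(340.0, 2.0, size=(64, 64))        # local transition temperatures across the sample (K)
t_c_initial = t_c.copy()

def partial_cycle(t_c, t_turn=341.0, shift=0.05):
    """Heat to t_turn and back; regions that became metallic have their local Tc nudged upward."""
    went_metallic = t_c < t_turn                    # these pixels crossed into the metallic state
    return t_c + shift * went_metallic

for _ in range(100):                                # repeated partial cycling through the transition
    t_c = partial_cycle(t_c)

memory = t_c - t_c_initial                          # accumulated Tc shift = the stored "memory"
print("mean local Tc shift (K):", round(memory.mean(), 2))
print("fraction of sample carrying memory:", round((memory > 0).mean(), 2))
```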

Now that the team has established that vanadium oxides are possible candidates for future neuromorphic devices, they plan to move forward in the next phase of their research.

“Now that we have established a way to see inside this neuromorphic material, we can locally tweak and observe the effects of, for example, ion bombardment on the material’s surface,” explains Zimmers. “This could allow us to guide the electrical current through specific regions in the sample where the memory effect is at its maximum. This has the potential to significantly enhance the synaptic behavior of this neuromorphic material.”

There’s a very interesting 16 mins. 52 secs. video embedded in the October 13, 2023 Purdue University news release. In an interview with Dr. Erica Carlson, who hosts The Quantum Age website and video interviews on its YouTube Channel, Alexandre Zimmers takes you from an amusing phenomenon observed by 19th century scientists, through the 20th century, when it became of more interest as the nanoscale phenomenon could be exploited (sonar, scanning tunneling microscopes, singing birthday cards, etc.), to the 21st century, where we are integrating this new information into a quantum* material for neuromorphic hardware.

Here’s a link to and a citation for the paper,

Spatially Distributed Ramp Reversal Memory in VO2 by Sayan Basak, Yuxin Sun, Melissa Alzate Banguero, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers. Advanced Electronic Materials Volume 9, Issue 10 October 2023 2300085 DOI: https://doi.org/10.1002/aelm.202300085 First published: 10 July 2023

This paper is open access.

There’s a lot of research into neuromorphic hardware; here’s a sampling of some of my most recent posts on the topic,

There’s more, just use ‘neuromorphic hardware’ for your search term.

*’meta’ changed to ‘quantum’ on January 8, 2024.

Dynamic magnetic fractal networks for neuromorphic (brainlike) computing

Credit: Advanced Materials (2023). DOI: 10.1002/adma.202300416 [cover image]

This is a different approach to neuromorphic (brainlike) computing being described in an August 28, 2023 news item on phys.org, Note: A link has been removed,

The word “fractals” might inspire images of psychedelic colors spiraling into infinity in a computer animation. An invisible, but powerful and useful, version of this phenomenon exists in the realm of dynamic magnetic fractal networks.

Dustin Gilbert, assistant professor in the Department of Materials Science and Engineering [University of Tennessee, US], and colleagues have published new findings in the behavior of these networks—observations that could advance neuromorphic computing capabilities.

Their research is detailed in their article “Skyrmion-Excited Spin-Wave Fractal Networks,” cover story for the August 17, 2023, issue of Advanced Materials.

An August 18, 2023 University of Tennessee news release, which originated the news item, provides more details,

“Most magnetic materials—like in refrigerator magnets—are just comprised of domains where the magnetic spins all orient parallel,” said Gilbert. “Almost 15 years ago, a German research group discovered these special magnets where the spins make loops—like a nanoscale magnetic lasso. These are called skyrmions.”

Named for legendary particle physicist Tony Skyrme, a skyrmion’s magnetic swirl gives it a non-trivial topology. As a result of this topology, the skyrmion has particle-like properties—they are hard to create or destroy, they can move and even bounce off of each other. The skyrmion also has dynamic modes—they can wiggle, shake, stretch, whirl, and breath[e].

As the skyrmions “jump and jive,” they are creating magnetic spin waves with a very narrow wavelength. The interactions of these waves form an unexpected fractal structure.

“Just like a person dancing in a pool of water, they generate waves which ripple outward,” said Gilbert. “Many people dancing make many waves, which normally would seem like a turbulent, chaotic sea. We measured these waves and showed that they have a well-defined structure and collectively form a fractal which changes trillions of times per second.”

Fractals are important and interesting because they are inherently tied to a “chaos effect”—small changes in initial conditions lead to big changes in the fractal network.

“Where we want to go with this is that if you have a skyrmion lattice and you illuminate it with spin waves, the way the waves make [their] way through this fractal-generating structure is going to depend very intimately on its construction,” said Gilbert. “So, if you could write individual skyrmions, it can effectively process incoming spin waves into something on the backside—and it’s programmable. It’s a neuromorphic architecture.”

The Advanced Materials cover illustration [image at top of this posting] depicts a visual representation of this process, with the skyrmions floating on top of a turbulent blue sea illustrative of the chaotic structure generated by the spin wave fractal.

“Those waves interfere just like if you throw a handful of pebbles into a pond,” said Gilbert. “You get a choppy, turbulent mess. But it’s not just any simple mess, it’s actually a fractal. We have an experiment now showing that the spin waves generated by skyrmions aren’t just a mess of waves, they have inherent structure of their very own. By, essentially, controlling those stones that we ‘throw in,’ you get very different patterns, and that’s what we’re driving towards.”
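If you want to play with the pebbles-in-a-pond picture yourself, here's a cartoon sketch: point emitters on a grid produce an interference pattern, and a box-counting estimate gauges how intricate the bright fringes are. Nudging a single emitter reshuffles the pattern, echoing the sensitivity to initial conditions Gilbert describes. The emitter layout, wavelength, and threshold are all invented for illustration; this is not the spin-wave physics in the paper.

```python
# Cartoon interference pattern from point emitters, with a box-counting estimate.
import numpy as np

def interference(sources, size=256, wavelength=8.0):
    y, x = np.mgrid[0:size, 0:size]
    field = np.zeros((size, size))
    for sx, sy in sources:                       # each emitter radiates a circular wave
        r = np.hypot(x - sx, y - sy)
        field += np.cos(2 * np.pi * r / wavelength)
    return field

def box_count_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    for b in box_sizes:
        s = mask.shape[0] // b
        # count boxes of side b that contain at least one bright pixel
        boxes = mask[:s * b, :s * b].reshape(s, b, s, b).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log(count) vs log(1/box size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
sources = rng.uniform(0, 256, size=(12, 2))      # 12 "dancers" dropped in the pond
pattern = interference(sources) > 4.0            # keep only the brightest fringes
print("box-counting dimension estimate:", round(box_count_dimension(pattern), 2))

sources[0] += 2.0                                # nudge one emitter slightly
pattern2 = interference(sources) > 4.0
print("pixels that changed:", int((pattern != pattern2).sum()))
```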

The discovery was made in part by neutron scattering experiments at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor and at the National Institute of Standards and Technology (NIST) Center for Neutron Research. Neutrons are magnetic and pass through materials easily, making them ideal probes for studying materials with complex magnetic behavior such as skyrmions and other quantum phenomena.

Gilbert’s co-authors for the new article are Nan Tang, Namila Liyanage, and Liz Quigley, students in his research group; Alex Grutter and Julie Borchers from the National Institute of Standards and Technology (NIST); Lisa DeBeer-Schmitt and Mike Fitzsimmons from Oak Ridge National Laboratory; and Eric Fullerton, Sheena Patel, and Sergio Montoya from the University of California, San Diego.

The team’s next step is to build a working model using the skyrmion behavior.

“If we can develop thinking computers, that, of course, is extraordinarily important,” said Gilbert. “So, we will propose to make a miniaturized, spin wave neuromorphic architecture.” He also hopes that the ripples from this UT Knoxville discovery inspire researchers to explore uses for a spiraling range of future applications.

Here’s a link to and a citation for the paper,

Skyrmion-Excited Spin-Wave Fractal Networks by Nan Tang, W. L. N. C. Liyanage, Sergio A. Montoya, Sheena Patel, Lizabeth J. Quigley, Alexander J. Grutter, Michael R. Fitzsimmons, Sunil Sinha, Julie A. Borchers, Eric E. Fullerton, Lisa DeBeer-Schmitt, Dustin A. Gilbert. Advanced Materials Volume 35, Issue 33 August 17, 2023 2300416 DOI: https://doi.org/10.1002/adma.202300416 First published: 04 May 2023

This paper is behind a paywall.

IBM’s neuromorphic chip, a prototype and more

It seems IBM is very excited about neuromorphic computing. First, there’s an August 10, 2023 news article by Shiona McCallum & Chris Vallance for British Broadcasting Corporation (BBC) online news,

Concerns have been raised about emissions associated with warehouses full of computers powering AI systems.

IBM said its prototype could lead to more efficient, less battery draining AI chips for smartphones.

Its efficiency is down to components that work in a similar way to connections in human brains, it said.

Compared to traditional computers, “the human brain is able to achieve remarkable performance while consuming little power”, said scientist Thanos Vasilopoulos, based at IBM’s research lab in Zurich, Switzerland.

I sense a memristor about to be mentioned, from McCallum & Vallance’s August 10, 2023 news article,

Most chips are digital, meaning they store information as 0s and 1s, but the new chip uses components called memristors [memory resistors] that are analogue and can store a range of numbers.

You can think of the difference between digital and analogue as like the difference between a light switch and a dimmer switch.

The human brain is analogue, and the way memristors work is similar to the way synapses in the brain work.

Prof Ferrante Neri, from the University of Surrey, explains that memristors fall into the realm of what you might call nature-inspired computing that mimics brain function.

A memristor could “remember” its electric history, in a similar way to a synapse in a biological system.

“Interconnected memristors can form a network resembling a biological brain,” he said.

He was cautiously optimistic about the future for chips using this technology: “These advancements suggest that we may be on the cusp of witnessing the emergence of brain-like chips in the near future.”

However, he warned that developing a memristor-based computer is not a simple task and that there would be a number of challenges ahead for widespread adoption, including the costs of materials and manufacturing difficulties.

Neri is most likely aware that researchers have been excited that ‘green’ computing could be made possible by memristors since at least 2008 (see my May 9, 2008 posting “Memristors and green energy“).
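For a concrete, if cartoonish, sense of what “remembering its electric history” means, here's a minimal sketch of a dimmer-switch-style device: its conductance depends on the charge that has been pushed through it, so the same small read voltage returns a different current after each write pulse. The linear-drift update rule and all the constants are textbook-style assumptions, not IBM's device model.

```python
# A deliberately simplified memristor: state follows the history of applied pulses.
class ToyMemristor:
    def __init__(self, g_min=1e-6, g_max=1e-3, w=0.0, k=50.0):
        self.g_min, self.g_max = g_min, g_max   # conductance bounds (siemens), assumed
        self.w = w                               # internal state in [0, 1]
        self.k = k                               # how strongly pulses move the state, assumed

    def conductance(self):
        return self.g_min + self.w * (self.g_max - self.g_min)

    def apply_pulse(self, voltage, duration):
        # state drifts with the applied voltage x time, then saturates at the bounds
        self.w = min(1.0, max(0.0, self.w + self.k * voltage * duration))

    def read_current(self, read_voltage=0.1):
        return self.conductance() * read_voltage

m = ToyMemristor()
print("fresh device:   %.2e A" % m.read_current())
m.apply_pulse(voltage=1.0, duration=1e-3)        # one write pulse
print("after 1 pulse:  %.2e A" % m.read_current())
m.apply_pulse(voltage=1.0, duration=1e-3)        # a second pulse nudges it further
print("after 2 pulses: %.2e A" % m.read_current())
m.apply_pulse(voltage=-1.0, duration=1e-3)       # a negative pulse partially erases
print("after erase:    %.2e A" % m.read_current())
```

Unlike a light switch, the device sits anywhere along a continuum of conductance values, which is what lets networks of them behave like synapses.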

As it turns out, IBM published two studies on neuromorphic chips in August 2023.

The first study (mentioned in the BBC article) is also described in an August 22, 2023 article by Peter Grad for Tech Xplore. This one is a little more technical than the BBC article,

For those who are truly technical, here’s a link to and a citation for the paper,

A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference by Manuel Le Gallo, Riduan Khaddam-Aljameh, Milos Stanisavljevic, Athanasios Vasilopoulos, Benedikt Kersting, Martino Dazzi, Geethan Karunaratne, Matthias Brändli, Abhairaj Singh, Silvia M. Müller, Julian Büchel, Xavier Timoneda, Vinay Joshi, Malte J. Rasch, Urs Egger, Angelo Garofalo, Anastasios Petropoulos, Theodore Antonakopoulos, Kevin Brew, Samuel Choi, Injo Ok, Timothy Philip, Victor Chan, Claire Silvestre, Ishtiaq Ahsan, Nicole Saulnier, Pier Andrea Francese, Evangelos Eleftheriou & Abu Sebastian. Nature Electronics (2023) DOI: https://doi.org/10.1038/s41928-023-01010-1 Published: 10 August 2023

This paper is behind a paywall.

Before getting to the second paper, there’s an August 23, 2023 IBM blog post by Mike Murphy announcing its publication in Nature, Note: Links have been removed,

Although we’re still just at the precipice of the AI revolution, artificial intelligence has already begun to revolutionize the way we live and work. There’s just one problem: AI technology is incredibly power-hungry. By some estimates, running a large AI model generates more emissions over its lifetime than the average American car.

The future of AI requires new innovations in energy efficiency, from the way models are designed down to the hardware that runs them. And in a world that’s increasingly threatened by climate change, any advances in AI energy efficiency are essential to keep pace with AI’s rapidly expanding carbon footprint.

And one of the latest breakthroughs in AI efficiency from IBM Research relies on analog chips — ones that consume much less power. In a paper published in Nature today, researchers from IBM labs around the world presented their prototype analog AI chip for energy-efficient speech recognition and transcription. Their design was utilized in two AI inference experiments, and in both cases, the analog chips performed these tasks just as reliably as comparable all-digital devices — but finished the tasks faster and used less energy.

The concept of designing analog chips for AI inference is not new — researchers have been contemplating the idea for years. Back in 2021, a team at IBM developed chips that use phase-change memory (PCM) to encode the weights of a neural network directly onto the physical chip. PCM works when an electrical pulse is applied to a material, which changes the conductance of the device. The material switches between amorphous and crystalline phases: a lower electrical pulse makes the device more crystalline, providing less resistance, while a high enough electrical pulse makes the device amorphous, resulting in large resistance. Instead of recording the usual 0s or 1s you would see in digital systems, the PCM device records its state as a continuum of values between the amorphous and crystalline states. This value is called a synaptic weight, which can be stored in the physical atomic configuration of each PCM device. The memory is non-volatile, so the weights are retained when the power supply is switched off. But previous research in the field hasn’t shown how chips like these could be used on the massive models we see dominating the AI landscape today. For example, GPT-3, one of the larger popular models, has 175 billion parameters, or weights.
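To make the idea of storing a weight as a conductance a little more concrete, here's a back-of-the-envelope sketch of analog in-memory multiply-accumulate: weights live as conductances in a crossbar, inputs are applied as voltages, and the summed currents on each output line give the matrix-vector product (Ohm's law plus Kirchhoff's current law). Signed weights use a pair of devices. The conductance range and the programming noise below are my assumptions for illustration, not specifications of IBM's chip.

```python
# Analog matrix-vector multiply with crossbar conductances (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
G_MAX = 1e-4                                  # assumed maximum device conductance (siemens)

def program_crossbar(weights):
    """Map signed weights onto two non-negative conductance arrays (plus/minus devices)."""
    scale = np.abs(weights).max()
    g_pos = np.clip(weights, 0, None) / scale * G_MAX
    g_neg = np.clip(-weights, 0, None) / scale * G_MAX
    noise = 0.02 * G_MAX                      # assumed programming error
    g_pos = np.clip(g_pos + rng.normal(0, noise, g_pos.shape), 0, G_MAX)
    g_neg = np.clip(g_neg + rng.normal(0, noise, g_neg.shape), 0, G_MAX)
    return g_pos, g_neg, scale

def analog_matvec(g_pos, g_neg, scale, x, v_read=0.2):
    """Apply inputs as voltages; differential summed currents recover W @ x."""
    v = x * v_read                            # encode inputs as read voltages
    i = v @ (g_pos - g_neg).T                 # currents sum on each output line (Kirchhoff)
    return i / (G_MAX * v_read) * scale       # convert currents back to weight units

W = rng.normal(size=(4, 8))                   # a small weight matrix
x = rng.normal(size=8)                        # an input vector
g_pos, g_neg, scale = program_crossbar(W)

print("digital W @ x: ", np.round(W @ x, 3))
print("analog 'W @ x':", np.round(analog_matvec(g_pos, g_neg, scale, x), 3))
```

The two printouts agree only approximately; the small discrepancies stand in for the analog noise that hardware-aware training is meant to tolerate.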

Murphy also explains the difference (for amateurs like me) between this work and the earlier published study, from the August 23, 2023 IBM blog post, Note: Links have been removed,

Natural-language tasks aren’t the only AI problems that analog AI could solve — IBM researchers are working on a host of other uses. In a paper published earlier this month in Nature Electronics, the team showed it was possible to use an energy-efficient analog chip design for scalable mixed-signal architecture that can achieve high accuracy in the CIFAR-10 image dataset for computer vision image recognition.

These chips were conceived and designed by IBM researchers in the Tokyo, Zurich, Yorktown Heights, New York, and Almaden, California labs, and built by an external fabrication company. The phase change memory and metal levels were processed and validated at IBM Research’s lab in the Albany Nanotech Complex.

If you were to combine the benefits of the work published today in Nature, such as large arrays and parallel data-transport, with the capable digital compute-blocks of the chip shown in the Nature Electronics paper, you would see many of the building blocks needed to realize the vision of a fast, low-power analog AI inference accelerator. And pairing these designs with hardware-resilient training algorithms, the team expects these AI devices to deliver the software equivalent of neural network accuracies for a wide range of AI models in the future.

Here’s a link to and a citation for the second paper,

An analog-AI chip for energy-efficient speech recognition and transcription by S. Ambrogio, P. Narayanan, A. Okazaki, A. Fasoli, C. Mackin, K. Hosokawa, A. Nomura, T. Yasuda, A. Chen, A. Friz, M. Ishii, J. Luquin, Y. Kohda, N. Saulnier, K. Brew, S. Choi, I. Ok, T. Philip, V. Chan, C. Silvestre, I. Ahsan, V. Narayanan, H. Tsai & G. W. Burr. Nature volume 620, pages 768–775 (2023) DOI: https://doi.org/10.1038/s41586-023-06337-5 Published: 23 August 2023 Issue Date: 24 August 2023

This paper is open access.