Tag Archives: brainlike computing

Neuromorphic wires (inspired by nerve cells) amplify their own signals—no amplifiers needed

Katherine Bourzac’s September 16, 2024 article for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine provides an accessible (relatively speaking) description of a possible breakthrough for neuromorphic computing. Note: Links have been removed,

In electrical engineering, “we just take it for granted that the signal decays” as it travels, says Timothy Brown, a postdoc in materials physics at Sandia National Lab who was part of the group of researchers who made the self-amplifying device. Even the best wires and chip interconnects put up resistance to the flow of electrons, degrading signal quality over even relatively small distances. This constrains chip designs—lossy interconnects are broken up into ever smaller lengths, and signals are bolstered by buffers and drivers. A 1-square-centimeter chip has about 10,000 repeaters to drive signals, estimates R. Stanley Williams, a professor of computer engineering at Texas A&M University.
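An aside from me: the decay the engineers “take it for granted” is textbook transmission-line behaviour, not something specific to this article. Along a passive, lossy line the signal amplitude falls off exponentially with distance, which is why long interconnects get chopped into short segments with repeaters that restore the level,

```latex
V(x) = V_0 \, e^{-\alpha x}
```

where α is the attenuation constant of the line.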

Williams is one of the pioneers of neuromorphic computing, which takes inspiration from the nervous system. Axons, the electrical cables that carry signals from the body of a nerve cell to synapses where they connect with projections from other cells, are made up of electrically resistant materials. Yet they can carry high fidelity signals over long distances. The longest axons in the human body are about 1 meter, running from the base of the spine to the feet. Blue whales are thought to have 30 m long axons stretching to the tips of their tails. If something bites the whale’s tail, it will react rapidly. Even from 30 meters away, “the pulses arrive perfectly,” says Williams. “That’s something that doesn’t exist in electrical engineering.”

That’s because axons are active transmission lines: they provide gain to the signal along their length. Williams says he started pondering how to mimic this in an inorganic system 12 years ago. A grant from the US Department of Energy enabled him to build a team with the necessary resources to make it happen. The team included Williams, Brown, and Suhas Kumar, a materials physicist at Sandia.

Axons are coated with an insulating layer called the myelin sheath. Where there are gaps in the sheath, positively charged sodium and potassium ions can move in and out of the axon, changing the voltage across the cell membrane and pumping in energy in the process. Some of that energy gets taken up by the electrical signal, amplifying it.

Williams and his team wanted to mimic this in a simple structure. They didn’t try to mimic all the physical structures in axons—instead, they sought guidance in a mathematical description of how they amplify signals. Axons operate in a mode called the “edge of chaos,” which combines stable and unstable qualities. This may seem inherently contradictory. Brown likens this kind of system to a saddle that’s curved with two dips. The saddle curves up towards the front and the back, keeping you stable as you rock back and forth. But if you get jostled from side to side, you’re more likely to fall off. When you’re riding in the saddle, you’re operating at the edge of chaos, in a semistable state. In the abstract space of electrical engineering, that jostling is equivalent to wiggles in current and voltage.
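For readers who like to see the analogy move, here’s a minimal toy sketch of saddle dynamics; this is my own illustration of the “edge of chaos” picture, not the device physics from the paper. One direction pulls perturbations back, the other pushes them away, so a tiny wiggle along the unstable axis gets amplified,

```python
import numpy as np

# Toy saddle dynamics: one axis pulls perturbations back (stable),
# the other pushes them away (unstable). Sitting near the fixed point,
# a tiny wiggle along the unstable axis is amplified.
def step(state, dt=0.01):
    x, y = state
    return np.array([x - x * dt,    # stable "front-back" direction
                     y + y * dt])   # unstable "side-to-side" direction

state = np.array([1.0, 1e-3])       # big stable offset, tiny perturbation
for _ in range(500):
    state = step(state)
print(state)  # the stable part has decayed ~150x; the tiny wiggle grew ~145x
```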

There’s a long way to go from this first experimental demonstration to a reimagining of computer chip interconnects. The team is providing samples for other researchers [emphasis mine] who want to verify their measurements. And they’re trying other materials to see how well they do—LaCoO3 [lanthanum cobalt oxide] is only the first one they’ve tested.

Williams hopes this research will show electrical engineers new ideas about how to move forward. “The dream is to redesign chips,” he says. Electrical engineers have long known about nonlinear dynamics, but have hardly ever taken advantage of them, Williams says. “This requires thinking about things and doing measurements differently than they have been done for 50 years,” he says.

If you have the time, please read Bourzac’s September 16, 2024 article in its entirety. For those who want the technical nitty gritty, here’s a link to and a citation for the paper,

Axon-like active signal transmission by Timothy D. Brown, Alan Zhang, Frederick U. Nitta, Elliot D. Grant, Jenny L. Chong, Jacklyn Zhu, Sritharini Radhakrishnan, Mahnaz Islam, Elliot J. Fuller, A. Alec Talin, Patrick J. Shamberger, Eric Pop, R. Stanley Williams & Suhas Kumar. Nature volume 633, pages 804–810 (2024) DOI: https://doi.org/10.1038/s41586-024-07921 Published online: 11 September 2024 Issue Date: 26 September 2024

This paper is open access.

Light-based neural networks

It’s unusual to see the same headline used to highlight research from two different teams released in such proximity, February 2024 and July 2024, respectively. Both of these are neuromorphic (brainlike) computing stories.

February 2024: Neural networks made of light

The first team’s work is announced in a February 21, 2024 Friedrich Schiller University press release. Note: A link has been removed,

Researchers from the Leibniz Institute of Photonic Technology (Leibniz IPHT) and the Friedrich Schiller University in Jena, along with an international team, have developed a new technology that could significantly reduce the high energy demands of future AI systems. This innovation utilizes light for neuronal computing, inspired by the neural networks of the human brain. It promises not only more efficient data processing but also speeds many times faster than current methods, all while consuming considerably less energy. Published in the prestigious journal “Advanced Science,” their work introduces new avenues for environmentally friendly AI applications, as well as advancements in computerless diagnostics and intelligent microscopy.

Artificial intelligence (AI) is pivotal in advancing biotechnology and medical procedures, ranging from cancer diagnostics to the creation of new antibiotics. However, the ecological footprint of large-scale AI systems is substantial. For instance, training extensive language models like GPT-3 requires several gigawatt-hours of energy—enough to power an average nuclear power plant at full capacity for several hours.

Prof. Mario Chemnitz, new Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena, and Dr Bennet Fischer from Leibniz IPHT in Jena, in collaboration with their international team, have devised an innovative method to develop potentially energy-efficient computing systems that forego the need for extensive electronic infrastructure. They harness the unique interactions of light waves within optical fibers to forge an advanced artificial learning system.

A single fiber instead of thousands of components

Unlike traditional systems that rely on computer chips containing thousands of electronic components, their system uses a single optical fiber. This fiber is capable of performing the tasks of various neural networks—at the speed of light. “We utilize a single optical fiber to mimic the computational power of numerous neural networks,” Mario Chemnitz, who is also leader of the “Smart Photonics” junior research group at Leibniz IPHT, explains. “By leveraging the unique physical properties of light, this system will enable the rapid and efficient processing of vast amounts of data in the future.”

Delving into the mechanics reveals how information transmission occurs through the mixing of light frequencies: Data—whether pixel values from images or frequency components of an audio track—are encoded onto the color channels of ultrashort light pulses. These pulses carry the information through the fiber, undergoing various combinations, amplifications, or attenuations. The emergence of new color combinations at the fiber’s output enables the prediction of data types or contexts. For example, specific color channels can indicate visible objects in images or signs of illness in a voice.

A prime example of machine learning is identifying different numbers from thousands of handwritten characters. Mario Chemnitz, Bennet Fischer, and their colleagues from the Institut National de la Recherche Scientifique (INRS) in Québec utilized their technique to encode images of handwritten digits onto light signals and classify them via the optical fiber. The alteration in color composition at the fiber’s end forms a unique color spectrum—a “fingerprint” for each digit. Following training, the system can analyze and recognize new handwritten digits with significantly reduced energy consumption.
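Here’s a rough sketch of that encode-mix-classify pipeline as I understand it; this is a toy of my own devising (the pairwise products standing in for frequency mixing are a crude proxy, and the data are placeholders), not the team’s actual model. The point it illustrates: the fiber supplies a fixed nonlinear feature map “for free,” and only a simple linear readout of the output spectrum needs training,

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_classes, n_samples = 32, 10, 200

# Placeholder data: real inputs would be handwritten-digit images whose
# pixel values are encoded onto the pulse's color channels.
X = rng.uniform(size=(n_samples, n_channels))   # channel amplitudes
y = rng.integers(n_classes, size=n_samples)     # placeholder labels

def fiber_output(amplitudes):
    # Stand-in for the fiber: pairwise products mimic frequency mixing.
    # The physical mixing is far richer; the point is that the fiber
    # supplies this nonlinear feature map without any computation.
    return np.outer(amplitudes, amplitudes).ravel()

# Only a simple linear readout of the output "spectrum" is trained.
Z = np.array([fiber_output(x) for x in X])
Y = np.eye(n_classes)[y]                        # one-hot targets
W, *_ = np.linalg.lstsq(Z, Y, rcond=None)
pred = (Z @ W).argmax(axis=1)                   # predicted digit classes
```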

System recognizes COVID-19 from voice samples

“In simpler terms, pixel values are converted into varying intensities of primary colors—more red or less blue, for instance,” Mario Chemnitz details. “Within the fiber, these primary colors blend to create the full spectrum of the rainbow. The shade of our mixed purple, for example, reveals much about the data processed by our system.”

The team has also successfully applied this method in a pilot study to diagnose COVID-19 infections using voice samples, achieving a detection rate that surpasses the best digital systems to date.

“We are the first to demonstrate that such a vibrant interplay of light waves in optical fibers can directly classify complex information without any additional intelligent software,” Mario Chemnitz states.

Since December 2023, Mario Chemnitz has held the position of Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena. Following his return from INRS in Canada in 2022, where he served as a postdoc, Chemnitz has been leading an international team at Leibniz IPHT in Jena. With Nexus funding support from the Carl Zeiss Foundation, their research focuses on exploring the potentials of non-linear optics. Their goal is to develop computer-free intelligent sensor systems and microscopes, as well as techniques for green computing.

Here’s a link to and a citation for the paper,

Neuromorphic Computing via Fission-based Broadband Frequency Generation by Bennet Fischer, Mario Chemnitz, Yi Zhu, Nicolas Perron, Piotr Roztocki, Benjamin MacLellan, Luigi Di Lauro, A. Aadhi, Cristina Rimoldi, Tiago H. Falk, Roberto Morandotti. Advanced Science Volume 10, Issue 35 December 15, 2023 2303835 DOI: https://doi.org/10.1002/advs.202303835. First published: 02 October 2023

This paper is open access.

July 2024: Neural networks made of light

A July 12, 2024 news item on ScienceDaily announces research from another German team,

Scientists propose a new way of implementing a neural network with an optical system which could make machine learning more sustainable in the future. The researchers at the Max Planck Institute for the Science of Light have published their new method in Nature Physics, demonstrating a method much simpler than previous approaches.

A July 12, 2024 Max Planck Institute for the Science of Light press release (also on EurekAlert), which originated the news item, provides more detail about their approach to neuromorphic computing,

Machine learning and artificial intelligence are becoming increasingly widespread with applications ranging from computer vision to text generation, as demonstrated by ChatGPT. However, these complex tasks require increasingly complex neural networks; some with many billion parameters. This rapid growth of neural network size has put the technologies on an unsustainable path due to their exponentially growing energy consumption and training times. For instance, it is estimated that training GPT-3 consumed more than 1,000 MWh of energy, which amounts to the daily electrical energy consumption of a small town. This trend has created a need for faster, more energy- and cost-efficient alternatives, sparking the rapidly developing field of neuromorphic computing. The aim of this field is to replace the neural networks on our digital computers with physical neural networks. These are engineered to perform the required mathematical operations physically in a potentially faster and more energy-efficient way.

Optics and photonics are particularly promising platforms for neuromorphic computing since energy consumption can be kept to a minimum. Computations can be performed in parallel at very high speeds only limited by the speed of light. However, so far, there have been two significant challenges: firstly, realizing the necessary complex mathematical computations requires high laser powers; secondly, there has been no efficient general training method for such physical neural networks.

Both challenges can be overcome with the new method proposed by Clara Wanjura and Florian Marquardt from the Max Planck Institute for the Science of Light in their new article in Nature Physics. “Normally, the data input is imprinted on the light field. However, in our new methods we propose to imprint the input by changing the light transmission,” explains Florian Marquardt, Director at the Institute. In this way, the input signal can be processed in an arbitrary fashion. This is true even though the light field itself behaves in the simplest way possible in which waves interfere without otherwise influencing each other. Therefore, their approach allows one to avoid complicated physical interactions to realize the required mathematical functions which would otherwise require high-power light fields. Evaluating and training this physical neural network would then become very straightforward: “It would really be as simple as sending light through the system and observing the transmitted light. This lets us evaluate the output of the network. At the same time, this allows one to measure all relevant information for the training”, says Clara Wanjura, the first author of the study. The authors demonstrated in simulations that their approach can be used to perform image classification tasks with the same accuracy as digital neural networks.
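Here’s a minimal numerical sketch of the core idea as I understand it from the press release; the coupled-mode toy, names, and parameters below are my own inventions, not the authors’ formulation. The input enters as a change in transmission rather than in the light field, so the detected intensity ends up nonlinear in the input even though the wave physics stays strictly linear,

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                  # coupled optical modes

# Fixed random couplings: the scattering structure itself.
J = rng.normal(scale=0.2, size=(n, n))
J = J + J.T

def transmitted(x, theta):
    # Input x and trainable parameters theta enter as on-site detunings,
    # i.e., they change the transmission, not the light field itself.
    detuning = np.concatenate([x, theta])          # length must equal n
    drive = np.ones(n)                             # constant probe light
    field = np.linalg.solve(np.diag(1.0 + 1j * detuning) + 1j * J, drive)
    return np.abs(field) ** 2                      # detected intensities

# The detected output is nonlinear in x even though the waves are linear.
out = transmitted(x=np.array([0.3, -0.1, 0.7, 0.2, 0.0]), theta=np.zeros(5))
```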

In the future, the authors are planning to collaborate with experimental groups to explore the implementation of their method. Since their proposal significantly relaxes the experimental requirements, it can be applied to many physically very different systems. This opens up new possibilities for neuromorphic devices allowing physical training over a broad range of platforms.

Here’s a link to and a citation for the paper,

Fully nonlinear neuromorphic computing with linear wave scattering by Clara C. Wanjura & Florian Marquardt. Nature Physics (2024) DOI: https://doi.org/10.1038/s41567-024-02534-9 Published: 09 July 2024

This paper is open access.

Proposed platform for brain-inspired computing

Researchers at the University of California at Santa Barbara (UCSB) have proposed a more energy-efficient architecture for neuromorphic (brainlike or brain-inspired) computing, according to a June 25, 2024 news item on ScienceDaily,

Computers have come so far in terms of their power and potential, rivaling and even eclipsing human brains in their ability to store and crunch data, make predictions and communicate. But there is one domain where human brains continue to dominate: energy efficiency.

“The most efficient computers are still approximately four orders of magnitude — that’s 10,000 times — higher in energy requirements compared to the human brain for specific tasks such as image processing and recognition, although they outperform the brain in tasks like mathematical calculations,” said UC Santa Barbara electrical and computer engineering Professor Kaustav Banerjee, a world expert in the realm of nanoelectronics. “Making computers more energy efficient is crucial because the worldwide energy consumption by on-chip electronics stands at #4 in the global rankings of nation-wise energy consumption, and it is increasing exponentially each year, fueled by applications such as artificial intelligence.” Additionally, he said, the problem of energy inefficient computing is particularly pressing in the context of global warming, “highlighting the urgent need to develop more energy-efficient computing technologies.”

….

A June 24, 2024 UCSB news release (also on EurekAlert), which originated the news item, delves further into the subject,

Neuromorphic (NM) computing has emerged as a promising way to bridge the energy efficiency gap. By mimicking the structure and operations of the human brain, where processing occurs in parallel across an array of low power-consuming neurons, it may be possible to approach brain-like energy efficiency. In a paper published in the journal Nature Communications, Banerjee and co-workers Arnab Pal, Zichun Chai, Junkai Jiang and Wei Cao, in collaboration with researchers Vivek De and Mike Davies from Intel Labs propose such an ultra-energy efficient platform, using 2D transition metal dichalcogenide (TMD)-based tunnel-field-effect transistors (TFETs). Their platform, the researchers say, can bring the energy requirements to within two orders of magnitude (about 100 times) with respect to the human brain.

Leakage currents and subthreshold swing

The concept of neuromorphic computing has been around for decades, though the research around it has intensified only relatively recently. Advances in circuitry that enable smaller, denser arrays of transistors, and therefore more processing and functionality for less power consumption are just scratching the surface of what can be done to enable brain-inspired computing. Add to that an appetite generated by its many potential applications, such as AI and the Internet-of-Things, and it’s clear that expanding the options for a hardware platform for neuromorphic computing must be addressed in order to move forward.

Enter the team’s 2D tunnel-transistors. Emerging out of Banerjee’s longstanding research efforts to develop high-performance, low-power consumption transistors to meet the growing hunger for processing without a matching increase in power requirement, these atomically thin, nanoscale transistors are responsive at low voltages, and as the foundation of the researchers’ NM platform, can mimic the highly energy efficient operations of the human brain. In addition to lower off-state currents, the 2D TFETs also have a low subthreshold swing (SS), a parameter that describes how effectively a transistor can switch from off to on. According to Banerjee, a lower SS means a lower operating voltage, and faster and more efficient switching.
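For reference, since the press release doesn’t define it: subthreshold swing is the gate-voltage change needed to change the drain current tenfold, and thermionic injection caps a conventional MOSFET at roughly 60 mV per decade at room temperature; this is standard device physics rather than anything from the paper. TFETs can beat that floor because carriers tunnel through the barrier instead of being thermally injected over it,

```latex
\mathrm{SS} = \left(\frac{\partial \log_{10} I_D}{\partial V_{GS}}\right)^{-1},
\qquad
\mathrm{SS}^{\mathrm{MOSFET}}_{\min} = \ln(10)\,\frac{k_B T}{q} \approx 60~\mathrm{mV/decade} \quad (T = 300~\mathrm{K})
```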

“Neuromorphic computing architectures are designed to operate with very sparse firing circuits,” said lead author Arnab Pal, “meaning they mimic how neurons in the brain fire only when necessary.” In contrast to the more conventional von Neumann architecture of today’s computers, where data is processed sequentially, memory and processing components are separated, and power is drawn continuously throughout the entire operation, an event-driven system such as an NM computer fires up only when there is input to process, and memory and processing are distributed across an array of transistors. Companies like Intel and IBM have developed brain-inspired platforms, deploying billions of interconnected transistors and generating significant energy savings.

However, there’s still room for energy efficiency improvement, according to the researchers.

“In these systems, most of the energy is lost through leakage currents when the transistors are off, rather than during their active state,” Banerjee explained. A ubiquitous phenomenon in the world of electronics, leakage currents are small amounts of electricity that flow through a circuit even when it is in the off state (but still connected to power). According to the paper, current NM chips use traditional metal-oxide-semiconductor field-effect transistors (MOSFETs) which have a high on-state current, but also high off-state leakage. “Since the power efficiency of these chips is constrained by the off-state leakage, our approach — using tunneling transistors with much lower off-state current — can greatly improve power efficiency,” Banerjee said.
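Some back-of-the-envelope arithmetic shows why off-state leakage dominates the budget; the numbers below are placeholder orders of magnitude I picked for illustration, not values from the paper,

```python
# Illustrative only: static (leakage) power scales as P = N * I_off * V_dd.
N_DEVICES = 1e9      # transistors on a hypothetical chip
V_DD = 0.5           # supply voltage in volts

for label, i_off in [("MOSFET-like", 1e-9), ("TFET-like", 1e-12)]:
    p_static = N_DEVICES * i_off * V_DD
    print(f"{label} off-current {i_off:g} A -> {p_static:g} W of standby leakage")
```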

When integrated into a neuromorphic circuit, which emulates the firing and reset of neurons, the TFETs proved themselves more energy efficient than state-of-the-art MOSFETs, particularly the FinFETs (a MOSFET design that incorporates vertical “fins” as a way to provide better control of switching and leakage). TFETs are still in the experimental stage; however, the performance and energy efficiency of neuromorphic circuits based on them make them a promising candidate for the next generation of brain-inspired computing.

According to co-authors Vivek De (Intel Fellow) and Mike Davies (Director of Intel’s Neuromorphic Computing Lab), “Once realized, this platform can bring the energy consumption in chips to within two orders of magnitude with respect to the human brain — not accounting for the interface circuitry and memory storage elements. This represents a significant improvement from what is achievable today.”

Eventually, one can realize three-dimensional versions of these 2D-TFET based neuromorphic circuits to provide even closer emulation of the human brain, added Banerjee, widely recognized as one of the key visionaries behind 3D integrated circuits that are now witnessing wide scale commercial proliferation.

Here’s a link to and a citation for the latest paper,

An ultra energy-efficient hardware platform for neuromorphic computing enabled by 2D-TMD tunnel-FETs by Arnab Pal, Zichun Chai, Junkai Jiang, Wei Cao, Mike Davies, Vivek De & Kaustav Banerjee. Nature Communications volume 15, Article number: 3392 (2024) DOI: https://doi.org/10.1038/s41467-024-46397-3 Published: 22 April 2024

This paper is open access.

Butterfly mating inspires neuromorphic (brainlike) computing

Michael Berger writes about a multisensory approach to neuromorphic computing inspired by butterflies in his February 2, 2024 Nanowerk Spotlight article. Note: Links have been removed,

Artificial intelligence systems have historically struggled to integrate and interpret information from multiple senses the way animals intuitively do. Humans and other species rely on combining sight, sound, touch, taste and smell to better understand their surroundings and make decisions. However, the field of neuromorphic computing has largely focused on processing data from individual senses separately.

This unisensory approach stems in part from the lack of miniaturized hardware able to co-locate different sensing modules and enable in-sensor and near-sensor processing. Recent efforts have targeted fusing visual and tactile data. However, visuochemical integration, which merges visual and chemical information to emulate complex sensory processing such as that seen in nature—for instance, butterflies integrating visual signals with chemical cues for mating decisions—remains relatively unexplored. Smell can potentially alter visual perception, yet current AI leans heavily on visual inputs alone, missing a key aspect of biological cognition.

Now, researchers at Penn State University have developed bio-inspired hardware that embraces heterogeneous integration of nanomaterials to allow the co-location of chemical and visual sensors along with computing elements. This facilitates efficient visuochemical information processing and decision-making, taking cues from the courtship behaviors of a species of tropical butterfly.

In the paper published in Advanced Materials (“A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues”), the researchers describe creating their visuochemical integration platform inspired by Heliconius butterflies. During mating, female butterflies rely on integrating visual signals like wing color from males along with chemical pheromones to select partners. Specialized neurons combine these visual and chemical cues to enable informed mate choice.

To emulate this capability, the team constructed hardware encompassing monolayer molybdenum disulfide (MoS2) memtransistors serving as visual capture and processing components. Meanwhile, graphene chemitransistors functioned as artificial olfactory receptors. Together, these nanomaterials provided the sensing, memory and computing elements necessary for visuochemical integration in a compact architecture.

While mating butterflies served as inspiration, the developed technology has much wider relevance. It represents a significant step toward overcoming the reliance of artificial intelligence on single data modalities. Enabling integration of multiple senses can greatly improve situational understanding and decision-making for autonomous robots, vehicles, monitoring devices and other systems interacting with complex environments.

The work also helps progress neuromorphic computing approaches seeking to emulate biological brains for next-generation ML acceleration, edge deployment and reduced power consumption. In nature, cross-modal learning underpins animals’ adaptable behavior and intelligence emerging from brains organizing sensory inputs into unified percepts. This research provides a blueprint for hardware co-locating sensors and processors to more closely replicate such capabilities.

It’s fascinating to me how many times butterflies inspire science,

Butterfly-inspired visuo-chemical integration. a) A simplified abstraction of visual and chemical stimuli from male butterflies and visuo-chemical integration pathway in female butterflies. b) Butterfly-inspired neuromorphic hardware comprising monolayer MoS2 memtransistor-based visual afferent neuron, graphene-based chemoreceptor neuron, and MoS2 memtransistor-based neuro-mimetic mating circuits. Courtesy: Wiley/Penn State University Researchers

Here’s a link to and a citation for the paper,

A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues by Yikai Zheng, Subir Ghosh, Saptarshi Das. Advanced Materials DOI: https://doi.org/10.1002/adma.202307380 First published: 09 December 2023

This paper is open access.

Striking similarity between memory processing of artificial intelligence (AI) models and hippocampus of the human brain

A December 18, 2023 news item on ScienceDaily shifted my focus from hardware to software when considering memory in brainlike (neuromorphic) computing,

An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) [Korea] revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.

A November 28 (?), 2023 IBS press release (also on EurekAlert but published December 18, 2023), which originated the news item, describes how the team went about its research,

In the race towards developing Artificial General Intelligence (AGI), with influential entities like OpenAI and Google DeepMind leading the way, understanding and replicating human-like intelligence has become an important research interest. Central to these technological advancements is the Transformer model [Figure 1], whose fundamental principles are now being explored in new depth.

The key to powerful AI systems is grasping how they learn and remember information. The team applied principles of human brain learning, specifically concentrating on memory consolidation through the NMDA receptor in the hippocampus, to AI models.

The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation. On the other hand, a magnesium ion acts as a small gatekeeper blocking the door. Only when this ionic gatekeeper steps aside are substances allowed to flow into the cell. This is the process that allows the brain to create and keep memories, and the gatekeeper’s (the magnesium ion) role in the whole process is quite specific.

The team made a fascinating discovery: the Transformer model seems to use a gatekeeping process similar to the brain’s NMDA receptor [see Figure 1]. This revelation led the researchers to investigate if the Transformer’s memory consolidation can be controlled by a mechanism similar to the NMDA receptor’s gating process.

In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in Transformer can be improved by mimicking the NMDA receptor. Just like in the brain, where changing magnesium levels affect memory strength, tweaking the Transformer’s parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model. This breakthrough finding suggests that how AI models learn can be explained with established knowledge in neuroscience.
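Here’s a minimal sketch of what an NMDAR-style gated nonlinearity can look like; the parameterization is my own illustration and the paper’s exact form may differ. The gate term mimics the magnesium block, with a parameter alpha playing the role of Mg2+ concentration; setting alpha = 1 recovers the familiar SiLU/Swish activation found in Transformer feed-forward layers, and sweeping alpha loosely parallels the parameter “tweaking” the researchers describe,

```python
import numpy as np

def nmda_activation(x, alpha=1.0):
    # Gated nonlinearity: the denominator mimics the magnesium block,
    # with alpha (illustrative) playing the role of Mg2+ concentration.
    # alpha = 1 reduces to the SiLU/Swish activation x * sigmoid(x).
    return x / (1.0 + alpha * np.exp(-x))

x = np.linspace(-6.0, 6.0, 7)
for alpha in (0.1, 1.0, 10.0):          # low -> high "magnesium"
    print(alpha, np.round(nmda_activation(x, alpha), 2))
```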

C. Justin LEE, a neuroscientist and director at the institute, said, “This research makes a crucial step in advancing AI and neuroscience. It allows us to delve deeper into the brain’s operating principles and develop more advanced AI systems based on these insights.”

CHA Meeyoung, a data scientist on the team and at KAIST [Korea Advanced Institute of Science and Technology], notes, “The human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”

What sets this study apart is its initiative to incorporate brain-inspired nonlinearity into an AI construct, signifying a significant advancement in simulating human-like memory consolidation. The convergence of human cognitive mechanisms and AI design not only holds promise for creating low-cost, high-performance AI systems but also provides valuable insights into the workings of the brain through AI models.

Fig. 1: (a) Diagram illustrating the ion channel activity in post-synaptic neurons. AMPA receptors are involved in the activation of post-synaptic neurons, while NMDA receptors are blocked by magnesium ions (Mg²⁺) but induce synaptic plasticity through the influx of calcium ions (Ca²⁺) when the post-synaptic neuron is sufficiently activated. (b) Flow diagram representing the computational process within the Transformer AI model. Information is processed sequentially through stages such as feed-forward layers, layer normalization, and self-attention layers. The graph depicting the current-voltage relationship of the NMDA receptors is very similar to the nonlinearity of the feed-forward layer. The input-output graph, based on the concentration of magnesium (α), shows the changes in the nonlinearity of the NMDA receptors. Courtesy: IBS

This research was presented at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) before being published in the proceedings. I found a PDF of the presentation and an early online copy of the paper before locating the paper in the published proceedings.

PDF of presentation: Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity

PDF copy of paper:

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong-Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee.

This paper was made available on OpenReview.net:

OpenReview is a platform for open peer review, open publishing, open access, open discussion, open recommendations, open directory, open API and open source.

It’s not clear to me if this paper is finalized or not and I don’t know if its presence on OpenReview constitutes publication.

Finally, the paper published in the proceedings,

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee. Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

This link will take you to the abstract, access the paper by clicking on the Paper tab.

Brain-inspired (neuromorphic) computing with twisted magnets and a patent for manufacturing permanent magnets without rare earths

I have two news bits both of them concerned with magnets.

Patent for magnets that can be made without rare earths

I’m starting with the patent news first since this is (as the company notes in its news release) a “Landmark Patent Issued for Technology Critically Needed to Combat Chinese Monopoly.”

For those who don’t know, China supplies most of the rare earths used in computers, smart phones, and other devices. On general principles, having a single supplier dominate production of and access to a necessary material for devices that most of us rely on can raise tensions. Plus, you can’t mine for resources forever.

This December 19, 2023 Nanocrystal Technology LP news release heralds an exciting development (for the impatient, further down the page I have highlighted the salient sections),

Nanotechnology Discovery by 2023 Nobel Prize Winner Became Launch Pad to Create Permanent Magnets without Rare Earths from China

NEW YORK, NY, UNITED STATES, December 19, 2023 /EINPresswire.com/ — Integrated Nano-Magnetics Corp, a wholly owned subsidiary of Nanocrystal Technology LP, was awarded a patent for technology built upon a fundamental nanoscience discovery made by Aleksey Yekimov, its former Chief Scientific Officer.

This patent will enable the creation of strong permanent magnets which are critically needed for both industrial and military applications but cannot be manufactured without certain “rare earth” elements available mostly from China.

At a glittering awards ceremony held in Stockholm on December 10, 2023, three scientists, Aleksey Yekimov, Louis Brus (Professor at Columbia University), and Moungi Bawendi (Professor at MIT), were honored with the Nobel Prize in Chemistry for their discovery of the “quantum dot” which is now fueling practical applications in tuning the colors of LEDs, increasing the resolution of TV screens, and improving MRI imaging.

As stated by the Royal Swedish Academy of Sciences, “Quantum dots are … bringing the greatest benefits to humankind. Researchers believe that in the future they could contribute to flexible electronics, tiny sensors, thinner solar cells, and encrypted quantum communications – so we have just started exploring the potential of these tiny particles.”

Aleksey Yekimov worked for over 19 years until his retirement as Chief Scientific Officer of Nanocrystals Technology LP, an R & D company in New York founded by two Indian-American entrepreneurs, Rameshwar Bhargava and Rajan Pillai.

Yekimov, who was born in Russia, had already received the highest scientific honors for his work before he immigrated to USA in 1999. Yekimov was greatly intrigued by Nanocrystal Technology’s research project and chose to join the company as its Chief Scientific Officer.

During its early years, the company worked on efficient light generation by doping host nanoparticles about the same size as a quantum dot with an additional impurity atom. Bhargava came up with the novel idea of incorporating a single impurity atom, a dopant, into a quantum dot sized host, and thus achieve an extraordinary change in the host material’s properties such as inducing strong permanent magnetism in weak, readily available paramagnetic materials. To get a sense of the scale at which nanotechnology works, and as vividly illustrated by the Nobel Foundation, the difference in size between a quantum dot and a soccer ball is about the same as the difference between a soccer ball and planet Earth.

Currently, strong permanent magnets are manufactured from “rare earths” available mostly in China which has established a near monopoly on the supply of rare-earth based strong permanent magnets. Permanent magnets are a fundamental building block for electro-mechanical devices such as motors found in all automobiles including electric vehicles, trucks and tractors, military tanks, wind turbines, aircraft engines, missiles, etc. They are also required for the efficient functioning of audio equipment such as speakers and cell phones as well as certain magnetic storage media.

The existing market for permanent magnets is $28 billion and is projected to reach $50 billion by 2030 in view of the huge increase in usage of electric vehicles. China’s overwhelming dominance in this field has become a matter of great concern to governments of all Western and other industrialized nations. As the Wall St. Journal put it, China now has a “stranglehold” on the economies and security of other countries.

The possibility of making permanent magnets without the use of any rare earths mined in China has intrigued leading physicists and chemists for nearly 30 years. On December 19, 2023, a U.S. patent with the title ‘’Strong Non Rare Earth Permanent Magnets from Double Doped Magnetic Nanoparticles” was granted to Integrated Nano-Magnetics Corp. [emphasis mine] Referring to this major accomplishment Bhargava said, “The pioneering work done by Yekimov, Brus and Bawendi has provided the foundation for us to make other discoveries in nanotechnology which will be of great benefit to the world.”

I was not able to find any company websites. The best I could find is a Nanocrystals Technology LinkedIn webpage and some limited corporate data for Integrated Nano-Magnetics on opencorporates.com.

Twisted magnets and brain-inspired computing

This research offers a pathway to neuromorphic (brainlike) computing with chiral (or twisted) magnets, which, as best as I understand it, do not require rare earths. From a November 13, 2023 news item on ScienceDaily,

A form of brain-inspired computing that exploits the intrinsic physical properties of a material to dramatically reduce energy use is now a step closer to reality, thanks to a new study led by UCL [University College London] and Imperial College London [ICL] researchers.

In the new study, published in the journal Nature Materials, an international team of researchers used chiral (twisted) magnets as their computational medium and found that, by applying an external magnetic field and changing temperature, the physical properties of these materials could be adapted to suit different machine-learning tasks.

A November 9, 2023 UCL press release (also on EurekAlert but published November 13, 2023), which originated the news item, fills in a few more details about the research,

Dr Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), the lead author of the paper, said: “This work brings us a step closer to realising the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains.

“The next step is to identify materials and device architectures that are commercially viable and scalable.”

Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tonnes of carbon dioxide.

Physical reservoir computing is one of several neuromorphic (or brain inspired) approaches that aims to remove the need for distinct memory and processing units, facilitating more efficient ways to process data. In addition to being a more sustainable alternative to conventional computing, physical reservoir computing could be integrated into existing circuitry to provide additional capabilities that are also energy efficient.

In the study, involving researchers in Japan and Germany, the team used a vector network analyser to determine the energy absorption of chiral magnets at different magnetic field strengths and temperatures ranging from -269 °C to room temperature.

They found that different magnetic phases of chiral magnets excelled at different types of computing task. The skyrmion phase, where magnetised particles are swirling in a vortex-like pattern, had a potent memory capacity apt for forecasting tasks. The conical phase, meanwhile, had little memory, but its non-linearity was ideal for transformation tasks and classification – for instance, identifying if an animal is a cat or dog.
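To make “physical reservoir computing” a little more concrete, here’s a generic software sketch; it’s a standard echo-state-style toy of my own, not the authors’ setup, and in the study the reservoir is the chiral magnet itself, read out via microwave absorption. The defining feature is that the reservoir’s dynamics stay fixed and only a linear readout is trained; the memory task below (recalling the input from three steps back) is the kind of job suited to the skyrmion phase’s memory capacity,

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, T = 100, 500

# Stand-in reservoir: a random recurrent map. In the study the reservoir
# is the chiral magnet itself, read out via microwave absorption.
W_in = rng.normal(scale=0.5, size=n_res)
W = rng.normal(scale=1.0 / np.sqrt(n_res), size=(n_res, n_res))

u = rng.uniform(-1.0, 1.0, size=T)            # input stream
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])          # fixed reservoir dynamics
    states[t] = x

# Only the linear readout is trained (ridge regression); the reservoir
# itself is never modified -- that is the point of the approach.
target = np.roll(u, 3)                        # memory task: recall u[t-3]
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
prediction = states @ W_out
```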

Co-author Dr Jack Gartside, of Imperial College London, said: “Our collaborators at UCL in the group of Professor Hidekazu Kurebayashi recently identified a promising set of materials for powering unconventional computing. These materials are special as they can support an especially rich and varied range of magnetic textures. Working with the lead author Dr Oscar Lee, the Imperial College London group [led by Dr Gartside, Kilian Stenning and Professor Will Branford] designed a neuromorphic computing architecture to leverage the complex material properties to match the demands of a diverse set of challenging tasks. This gave great results, and showed how reconfiguring physical phases can directly tailor neuromorphic computing performance.”

The work also involved researchers at the University of Tokyo and Technische Universität München and was supported by the Leverhulme Trust, Engineering and Physical Sciences Research Council (EPSRC), Imperial College London President’s Excellence Fund for Frontier Research, Royal Academy of Engineering, the Japan Science and Technology Agency, Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).

Here’s a link to and a citation for the paper,

Task-adaptive physical reservoir computing by Oscar Lee, Tianyi Wei, Kilian D. Stenning, Jack C. Gartside, Dan Prestwood, Shinichiro Seki, Aisha Aqeel, Kosuke Karube, Naoya Kanazawa, Yasujiro Taguchi, Christian Back, Yoshinori Tokura, Will R. Branford & Hidekazu Kurebayashi. Nature Materials volume 23, pages 79–87 (2024) DOI: https://doi.org/10.1038/s41563-023-01698-8 Published online: 13 November 2023 Issue Date: January 2024

This paper is open access.

Physical neural network based on nanowires can learn and remember ‘on the fly’

A November 1, 2023 news item on Nanowerk announced new work on neuromorphic engineering from Australia,

For the first time, a physical neural network has successfully been shown to learn and remember ‘on the fly’, in a way inspired by and similar to how the brain’s neurons work.

The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.

Key Takeaways
*The nanowire-based system can learn and remember ‘on the fly,’ processing dynamic, streaming data for complex learning and memory tasks.

*This advancement overcomes the challenge of heavy memory and energy usage commonly associated with conventional machine learning models.

*The technology achieved a 93.4% accuracy rate in image recognition tasks, using real-time data from the MNIST database of handwritten digits.

*The findings promise a new direction for creating efficient, low-energy machine intelligence applications, such as real-time sensor data processing.

Nanowire neural network
Caption: Electron microscope image of the nanowire neural network that arranges itself like ‘Pick Up Sticks’. The junctions where the nanowires overlap act in a way similar to how our brain’s synapses operate, responding to electric current. Credit: The University of Sydney

A November 1, 2023 University of Sydney news release (also on EurekAlert), which originated the news item, elaborates on the research,

Published today [November 1, 2023] in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

Lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics, said: “The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data.”

Nanowire networks are made up of tiny wires that are just billionths of a metre in diameter. The wires arrange themselves into patterns reminiscent of the children’s game ‘Pick Up Sticks’, mimicking neural networks, like those in our brains. These networks can be used to perform specific information processing tasks.

Memory and learning tasks are achieved using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap. Known as ‘resistive memory switching’, this function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in our brain.
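Here’s a toy update rule for a single junction to make “resistive memory switching” concrete; the thresholds, rates, and names are my own illustrative choices, not the team’s model,

```python
def junction_update(g, v, dt=1e-3, v_set=1.0, k_pot=5.0, k_dec=0.5,
                    g_min=0.0, g_max=1.0):
    # Toy resistive-switching rule for one nanowire junction: conductance
    # g potentiates when the voltage across the junction exceeds a set
    # threshold and relaxes back otherwise -- loosely like a synapse
    # strengthening with use and forgetting without it.
    if abs(v) > v_set:
        g += k_pot * (g_max - g) * dt     # potentiation toward g_max
    else:
        g -= k_dec * (g - g_min) * dt     # slow decay toward g_min
    return g
```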

In this study, researchers used the network to recognise and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Supervising researcher Professor Zdenka Kuncic said the memory task was similar to remembering a phone number. The network was also used to perform a benchmark image recognition task, accessing images in the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning.

“Our previous research established the ability of nanowire networks to remember simple tasks. This work has extended these findings by showing tasks can be performed using dynamic data accessed online,” she said.

“This is a significant step forward as achieving an online learning capability is challenging when dealing with large amounts of data that can be continuously changing. A standard approach would be to store data in memory and then train a machine learning model using that stored information. But this would chew up too much energy for widespread application.

“Our novel approach allows the nanowire neural network to learn and remember ‘on the fly’, sample by sample, extracting data online, thus avoiding heavy memory and energy usage.”

Mr Zhu said there were other advantages when processing information online.

“If the data is being streamed continuously, such as it would be from a sensor for instance, machine learning that relied on artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimised for,” he said.

In this study, the nanowire neural network displayed a benchmark machine learning capability, scoring 93.4 percent in correctly identifying test images. The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.
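As a schematic contrast with conventional batch training, here’s a sample-by-sample sketch in the spirit of the “on the fly” learning described above; a fixed random projection stands in for the nanowire network’s junction conductances, and the images and labels are placeholders of my own, not MNIST. Each sample updates a simple readout and is then discarded, so nothing gets stored,

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_features, n_classes = 28 * 28, 64, 10

# A fixed random projection stands in for the nanowire network's junction
# conductances; the images and labels below are placeholders, not MNIST.
proj = rng.standard_normal((n_pixels, n_features)) / np.sqrt(n_pixels)
W = np.zeros((n_features, n_classes))
lr = 0.1

def features(image):
    return np.tanh(image @ proj)        # stand-in for the physical readout

for step in range(5000):                # data is streamed, never stored
    image = rng.uniform(size=n_pixels)  # placeholder "digit"
    label = rng.integers(n_classes)     # placeholder label
    z = features(image)
    s = z @ W
    p = np.exp(s - s.max()); p /= p.sum()   # softmax class probabilities
    p[label] -= 1.0                         # cross-entropy gradient
    W -= lr * np.outer(z, p)                # update, then discard sample
```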

Here’s a link to and a citation for the paper,

Online dynamical learning and sequence memory with neuromorphic nanowire networks by Ruomin Zhu, Sam Lilak, Alon Loeffler, Joseph Lizier, Adam Stieg, James Gimzewski & Zdenka Kuncic. Nature Communications volume 14, Article number: 6697 (2023) DOI: https://doi.org/10.1038/s41467-023-42470-5 Published: 01 November 2023

This paper is open access.

You’ll notice a number of this team’s members are also listed in the citation in my June 21, 2023 posting “Learning and remembering like a human brain: nanowire networks” and you’ll see some familiar names in the citation in my June 17, 2020 posting “A tangle of silver nanowires for brain-like action.”

A formal theory for neuromorphic (brainlike) computing hardware needed

This is one of my older pieces, as the information dates back to October 2023, but neuromorphic computing is one of my key interests and I’m particularly interested to see the upsurge in the discussion of hardware, so here goes. From an October 17, 2023 news item on Nanowerk,

There is an intense, worldwide search for novel materials to build computer microchips with that are not based on classic transistors but on much more energy-saving, brain-like components. However, whereas the theoretical basis for classic transistor-based digital computers is solid, there are no real theoretical guidelines for the creation of brain-like computers.

Such a theory would be absolutely necessary to put the efforts that go into engineering new kinds of microchips on solid ground, argues Herbert Jaeger, Professor of Computing in Cognitive Materials at the University of Groningen [Netherlands].

Key Takeaways
Scientists worldwide are searching for new materials to build energy-saving, brain-like computer microchips as classic transistor miniaturization reaches its physical limit.

Theoretical guidelines for brain-like computers are lacking, making it crucial for advancements in the field.

The brain’s versatility and robustness serve as an inspiration, despite limited knowledge about its exact workings.

A recent paper suggests that a theory for non-digital computers should focus on continuous, analogue signals and consider the characteristics of new materials.

Bridging gaps between diverse scientific fields is vital for developing a foundational theory for neuromorphic computing.

An October 17, 2023 University of Groningen press release (also on EurekAlert), which originated the news item, provides more context for this proposal,

Computers have, so far, relied on stable switches that can be off or on, usually transistors. These digital computers are logical machines and their programming is also based on logical reasoning. For decades, computers have become more powerful by further miniaturization of the transistors, but this process is now approaching a physical limit. That is why scientists are working to find new materials to make more versatile switches, which could use more values than just the digital 0 or 1.

Dangerous pitfall

Jaeger is part of the Groningen Cognitive Systems and Materials Center (CogniGron), which aims to develop neuromorphic (i.e. brain-like) computers. CogniGron is bringing together scientists who have very different approaches: experimental materials scientists and theoretical modelers from fields as diverse as mathematics, computer science, and AI. Working closely with materials scientists has given Jaeger a good idea of the challenges that they face when trying to come up with new computational materials, while it has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain is not a logical system. We can reason logically, but that is only a small part of what our brain does. Most of the time, it must work out how to bring a hand to a teacup or wave to a colleague on passing them in a corridor. ‘A lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer,’ explains Jaeger. Furthermore, our brains keep working despite fluctuations in blood pressure, external temperature, or hormone balance, and so on. How is it possible to create a computer that is as versatile and robust? Jaeger is optimistic: ‘The simple answer is: the brain is proof of principle that it can be done.’

Neurons

The brain is, therefore, an inspiration for materials scientists. Jaeger: ‘They might produce something that is made from a few hundred atoms and that will oscillate, or something that will show bursts of activity. And they will say: “That looks like how neurons work, so let’s build a neural network”.’ But they are missing a vital bit of knowledge here. ‘Even neuroscientists don’t know exactly how the brain works. This is where the lack of a theory for neuromorphic computers is problematic. Yet, the field doesn’t appear to see this.’

In a paper published in Nature Communications on 16 August, Jaeger and his colleagues Beatriz Noheda (scientific director of CogniGron) and Wilfred G. van der Wiel (University of Twente) present a sketch of what a theory for non-digital computers might look like. They propose that instead of stable 0/1 switches, the theory should work with continuous, analogue signals. It should also accommodate the wealth of non-standard nanoscale physical effects that the materials scientists are investigating.

Sub-theories

Something else that Jaeger has learned from listening to materials scientists is that devices from these new materials are difficult to construct. Jaeger: ‘If you make a hundred of them, they will not all be identical.’ This is actually very brain-like, as our neurons are not all exactly identical either. Another possible issue is that the devices are often brittle and temperature-sensitive, continues Jaeger. ‘Any theory for neuromorphic computing should take such characteristics into account.’

Importantly, a theory underpinning neuromorphic computing will not be a single theory but will be constructed from many sub-theories (see image below). Jaeger: ‘This is in fact how digital computer theory works as well, it is a layered system of connected sub-theories.’ Creating such a theoretical description of neuromorphic computers will require close collaboration of experimental materials scientists and formal theoretical modellers. Jaeger: ‘Computer scientists must be aware of the physics of all these new materials [emphasis mine] and materials scientists should be aware of the fundamental concepts in computing.’

Blind spots

Bridging this divide between materials science, neuroscience, computing science, and engineering is exactly why CogniGron was founded at the University of Groningen: it brings these different groups together. ‘We all have our blind spots,’ concludes Jaeger. ‘And the biggest gap in our knowledge is a foundational theory for neuromorphic computing. Our paper is a first attempt at pointing out how such a theory could be constructed and how we can create a common language.’

Here’s a link to and a citation for the paper,

Toward a formal theory for computing machines made out of whatever physics offers by Herbert Jaeger, Beatriz Noheda & Wilfred G. van der Wiel. Nature Communications volume 14, Article number: 4911 (2023) DOI: https://doi.org/10.1038/s41467-023-40533-1 Published: 16 August 2023

This paper is open access and there’s a 76 pp. version, “Toward a formal theory for computing machines made out of whatever physics offers: extended version” (emphasis mine) available on arXiv.

Caption: A general theory of physical computing systems would comprise existing theories as special cases. Figure taken from an extended version of the Nature Comm paper on arXiv. Credit: Jaeger et al. / University of Groningen

With regard to new materials for neuromorphic computing, my January 4, 2024 posting highlights a proposed quantum material for this purpose.

A hardware (neuromorphic and quantum) proposal for handling increased AI workload

It’s been a while since I’ve featured anything from Purdue University (Indiana, US). From a November 7, 2023 news item on Nanowerk. Note: Links have been removed,

Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?

Key Takeaways
Current AI technologies are strained by the limitations of silicon-based computing hardware, necessitating new solutions.

Research led by Erica Carlson [Purdue University] suggests that neuromorphic [brainlike] architectures, which replicate the brain’s neurons and synapses, could revolutionize computing efficiency and power.

Vanadium oxides have been identified as a promising material for creating artificial neurons and synapses, crucial for neuromorphic computing.

Innovative non-volatile memory, observed in vanadium oxides, could be the key to more energy-efficient and capable AI hardware.

Future research will explore how to optimize the synaptic behavior of neuromorphic materials by controlling their memory properties.

The colored landscape above shows a transition temperature map of VO2 (pink surface) as measured by optical microscopy. This reveals the unique way that this neuromorphic quantum material [emphasis mine] stores memory like a synapse. Image credit: Erica Carlson, Alexandre Zimmers, and Adobe Stock

An October 13, 2023 Purdue University news release (also on EurekAlert but published November 6, 2023) by Cheryl Pierce, which originated the news item, provides more detail about the work, Note: A link has been removed,

“The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.

A joint effort between physicists from Purdue University, the University of California San Diego (UCSD), and the École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, may have discovered a way to rework the hardware by mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2,” in Advanced Electronic Materials, where the work is featured on the back cover of the October 2023 edition.

New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”

Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. At their ends are small gaps, called synapses, that allow signals to pass from one neuron to the next. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.
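
As an aside, for anyone who’d like a concrete sense of what an ‘artificial neuron’ and ‘synapse’ mean computationally, here’s a minimal leaky integrate-and-fire sketch in Python. It’s my own generic textbook toy, not a model of the team’s vanadium oxide devices; the weight, leak, and threshold numbers are arbitrary.

```python
# A minimal leaky integrate-and-fire neuron with one synapse.
# Generic textbook sketch of the neuron/synapse roles described above,
# NOT a model of the VO2 devices in the paper.

def simulate(spike_train, weight=0.6, leak=0.9, threshold=1.0):
    """Integrate weighted input spikes; fire and reset at threshold."""
    v = 0.0                               # membrane potential
    output = []
    for spike in spike_train:
        v = leak * v + weight * spike     # leaky integration of synaptic input
        if v >= threshold:                # neuron fires...
            output.append(1)
            v = 0.0                       # ...and resets
        else:
            output.append(0)
    return output

print(simulate([1, 0, 1, 1, 0, 1, 1, 1]))  # -> [0, 0, 1, 0, 0, 1, 0, 1]
```

The synaptic weight is the part a neuromorphic material would need to store and adjust; the fire-and-reset step is the neuron-like behavior the quote below says silicon doesn’t easily provide.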

“The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon – ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”

The team relied on a recently discovered type of non-volatile memory which is driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

Alexandre Zimmers, lead experimental scientist from Sorbonne University and École Supérieure de Physique et de Chimie Industrielles, Paris, explains, “Only a few quantum materials are good candidates for future neuromorphic devices, i.e., mimicking artificial synapses and neurons. For the first time, in one of them, vanadium dioxide, we can see optically what is changing in the material as it operates as an artificial synapse. We find that memory accumulates throughout the entirety of the sample, opening new opportunities on how and where to control this property.”

“The microscopic videos show that, surprisingly, the repeated advance and retreat of metal and insulator domains causes memory to be accumulated throughout the entirety of the sample, rather than only at the boundaries of domains,” explains Carlson. “The memory appears as shifts in the local temperature at which the material transitions from insulator to metal upon heating, or from metal to insulator upon cooling. We propose that these changes in the local transition temperature accumulate due to the preferential diffusion of point defects into the metallic domains that are interwoven through the insulator as the material is cycled partway through the transition.”
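
To make that mechanism a little more concrete, here’s a toy Python cartoon (mine, not the researchers’) of how local transition-temperature shifts could pile up under repeated partial cycling. The site count, temperatures, and shift size are invented for illustration, and real samples are far messier.

```python
# A cartoon of ramp-reversal memory: a 1-D strip of material with
# spatially varying local transition temperatures T_c. On each partial
# thermal cycle, sites that turn metallic (local T_c below the peak
# temperature) have their T_c nudged upward, standing in for point
# defects diffusing into the metallic domains. All numbers illustrative.

import random

random.seed(0)
n_sites = 10
t_c = [random.uniform(330.0, 345.0) for _ in range(n_sites)]  # local T_c in K
peak_temperature = 338.0   # partial cycle: heat to here, then cool back down
shift_per_cycle = 0.5      # T_c shift accumulated by metallic sites per cycle

for cycle in range(5):
    for i in range(n_sites):
        if t_c[i] < peak_temperature:   # this site went metallic this cycle
            t_c[i] += shift_per_cycle   # memory: its transition point shifts

print([round(t, 1) for t in t_c])
# Sites that started below 338 K have crept upward by up to 2.5 K,
# i.e., the strip "remembers" how many partial cycles it has seen.
```

The point of the cartoon is that the memory lives in the whole map of local transition temperatures, not just at domain boundaries, which is what the team’s microscopy showed.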

Now that the team has established that vanadium oxides are possible candidates for future neuromorphic devices, they plan to move forward in the next phase of their research.

“Now that we have established a way to see inside this neuromorphic material, we can locally tweak and observe the effects of, for example, ion bombardment on the material’s surface,” explains Zimmers. “This could allow us to guide the electrical current through specific regions in the sample where the memory effect is at its maximum. This has the potential to significantly enhance the synaptic behavior of this neuromorphic material.”

There’s a very interesting 16 min. 52 sec. video embedded in the October 13, 2023 Purdue University news release. In an interview with Dr. Erica Carlson, who hosts The Quantum Age website and video interviews on its YouTube channel, Alexandre Zimmers takes you from an amusing phenomenon observed by 19th-century scientists, through the 20th century, when the nanoscale phenomenon became more interesting as it was exploited (sonar, scanning tunneling microscopes, singing birthday cards, etc.), to the 21st century, where this new information is being integrated into a quantum* material for neuromorphic hardware.

Here’s a link to and a citation for the paper,

Spatially Distributed Ramp Reversal Memory in VO2 by Sayan Basak, Yuxin Sun, Melissa Alzate Banguero, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers. Advanced Electronic Materials Volume 9, Issue 10 October 2023 2300085 DOI: https://doi.org/10.1002/aelm.202300085 First published: 10 July 2023

This paper is open access.

There’s a lot of research into neuromorphic hardware; for more of my posts on the topic, just use ‘neuromorphic hardware’ as your search term.

*’meta’ changed to ‘quantum’ on January 8, 2024.

Neuromorphic transistor with electric double layer

It may be my imagination but it seems as if neuromorphic (brainlike) engineering research has really taken off in the last few years and, even with my lazy approach to finding articles, I’m having trouble keeping up.

This latest work comes from Japan according to an August 4, 2023 news item on Nanowerk, Note: A link has been removed,

A research team consisting of NIMS [National Institute for Materials Science] and the Tokyo University of Science has developed the fastest electric double layer transistor using a highly ion conductive ceramic thin film and a diamond thin film. This transistor may be used to develop energy-efficient, high-speed edge AI devices with a wide range of applications, including future event prediction and pattern recognition/determination in images (including facial recognition), voices and odors.

The research was published in Materials Today Advances (“Ultrafast-switching of an all-solid-state electric double layer transistor with a porous yttria-stabilized zirconia proton conductor and the application to neuromorphic computing”).

A July 7, 2023 National Institute for Materials Science press release (also on EurekAlert but published August 3, 2023), which originated the news item, is arranged as a numbered list of points, the first point being the first paragraph in the news release/item,

2. An electric double layer transistor works as a switch using electrical resistance changes caused by the charge and discharge of an electric double layer formed at the interface between the electrolyte and semiconductor. Because this transistor is able to mimic the electrical response of human cerebral neurons (i.e., acting as a neuromorphic transistor), its use in AI devices is potentially promising. However, existing electric double layer transistors are slow in switching between on and off states. The typical transition time ranges from several hundred microseconds to 10 milliseconds. Development of faster electric double layer transistors is therefore desirable.

3. This research team developed an electric double layer transistor by depositing ceramic (yttria-stabilized porous zirconia thin film) and diamond thin films with a high degree of precision using a pulsed laser, forming an electric double layer at the ceramic/diamond interface. The zirconia thin film is able to adsorb large amounts of water into its nanopores and allow hydrogen ions from the water to readily migrate through it, enabling the electric double layer to be rapidly charged and discharged. This electric double layer effect enables the transistor to operate very quickly. The team actually measured the speed at which the transistor operates by applying pulsed voltage to it and found that it operates 8.5 times faster than existing electric double layer transistors, setting a new world record. The team also confirmed the ability of this transistor to convert input waveforms into many different output waveforms with precision—a prerequisite for transistors to be compatible with neuromorphic AI devices.

4. This research project produced a new ceramic thin film technology capable of rapidly charging and discharging an electric double layer several nanometers in thickness. This is a major achievement in efforts to create practical, high-speed, energy-efficient AI-assisted devices. These devices, in combination with various sensors (e.g., smart watches, surveillance cameras and audio sensors), are expected to offer useful tools in various industries, including medicine, disaster prevention, manufacturing and security.
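
For a rough feel for why a more ion-conductive film means faster switching (point 2 above), here’s a back-of-the-envelope Python sketch of mine that treats the double layer as a simple RC element whose stored charge sets the channel conductance. The tau values and the linear charge-to-conductance assumption are invented for illustration, not taken from the paper.

```python
# Toy RC picture of an electric double layer transistor: a gate pulse
# charges the electrolyte/semiconductor double layer with time constant
# tau; channel conductance is taken as proportional to the stored charge.
# Faster ion transport = smaller tau = faster on/off switching.

def edl_response(gate_pulse, tau=1.0, dt=0.1):
    """First-order (RC-like) charging of the double layer under a gate drive."""
    q = 0.0                              # normalized double-layer charge
    charge = []
    for v_gate in gate_pulse:
        q += (v_gate - q) * (dt / tau)   # ions relax toward the gate drive
        charge.append(q)
    return charge                        # channel conductance ~ proportional to q

pulse = [1.0] * 50 + [0.0] * 50          # gate on for 5 time units, then off
slow = edl_response(pulse, tau=5.0)      # sluggish ion transport
fast = edl_response(pulse, tau=0.5)      # highly ion-conductive film

print(round(slow[49], 2), round(fast[49], 2))  # ~0.64 vs ~1.0 at the end of the pulse
```

Incidentally, it’s this same first-order ‘memory’ of recent inputs that lets such a transistor reshape input waveforms into different output waveforms, the prerequisite for neuromorphic AI devices mentioned in point 3.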

Here’s a link to and a citation for the paper,

Ultrafast-switching of an all-solid-state electric double layer transistor with a porous yttria-stabilized zirconia proton conductor and the application to neuromorphic computing by Makoto Takayanagi, Daiki Nishioka, Takashi Tsuchiya, Masataka Imura, Yasuo Koide, Tohru Higuchi, and Kazuya Terabe. Materials Today Advances [June 16, 2023]; DOI: 10.1016/j.mtadv.2023.10039

This paper is open access.