Category Archives: neuromorphic engineering

A step forward for graphene-based memristors

This research comes from the UK according to an October 26, 2024 news item on phys.org, Note: A link has been removed,

Researchers from Queen Mary University of London and Paragraf Limited have demonstrated a significant step forward in the development of graphene-based memristors, unlocking their potential for use in future computing systems and artificial intelligence (AI).

This innovation, published in ACS Advanced Electronic Materials [this should be ACS Applied Electronic Materials] and featured on the cover of this month’s issue, has been achieved at wafer scale. It begins to pave the way toward scalable production of graphene-based memristors, which are devices crucial for non-volatile memory and artificial neural networks (ANNs).

An October 23, 2024 Queen Mary University of London press release, which originated the news item, explains why memristors are important and gives a little information about the researchers’ solution to a problem with incorporating them into electronics,

Memristors are recognised as potential game-changers in computing, offering the ability to perform analogue computations, store data without power, and mimic the synaptic functions of the human brain. The integration of graphene, a material just one atom thick with the highest electron mobility of any known substance, can enhance these devices dramatically, but graphene has been notoriously difficult to incorporate into electronics in a scalable way until recently. “Graphene electrodes bring clear benefits to memristor technology,” says Dr Zhichao Weng, Research Scientist at School of Physical and Chemical Sciences at Queen Mary. “They offer not only improved endurance but also exciting new applications, such as light-sensitive synapses and optically tuneable memories.”

One of the key challenges in memristor development is device degradation, which graphene can help prevent. By blocking chemical pathways that degrade traditional electrodes, graphene could significantly extend the lifetime and reliability of these devices. Its remarkable transparency, transmitting 98% of light, also opens doors to advanced computing applications, particularly in AI and optoelectronics.

This research is a key step on the way to graphene electronics scalability. Historically, producing high-quality graphene compatible with semiconductor processes has been a significant hurdle. Paragraf’s proprietary Metal-Organic Chemical Vapour Deposition (MOCVD) process, however, has now made it possible to grow monolayer graphene directly on target substrates. This scalable approach is already being used in commercial devices like graphene-based Hall effect sensors and field-effect transistors (GFETs).

“The opportunity for graphene to help in creating next generation computing devices that can combine logic and storage in new ways gives opportunities in solving the energy costs of training large language models in AI,” says John Tingay, CTO at Paragraf. “This latest development with Queen Mary University of London to deliver a memristor proof of concept is an important step in extending graphene’s use in electronics from magnetic and molecular sensors to proving how it could be used in future logic and memory devices.”

The team used a multi-step photolithography process to pattern and integrate the graphene electrodes into memristors, producing reproducible results that point the way to large-scale production. “Our research not only establishes proof of concept but also confirms graphene’s suitability for enhancing memristor performance over other materials,” adds Professor Oliver Fenwick, Professor of Electronic Materials at Queen Mary’s School of Engineering and Materials Science.

This work, part of an Innovate UK Knowledge Transfer Partnership between Queen Mary and Paragraf, is a new milestone in expanding graphene’s role in the semiconductor industry.

Cover of ACS Applied Electronic Materials October issue

Here’s a link to and a citation for the paper,

Memristors with Monolayer Graphene Electrodes Grown Directly on Sapphire Wafers by Zhichao Weng, Robert Wallis, Bryan Wingfield, Paul Evans, Piotr Baginski, Jaspreet Kainth, Andrey E. Nikolaenko, Lok Yi Lee, Joanna Baginska, William P. Gillin, Ivor Guiney, Colin J. Humphreys, Oliver Fenwick. ACS Appl. Electron. Mater. 2024, 6, 10, 7276–7285 DOI: https://doi.org/10.1021/acsaelm.4c01208 Published September 16, 2024

This article appears to be open access.

You can find Paragraf here.

Supercapacitors and memristors

Yes, as this October 23, 2024 Science China Press press release on EurekAlert notes, supercapacitors and memristors are not usually lumped together,

In a groundbreaking development, Professor Xingbin Yan and his team have successfully merged two seemingly disparate research areas: supercapacitors, traditionally used in energy storage, and memristors, integral to neuromorphic computing. Their study introduces a novel phenomenon—the memristive effect in supercapacitors.

“Scientifically, we combine two seemingly disparate research areas: supercapacitors, traditionally used in energy storage, and memristors, integral to neuromorphic computing,” Prof. Xingbin Yan said. “The introduction of memristive behavior in supercapacitors not only enriches the fundamental physics underlying capacitive and memristive components but also extends supercapacitor technology into the realm of artificial synapses. This advancement opens new avenues for developing neuromorphic devices with advanced functionalities, offering a fresh research paradigm for the field.”

In 1971, Leon Chua at UC Berkeley introduced the memristor, proposing it as the fourth fundamental circuit element. Memristors have variable resistance that retains its value after current stops, mimicking neuron behavior, and are considered key for future memory and AI devices. In 2008, HP Labs [R. Stanley Williams and his team] developed nanoscale memristors based on TiO2. However, solid-state devices struggle with simulating chemical synapses. Fluidic memristors are promising due to their biocompatibility and ability to perform various neuromorphic functions. Confining ions in nanoscale channels allows for functionalities like ion diodes and switches, with some systems exhibiting memristive effects.
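For readers who want a concrete picture of that history-dependent resistance, it can be sketched with the linear-drift model HP Labs published for its TiO2 device. The parameter values below are illustrative only and do not describe any device mentioned in this post.

```python
import math

# Minimal sketch of the HP Labs linear-drift memristor model (2008).
# A film of thickness D is split into a doped (low-resistance) region of
# width w and an undoped region; resistance interpolates between R_ON and R_OFF.
R_ON, R_OFF = 100.0, 16000.0   # ohms (illustrative values)
D = 10e-9                      # film thickness, m
MU = 1e-14                     # dopant mobility, m^2 s^-1 V^-1

def simulate(cycles=1, steps=2000, v_amp=1.0, freq=1.0):
    """Drive the device with a sinusoidal voltage; record (V, I, R) samples."""
    w = 0.1 * D                # initial doped-region width
    dt = cycles / (freq * steps)
    trace = []
    for n in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * n * dt)
        x = w / D
        r = R_ON * x + R_OFF * (1 - x)   # state-dependent resistance
        i = v / r
        w += MU * (R_ON / D) * i * dt    # linear dopant drift
        w = min(max(w, 0.0), D)          # keep the state inside the film
        trace.append((v, i, r))
    return trace

rs = [r for _, _, r in simulate()]
# Resistance tracks the integral of past current and does not reset when
# the drive returns to zero: that history dependence is the "memory".
print(min(rs), max(rs))
```

The key line is the drift update: resistance tomorrow depends on all the current that flowed today, which is exactly the property the press release describes.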

In 2021, Bocquet [Lydéric Bocquet, École Normale Supérieure, Paris] et al. predicted that two-dimensional nanoscopic channels could achieve ionic memory functions. Their simulations showed strong nonlinear transport effects in these channels. They confined electrolytes to a monolayer and observed that salts could form tightly bound pairs. Following this, Bocquet’s team created nanoscale fluidic devices with salt solutions, showing hysteresis effects and variable memory times. Similarly, Mao et al. found comparable results with polymer electrolyte brushes, demonstrating hysteresis and frequency-dependent I-V curves. Both studies highlight advancements in controlling ions in nanofluidic devices, mimicking biological systems.

Supercapacitors are known for their higher power density, rapid response, and long lifespan, making them essential for applications in electronics, aerospace, transportation, and smart grids. Recently, a novel type of capacitive ionic device, called supercapacitor-diodes (CAPodes), has been introduced. These devices enable controlled and selective unidirectional ion transport, enhancing the functionality of supercapacitors.

In supercapacitors, charge storage occurs through ion adsorption or rapid redox reactions at the electrode surface, a principle similar to that in fluidic memristors. Inspired by CAPodes, the innovative idea is to explore whether a supercapacitor can be designed with nano-ion channels within the electrode material to achieve memory performance similar to that of fluidic memristors. If feasible, this would not only enhance traditional energy storage but also enable hysteresis in the transport and redistribution of electrolyte ions under varying electric fields.

In this design, the nanochannels of the ZIF-7 electrode in an aqueous supercapacitor allow for the enrichment and dissipation of anionic species (OH−) under varying voltage regimes. This results in a hysteresis effect in ion conductivity, which imparts memristive behavior to the supercapacitor. Consequently, the CAPistor combines the programmable resistance and memory functions of an ionic memristor with the energy storage capabilities of a supercapacitor. This integration opens up new possibilities for extending supercapacitors’ traditional applications into advanced fields such as biomimetic nanofluidic ionics and neuromorphic computing.

Here’s a link to and a citation for the paper,

Constructing a supercapacitor-memristor through non-linear ion transport in MOF nanochannels by Pei Tang, Pengwei Jing, Zhiyuan Luo, Kekang Liu, Xiaoxi Zhao, Yining Lao, Qianqian Yao, Chuyi Zhong, Qingfeng Fu, Jian Zhu, Yanghui Liu, Qingyun Dou, Xingbin Yan. National Science Review, Volume 11, Issue 10, October 2024, nwae322, DOI: https://doi.org/10.1093/nsr/nwae322 Published: 11 September 2024

This paper is open access.

Freeing up some ‘thinking space’ for robots

By mimicking how parts of the human body work, scientists may have found a way to give robots more complex instructions, from an October 9, 2024 news item on ScienceDaily,

Engineers have worked out how to give robots complex instructions without electricity for the first time, which could free up more space in the robotic ‘brain’ for them to ‘think’.

Mimicking how some parts of the human body work, researchers from King’s College London have transmitted a series of commands to devices with a new kind of compact circuit, using variations in pressure from a fluid inside it.

They say this world first opens up the possibility of a new generation of robots, whose bodies could operate independently of their built-in control centre, with this space potentially being used instead for more complex AI powered software.


An October 9, 2024 King’s College London press release (also on EurekAlert but published October 8, 2024), which originated the news item, explains how the researchers made this new technique possible, Note: A link has been removed,

“Delegating tasks to different parts of the body frees up computational space for robots to ‘think,’ allowing future generations of robots to be more aware of their social context or even more dexterous. This opens the door for a new kind of robotics in places like social care and manufacturing,” said Dr Antonio Forte, Senior Lecturer in Engineering at King’s College London and senior author of the study. 

The findings, published in Advanced Science, could also enable the creation of robots able to operate in situations where electricity-powered devices cannot work, such as exploration of irradiated areas like Chernobyl, where radiation destroys circuits, and in electricity-sensitive environments like MRI rooms.

The researchers also hope that these robots could eventually be used in low-income countries which do not have reliable access to electricity. 

Dr Forte said: “Put simply, robots are split into two parts: the brain and the body. An AI brain can help run the traffic system of a city, but many robots still struggle to open a door – why is that?  

“Software has advanced rapidly in recent years, but hardware has not kept up. By creating a hardware system independent from the software running it, we can offload a lot of the computational load onto the hardware, in the same way your brain doesn’t need to tell your heart to beat.”  

Currently, all robots rely on electricity and computer chips to function. A robotic ‘brain’ of algorithms and software translates information to the body or hardware through an encoder, which then performs an action.  

In ‘soft robotics,’ a field which creates devices like robotic muscles out of soft materials, this is a particular issue, as it introduces hard electronic encoders and puts strain on the software needed for the material to act in a complex way, e.g. grabbing a door handle.

To circumvent this, the team developed a reconfigurable circuit with an adjustable valve to be placed within a robot’s hardware. This valve acts like a transistor in a normal circuit and engineers can send signals directly to hardware using pressure, mimicking binary code, allowing the robot to perform complex manoeuvres without the need for electricity or instruction from the central brain. This allows for a greater level of control than current fluid-based circuits.  
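The press release gives no circuit schematic, but the general idea, a valve that switches flow when a control pressure crosses a threshold, much as a transistor switches current at a gate-voltage threshold, can be caricatured in a few lines. Everything below (the threshold value, the two valve types, the three-bit command) is a made-up illustration, not the King's College design.

```python
# Toy model of pressure-threshold valves acting like transistors.
# This is NOT the King's College circuit; it only illustrates how threshold
# valves can carry binary commands and implement logic without electricity.

THRESHOLD = 0.5   # normalised control pressure at which a valve switches

def valve_nc(control: float) -> int:
    """Normally-closed valve: passes flow (1) when control pressure is high."""
    return 1 if control > THRESHOLD else 0

def valve_no(control: float) -> int:
    """Normally-open valve: blocks flow (0) when control pressure is high."""
    return 0 if control > THRESHOLD else 1

def pressure_and(p1, p2):
    # Two normally-closed valves in series: flow passes only if both are high.
    return valve_nc(p1) & valve_nc(p2)

def pressure_not(p):
    # A normally-open valve inverts its control signal.
    return valve_no(p)

# Encode a three-bit command as a train of pressure pulses (high 0.9, low 0.1).
pulses = [0.9, 0.1, 0.9]
bits = [valve_nc(p) for p in pulses]
print(bits)                                # decoded binary command: [1, 0, 1]
print(pressure_and(0.9, 0.9), pressure_not(0.9))
```

Chaining such valves is what lets the hardware respond to pressure "binary code" without consulting the central controller.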

By offloading the work of the software onto the hardware, the new circuit frees up computational space for future robotic systems to be more adaptive, complex, and useful. 

As a next step, the researchers now hope to scale up their circuits from experimental hoppers and pipettes and embed them in larger robots, from crawlers used to monitor power plants to wheeled robots with entirely soft engines. 

Mostafa Mousa, Post-graduate Researcher at King’s College London and author, said: “Ultimately, without investment in embodied intelligence, robots will plateau. Soon, if we do not offload the computational load that modern day robots take on, algorithmic improvements will have little impact on their performance. Our work is just a first step on this path, but the future holds smarter robots with smarter bodies.”

Here’s a link to and a citation for the paper,

Frequency-Controlled Fluidic Oscillators for Soft Robots by Mostafa Mousa, Ashkan Rezanejad, Benjamin Gorissen, Antonio E. Forte. Advanced Science DOI: https://doi.org/10.1002/advs.202408879 First published online: 08 October 2024

This paper is open access.

Neuromorphic wires (inspired by nerve cells) amplify their own signals—no amplifiers needed

Katherine Bourzac’s September 16, 2024 article for the IEEE (Institute for Electrical and Electronics Engineers) Spectrum magazine provides an accessible (relatively speaking) description of a possible breakthrough for neuromorphic computing, Note: Links have been removed,

In electrical engineering, “we just take it for granted that the signal decays” as it travels, says Timothy Brown, a postdoc in materials physics at Sandia National Lab who was part of the group of researchers who made the self-amplifying device. Even the best wires and chip interconnects put up resistance to the flow of electrons, degrading signal quality over even relatively small distances. This constrains chip designs—lossy interconnects are broken up into ever smaller lengths, and signals are bolstered by buffers and drivers. A 1-square-centimeter chip has about 10,000 repeaters to drive signals, estimates R. Stanley Williams, a professor of computer engineering at Texas A&M University.

Williams is one of the pioneers of neuromorphic computing, which takes inspiration from the nervous system. Axons, the electrical cables that carry signals from the body of a nerve cell to synapses where they connect with projections from other cells, are made up of electrically resistant materials. Yet they can carry high fidelity signals over long distances. The longest axons in the human body are about 1 meter, running from the base of the spine to the feet. Blue whales are thought to have 30 m long axons stretching to the tips of their tails. If something bites the whale’s tail, it will react rapidly. Even from 30 meters away, “the pulses arrive perfectly,” says Williams. “That’s something that doesn’t exist in electrical engineering.”

That’s because axons are active transmission lines: they provide gain to the signal along their length. Williams says he started pondering how to mimic this in an inorganic system 12 years ago. A grant from the US Department of Energy enabled him to build a team with the necessary resources to make it happen. The team included Williams, Brown, and Suhas Kumar, a materials physicist at Sandia.

Axons are coated with an insulating layer called the myelin sheath. Where there are gaps in the sheath, positively charged sodium and potassium ions can move in and out of the axon, changing the voltage across the cell membrane and pumping in energy in the process. Some of that energy gets taken up by the electrical signal, amplifying it.
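A toy calculation shows why that distributed gain matters. In the sketch below, a passive line loses a fixed fraction of amplitude per segment, while an "active" line regenerates the pulse at regular intervals, loosely analogous to the gaps in the myelin sheath; the loss and gain figures are invented for illustration.

```python
# Toy comparison: a passive lossy line attenuates a pulse exponentially,
# while an "active" line injects gain at intervals (a rough analogue of an
# axon's regeneration sites), so the pulse arrives at full strength.
SEGMENTS = 100
LOSS = 0.05          # fractional amplitude lost per segment (invented)

def transmit(amplitude, gain_every=None, gain=None):
    for seg in range(1, SEGMENTS + 1):
        amplitude *= (1 - LOSS)
        if gain_every and seg % gain_every == 0:
            amplitude = min(amplitude * gain, 1.0)   # regeneration, capped
    return amplitude

passive = transmit(1.0)
active = transmit(1.0, gain_every=10, gain=2.0)
print(passive)   # far below 1: the signal has decayed away
print(active)    # restored to full amplitude by the distributed gain
```

The passive case is the "we just take it for granted that the signal decays" regime; the active case is what axons, and the team's device, achieve.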

Williams and his team wanted to mimic this in a simple structure. They didn’t try to mimic all the physical structures in axons—instead, they sought guidance in a mathematical description of how they amplify signals. Axons operate in a mode called the “edge of chaos,” which combines stable and unstable qualities. This may seem inherently contradictory. Brown likens this kind of system to a saddle that’s curved with two dips. The saddle curves up towards the front and the back, keeping you stable as you rock back and forth. But if you get jostled from side to side, you’re more likely to fall off. When you’re riding in the saddle, you’re operating at the edge of chaos, in a semistable state. In the abstract space of electrical engineering, that jostling is equivalent to wiggles in current and voltage.
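Brown's saddle picture corresponds to a fixed point whose linearization has one stable and one unstable direction. The toy system below (a generic saddle, not the team's actual device equations) shows both behaviours: a nudge along the stable axis dies away, while a much smaller nudge along the unstable axis grows exponentially.

```python
# A saddle fixed point: stable along x (dx/dt = -x), unstable along y
# (dy/dt = +y). "Riding the saddle" means staying on the stable axis; any
# sideways perturbation grows. Edge-of-chaos systems sit near such
# semistable states, which is where wiggles in current and voltage can
# be amplified rather than damped.

def evolve(x0, y0, dt=0.01, steps=500):
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + (-x) * dt, y + y * dt   # forward-Euler integration
    return x, y

x1, y1 = evolve(1.0, 0.0)      # perturbed only along the stable direction
x2, y2 = evolve(0.0, 1e-3)     # tiny perturbation along the unstable direction
print(x1, y1)   # the rocking motion settles back toward zero
print(x2, y2)   # the sideways jostle has grown by orders of magnitude
```

The contradiction Brown mentions is resolved in the eigenvalues: the same point is attracting in one direction and repelling in another.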

There’s a long way to go from this first experimental demonstration to a reimagining of computer chip interconnects. The team is providing samples for other researchers [emphasis mine] who want to verify their measurements. And they’re trying other materials to see how well they do—LaCoO3 [lanthanum cobalt oxide] is only the first one they’ve tested.

Williams hopes this research will show electrical engineers new ideas about how to move forward. “The dream is to redesign chips,” he says. Electrical engineers have long known about nonlinear dynamics, but have hardly ever taken advantage of them, Williams says. “This requires thinking about things and doing measurements differently than they have been done for 50 years,” he says.

If you have the time, please read Bourzac’s September 16, 2024 article in its entirety. For those who want the technical nitty gritty, here’s a link to and a citation for the paper,

Axon-like active signal transmission by Timothy D. Brown, Alan Zhang, Frederick U. Nitta, Elliot D. Grant, Jenny L. Chong, Jacklyn Zhu, Sritharini Radhakrishnan, Mahnaz Islam, Elliot J. Fuller, A. Alec Talin, Patrick J. Shamberger, Eric Pop, R. Stanley Williams & Suhas Kumar. Nature volume 633, pages 804–810 (2024) DOI: https://doi.org/10.1038/s41586-024-07921 Published online: 11 September 2024 Issue Date: 26 September 2024

This paper is open access.

Huge leap forward in computing efficiency with Indian Institute of Science’s (IISc) neuromorphic (brainlike) platform

This is pretty thrilling news in a September 11, 2024 Indian Institute of Science (IISc) press release (also on EurekAlert), Note: A link has been removed,

In a landmark advancement, researchers at the Indian Institute of Science (IISc) have developed a brain-inspired analog computing platform capable of storing and processing data in an astonishing 16,500 conductance states within a molecular film. Published today in the journal Nature, this breakthrough represents a huge step forward over traditional digital computers in which data storage and processing are limited to just two states. 

Such a platform could potentially bring complex AI tasks, like training Large Language Models (LLMs), to personal devices like laptops and smartphones, thus taking us closer to democratising the development of AI tools. These developments are currently restricted to resource-heavy data centres, due to a lack of energy-efficient hardware. With silicon electronics nearing saturation, designing brain-inspired accelerators that can work alongside silicon chips to deliver faster, more efficient AI is also becoming crucial.

“Neuromorphic computing has had its fair share of unsolved challenges for over a decade,” explains Sreetosh Goswami, Assistant Professor at the Centre for Nano Science and Engineering (CeNSE), IISc, who led the research team. “With this discovery, we have almost nailed the perfect system – a rare feat.”

The fundamental operation underlying most AI algorithms is quite basic – matrix multiplication, a concept taught in high school maths. But in digital computers, these calculations hog a lot of energy. The platform developed by the IISc team drastically cuts down both the time and energy involved, making these calculations a lot faster and easier.
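The saving comes from letting physics do the arithmetic: in an analog array, Ohm's law performs each multiplication (current = conductance × voltage) and Kirchhoff's current law performs the additions, so a whole matrix-vector product happens in one step. The sketch below emulates that principle in software with made-up numbers; it is not a model of the IISc platform itself.

```python
# Sketch of how an analog crossbar performs matrix multiplication.
# Each weight is stored as a conductance G[i][j]; inputs are applied as
# voltages V[i]; Ohm's law multiplies (I = G*V) and Kirchhoff's current law
# sums each column. The hardware does every column simultaneously; here we
# merely emulate the result. All values are illustrative.

G = [[0.2, 0.5, 0.1],   # conductances (siemens) = stored weights
     [0.4, 0.3, 0.6]]
V = [1.0, 0.5]          # input voltages = activations

def crossbar_mvm(G, V):
    n_rows, n_cols = len(G), len(G[0])
    # Column current = sum over rows of G[i][j] * V[i].
    return [sum(G[i][j] * V[i] for i in range(n_rows)) for j in range(n_cols)]

I = crossbar_mvm(G, V)
print(I)   # output currents = the matrix-vector product V @ G
```

In a digital computer each of those multiply-accumulates costs energy and memory traffic; in the analog array they are a consequence of circuit laws.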

The molecular system at the heart of the platform was designed by Sreebrata Goswami, Visiting Professor at CeNSE. As molecules and ions wiggle and move within a material film, they create countless unique memory states, many of which have been inaccessible so far. Most digital devices are only able to access two states (high and low conductance), without being able to tap into the infinite number of intermediate states possible.
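A quick calculation shows why 16,500 states matter. Storing an analog weight on a device rounds it to the nearest available conductance level; with two levels the rounding error can be enormous, while with 16,500 it is tiny. The weight value below is arbitrary.

```python
# Toy illustration of weight precision vs. number of conductance states.
# A device with N levels can only hold one of N evenly spaced values in
# [0, 1]; storing an analog weight rounds it to the nearest level.

def quantise(w, levels):
    step = 1.0 / (levels - 1)
    return round(w / step) * step

weight = 0.637                        # an arbitrary analog weight
binary = quantise(weight, 2)          # a conventional two-state cell
molecular = quantise(weight, 16500)   # the number of states reported here
print(abs(weight - binary))           # large error: the weight snaps to 1.0
print(abs(weight - molecular))        # error smaller than one ten-thousandth
```

That precision gap is why many-state analog memory is attractive for storing neural-network weights directly in hardware.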

By using precisely timed voltage pulses, the IISc team found a way to effectively trace a much larger number of molecular movements, and map each of these to a distinct electrical signal, forming an extensive “molecular diary” of different states. “This project brought together the precision of electrical engineering with the creativity of chemistry, letting us control molecular kinetics very precisely inside an electronic circuit powered by nanosecond voltage pulses,” explains Sreebrata Goswami.

Tapping into these tiny molecular changes allowed the team to create a highly precise and efficient neuromorphic accelerator, which can store and process data within the same location, similar to the human brain. Such accelerators can be seamlessly integrated with silicon circuits to boost their performance and energy efficiency. 

A key challenge that the team faced was characterising the various conductance states, which proved impossible using existing equipment. The team designed a custom circuit board that could measure voltages as tiny as a millionth of a volt, to pinpoint these individual states with unprecedented accuracy.

The team also turned this scientific discovery into a technological feat. They were able to recreate NASA’s iconic “Pillars of Creation” image from the James Webb Space Telescope data – originally created by a supercomputer – using just a tabletop computer. They were also able to do this at a fraction of the time and energy that traditional computers would need.

The team includes several students and research fellows at IISc. Deepak Sharma performed the circuit and system design and electrical characterisation, Santi Prasad Rath handled synthesis and fabrication, Bidyabhusan Kundu tackled the mathematical modelling, and Harivignesh S crafted bio-inspired neuronal response behaviour. The team also collaborated with Stanley Williams [also known as R. Stanley Williams], Professor at Texas A&M University and Damien Thompson, Professor at the University of Limerick. 

The researchers believe that this breakthrough could be one of India’s biggest leaps in AI hardware, putting the country on the map of global technology innovation. Navakanta Bhat, Professor at CeNSE and an expert in silicon electronics led the circuit and system design in this project. “What stands out is how we have transformed complex physics and chemistry understanding into groundbreaking technology for AI hardware,” he explains. “In the context of the India Semiconductor Mission, this development could be a game-changer, revolutionising industrial, consumer and strategic applications. The national importance of such research cannot be overstated.” 

With support from the Ministry of Electronics and Information Technology, the IISc team is now focused on developing a fully indigenous integrated neuromorphic chip. “This is a completely home-grown effort, from materials to circuits and systems,” emphasises Sreetosh Goswami. “We are well on our way to translating this technology into a system-on-a-chip.”  

Caption: Using their AI accelerator, the team recreated NASA’s iconic “Pillars of Creation” image from the James Webb Space Telescope data on a simple tabletop computer – achieving this in a fraction of the time and energy required by traditional systems. Credit: CeNSE, IISc

Here’s a link to and a citation for the paper,

Linear symmetric self-selecting 14-bit kinetic molecular memristors by Deepak Sharma, Santi Prasad Rath, Bidyabhusan Kundu, Anil Korkmaz, Harivignesh S, Damien Thompson, Navakanta Bhat, Sreebrata Goswami, R. Stanley Williams & Sreetosh Goswami. Nature volume 633, pages 560–566 (2024) DOI: https://doi.org/10.1038/s41586-024-07902-2 Published online: 11 September 2024 Issue Date: 19 September 2024

This paper is behind a paywall.

Brain-inspired navigation technology for robots

An August 22, 2024 Beijing Institute of Technology Press Co. press release on EurekAlert announces the publication of a paper reviewing the current state of research into brain-inspired (neuromorphic) navigation technology,

In the ever-evolving field of robotics, a groundbreaking approach has emerged, revolutionizing how robots perceive, navigate, and interact with their environments. This new frontier, known as brain-inspired navigation technology, integrates insights from neuroscience into robotics, offering enhanced capabilities and efficiency.

Brain-inspired navigation technologies are not just a mere improvement over traditional methods; they represent a paradigm shift. By mimicking the neural mechanisms of animals, these technologies provide robots with the ability to navigate through complex and unknown terrains with unprecedented accuracy and adaptability.

At the heart of this technology lies the concept of spatial cognition, which is central to how animals, including humans, navigate their environments. Spatial cognition involves the brain’s ability to organize and interpret spatial data for navigation and memory. Robots equipped with brain-inspired navigation systems utilize a multi-layered network model that integrates sensory data from multiple sources. This model allows the robot to create a ‘cognitive map’ of its surroundings, much like the neural maps created by the hippocampus in the human brain.
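The hippocampal "cognitive map" is often introduced through place cells: neurons that fire most strongly when the animal is at a particular location. The textbook-style toy below, which is not taken from the reviewed paper, encodes a position with a handful of Gaussian-tuned cells and decodes it back with a population-vector average.

```python
import math

# Toy place-cell population on a 1 m track: each cell fires maximally at its
# preferred location, with Gaussian falloff. Decoding the population activity
# recovers position - a minimal one-dimensional "cognitive map".
CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]   # preferred locations (m), illustrative
SIGMA = 0.15                             # tuning width (m), illustrative

def activity(position):
    return [math.exp(-((position - c) ** 2) / (2 * SIGMA ** 2))
            for c in CENTERS]

def decode(rates):
    # Population-vector decoding: activity-weighted average of the centres.
    return sum(r * c for r, c in zip(rates, CENTERS)) / sum(rates)

pos = 0.6
estimate = decode(activity(pos))
print(estimate)   # close to the true position of 0.6
```

Brain-inspired navigation systems build richer, multi-layered versions of this idea, but the encode-then-decode loop is the core of the cognitive map.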

One of the significant advantages of brain-inspired navigation is its robustness in challenging environments. Traditional navigation systems often struggle with dynamic and unpredictable settings, where the reliance on pre-mapped routes and landmarks can lead to failures. In contrast, brain-inspired systems continuously learn and adapt, improving their navigational strategies over time. This capability is particularly beneficial in environments like disaster zones or extraterrestrial surfaces, where prior mapping is either impossible or impractical.

Moreover, these systems significantly reduce energy consumption and computational needs. By focusing only on essential data and employing efficient neural network models, robots can operate longer and perform more complex tasks without the need for frequent recharging or maintenance.

The technology’s applications are vast and varied. For instance, autonomous vehicles equipped with brain-inspired systems could navigate more safely and efficiently, reacting in real-time to sudden changes in traffic conditions or road layouts. Similarly, drones used for delivery services could plan their routes more effectively, avoiding obstacles and optimizing delivery times.

Despite its promising potential, the development of brain-inspired navigation technology faces several challenges. Integrating biological principles into mechanical systems is inherently complex, requiring multidisciplinary efforts from fields such as neuroscience, cognitive science, robotics, and artificial intelligence. Moreover, these systems must be scalable and versatile enough to be customized for different types of robotic platforms and applications.

As researchers continue to unravel the mysteries of the brain’s navigational capabilities, the future of robotics looks increasingly intertwined with the principles of neuroscience. The collaboration across disciplines promises not only to advance our understanding of the brain but also to pave the way for a new generation of intelligent robots. These robots will not only assist in mundane tasks but also perform critical roles in search and rescue operations, planetary exploration, and much more.

In conclusion, brain-inspired navigation technology represents a significant leap forward in robotics, merging the abstract with the applied, the biological with the mechanical, and the theoretical with the practical. As this technology continues to evolve, it will undoubtedly open new horizons for robotic applications, making machines an even more integral part of our daily lives and work.

Here’s a link to and a citation for the paper,

A Review of Brain-Inspired Cognition and Navigation Technology for Mobile Robots by Yanan Bai, Shiliang Shao, Jin Zhang, Xianzhe Zhao, Chuxi Fang, Ting Wang, Yongliang Wang, and Hai Zhao. Cyborg and Bionic Systems 27 Jun 2024 Vol 5 Article ID: 0128 DOI: 10.34133/cbsystems.0128

This paper is open access.

About the journal publisher and the Science Partner Journals program

Cyborg and Bionic Systems is published by the American Association for the Advancement of Science (AAAS) and is part of an open access publishing project known as the Science Partner Journals program, from the Program Overview webpage of science.org,

The Science Partner Journal (SPJ) program was launched in late 2017 by the American Association for the Advancement of Science (AAAS), the nonprofit publisher of the Science family of journals.

The program features high-quality, online-only, Open Access publications produced in collaboration with international research institutions, foundations, funders, and societies. Through these collaborations, AAAS furthers its mission to communicate science broadly and for the benefit of all people by providing top-tier international research organizations with the technology, visibility, and publishing expertise that AAAS is uniquely positioned to offer as the world’s largest general science membership society.

Organizations participating in the SPJ program are editorially independent and responsible for the content published in each journal. To oversee the publication process, each organization appoints editors committed to best practices in peer review and author service. It is the responsibility of the staff of each SPJ title to establish and execute all aspects of the peer review process, including oversight of editorial policy, selection of papers, and editing of content, following best practices advised by AAAS.

I’m starting to catch up with changes in the world of science publishing as you can see in my October 1, 2024 posting titled, “Nanomedicine: two stories about wound healing,” which features a subhead near the end of the post, Science Publishing, should you be interested in another science publishing initiative.

Light-based neural networks

It’s unusual to see the same headline used to highlight research from two different teams released in such proximity, February 2024 and July 2024, respectively. Both of these are neuromorphic (brainlike) computing stories.

February 2024: Neural networks made of light

The first team’s work is announced in a February 21, 2024 Friedrich Schiller University press release, Note: A link has been removed,

Researchers from the Leibniz Institute of Photonic Technology (Leibniz IPHT) and the Friedrich Schiller University in Jena, along with an international team, have developed a new technology that could significantly reduce the high energy demands of future AI systems. This innovation utilizes light for neuronal computing, inspired by the neural networks of the human brain. It promises not only more efficient data processing but also speeds many times faster than current methods, all while consuming considerably less energy. Published in the prestigious journal “Advanced Science,” their work introduces new avenues for environmentally friendly AI applications, as well as advancements in computerless diagnostics and intelligent microscopy.

Artificial intelligence (AI) is pivotal in advancing biotechnology and medical procedures, ranging from cancer diagnostics to the creation of new antibiotics. However, the ecological footprint of large-scale AI systems is substantial. For instance, training extensive language models like ChatGPT-3 [this should be GPT-3] requires several gigawatt-hours of energy—enough to power an average nuclear power plant at full capacity for several hours.

Prof. Mario Chemnitz, new Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena, and Dr Bennet Fischer from Leibniz IPHT in Jena, in collaboration with their international team, have devised an innovative method to develop potentially energy-efficient computing systems that forego the need for extensive electronic infrastructure. They harness the unique interactions of light waves within optical fibers to forge an advanced artificial learning system.

A single fiber instead of thousands of components

Unlike traditional systems that rely on computer chips containing thousands of electronic components, their system uses a single optical fiber. This fiber is capable of performing the tasks of various neural networks—at the speed of light. “We utilize a single optical fiber to mimic the computational power of numerous neural networks,” Mario Chemnitz, who is also leader of the “Smart Photonics” junior research group at Leibniz IPHT, explains. “By leveraging the unique physical properties of light, this system will enable the rapid and efficient processing of vast amounts of data in the future.”

Delving into the mechanics reveals how information transmission occurs through the mixing of light frequencies: Data—whether pixel values from images or frequency components of an audio track—are encoded onto the color channels of ultrashort light pulses. These pulses carry the information through the fiber, undergoing various combinations, amplifications, or attenuations. The emergence of new color combinations at the fiber’s output enables the prediction of data types or contexts. For example, specific color channels can indicate visible objects in images or signs of illness in a voice.

A prime example of machine learning is identifying different numbers from thousands of handwritten characters. Mario Chemnitz, Bennet Fischer, and their colleagues from the Institut National de la Recherche Scientifique (INRS) in Québec utilized their technique to encode images of handwritten digits onto light signals and classify them via the optical fiber. The alteration in color composition at the fiber’s end forms a unique color spectrum—a “fingerprint” for each digit. Following training, the system can analyze and recognize new handwritten digits with significantly reduced energy consumption.
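For readers who like to see the shape of the idea in code: the fiber acts as a fixed, untrained nonlinear transform, and only a lightweight readout is fitted afterward. Here is a minimal Python sketch of that recipe, with toy data and a random saturating mix standing in for the fiber’s frequency mixing (everything here is hypothetical, not the team’s actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for handwritten digits: 200 samples, 16 "pixels", 2 classes.
X = rng.random((200, 16))
y = (X[:, :8].sum(axis=1) > X[:, 8:].sum(axis=1)).astype(int)

# Fixed random "fiber": pixels are encoded onto 64 color channels, which
# then mix; tanh is a crude surrogate for the optical nonlinearity.
W_in = rng.normal(size=(16, 64))

def fiber(batch):
    return np.tanh(batch @ W_in)  # the untrained, physical part

# Only the linear readout is trained (ordinary least squares), mirroring
# how the optical system itself needs no internal training.
H = np.c_[fiber(X), np.ones(len(X))]
w, *_ = np.linalg.lstsq(H, 2 * y - 1, rcond=None)

pred = (H @ w > 0).astype(int)
accuracy = (pred == y).mean()  # training accuracy of the toy classifier
```

The point of the sketch is the division of labor: the expensive nonlinear transform happens "for free" in the physical system, and only a cheap linear fit is computed electronically.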

System recognizes COVID-19 from voice samples

“In simpler terms, pixel values are converted into varying intensities of primary colors—more red or less blue, for instance,” Mario Chemnitz details. “Within the fiber, these primary colors blend to create the full spectrum of the rainbow. The shade of our mixed purple, for example, reveals much about the data processed by our system.”

The team has also successfully applied this method in a pilot study to diagnose COVID-19 infections using voice samples, achieving a detection rate that surpasses the best digital systems to date.

“We are the first to demonstrate that such a vibrant interplay of light waves in optical fibers can directly classify complex information without any additional intelligent software,” Mario Chemnitz states.

Since December 2023, Mario Chemnitz has held the position of Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena. Following his return from INRS in Canada in 2022, where he served as a postdoc, Chemnitz has been leading an international team at Leibniz IPHT in Jena. With Nexus funding support from the Carl Zeiss Foundation, their research focuses on exploring the potentials of non-linear optics. Their goal is to develop computer-free intelligent sensor systems and microscopes, as well as techniques for green computing.

Here’s a link to and a citation for the paper,

Neuromorphic Computing via Fission-based Broadband Frequency Generation by Bennet Fischer, Mario Chemnitz, Yi Zhu, Nicolas Perron, Piotr Roztocki, Benjamin MacLellan, Luigi Di Lauro, A. Aadhi, Cristina Rimoldi, Tiago H. Falk, Roberto Morandotti. Advanced Science, Volume 10, Issue 35, December 15, 2023, 2303835. DOI: https://doi.org/10.1002/advs.202303835 First published: 02 October 2023

This paper is open access.

July 2024: Neural networks made of light

A July 12, 2024 news item on ScienceDaily announces research from another German team,

Scientists propose a new way of implementing a neural network with an optical system which could make machine learning more sustainable in the future. The researchers at the Max Planck Institute for the Science of Light have published their new method in Nature Physics, demonstrating a method much simpler than previous approaches.

A July 12, 2024 Max Planck Institute for the Science of Light press release (also on EurekAlert), which originated the news item, provides more detail about their approach to neuromorphic computing,

Machine learning and artificial intelligence are becoming increasingly widespread with applications ranging from computer vision to text generation, as demonstrated by ChatGPT. However, these complex tasks require increasingly complex neural networks; some with many billion parameters. This rapid growth of neural network size has put the technologies on an unsustainable path due to their exponentially growing energy consumption and training times. For instance, it is estimated that training GPT-3 consumed more than 1,000 MWh of energy, which amounts to the daily electrical energy consumption of a small town. This trend has created a need for faster, more energy- and cost-efficient alternatives, sparking the rapidly developing field of neuromorphic computing. The aim of this field is to replace the neural networks on our digital computers with physical neural networks. These are engineered to perform the required mathematical operations physically in a potentially faster and more energy-efficient way.

Optics and photonics are particularly promising platforms for neuromorphic computing since energy consumption can be kept to a minimum. Computations can be performed in parallel at very high speeds, limited only by the speed of light. However, so far there have been two significant challenges: firstly, realizing the necessary complex mathematical computations requires high laser powers; secondly, there has been no efficient general training method for such physical neural networks.

Both challenges can be overcome with the new method proposed by Clara Wanjura and Florian Marquardt from the Max Planck Institute for the Science of Light in their new article in Nature Physics. “Normally, the data input is imprinted on the light field. However, in our new methods we propose to imprint the input by changing the light transmission,” explains Florian Marquardt, Director at the Institute. In this way, the input signal can be processed in an arbitrary fashion. This is true even though the light field itself behaves in the simplest way possible in which waves interfere without otherwise influencing each other. Therefore, their approach allows one to avoid complicated physical interactions to realize the required mathematical functions which would otherwise require high-power light fields. Evaluating and training this physical neural network would then become very straightforward: “It would really be as simple as sending light through the system and observing the transmitted light. This lets us evaluate the output of the network. At the same time, this allows one to measure all relevant information for the training”, says Clara Wanjura, the first author of the study. The authors demonstrated in simulations that their approach can be used to perform image classification tasks with the same accuracy as digital neural networks.
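The distinction the authors draw, imprinting the input on the transmission rather than on the light field, can be illustrated with a few lines of linear algebra. In this hypothetical sketch the field propagation is strictly linear (just matrix products), yet the map from input data to detected intensities is nonlinear because the data sets the transmission coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random linear scattering network coupling 8 optical modes.
C = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))

def transmitted_intensities(x):
    # The data x is not imprinted on the light; it sets each mode's
    # transmission (a hypothetical 1/(1+x) form). Wave propagation
    # itself stays perfectly linear in the field.
    T = np.diag(1.0 / (1.0 + x))
    probe = np.ones(8, dtype=complex)   # fixed probe light field
    out = C @ T @ C @ T @ probe         # two linear scattering passes
    return np.abs(out) ** 2             # what a detector would measure

x = rng.random(8)
I1 = transmitted_intensities(x)
I2 = transmitted_intensities(2 * x)

# Doubling the data does not double the output: the data-to-output map
# is nonlinear even though the waves never influence each other.
is_nonlinear = not np.allclose(I2, 2 * I1)
```

This is why no high-power light fields are needed in their scheme: the nonlinearity lives in how the data modulates the transmission, not in any optical interaction.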

In the future, the authors are planning to collaborate with experimental groups to explore the implementation of their method. Since their proposal significantly relaxes the experimental requirements, it can be applied to many physically very different systems. This opens up new possibilities for neuromorphic devices allowing physical training over a broad range of platforms.

Here’s a link to and a citation for the paper,

Fully nonlinear neuromorphic computing with linear wave scattering by Clara C. Wanjura & Florian Marquardt. Nature Physics (2024) DOI: https://doi.org/10.1038/s41567-024-02534-9 Published: 09 July 2024

This paper is open access.

Dual functions—neuromorphic (brainlike) and security—with papertronic devices

Michael Berger’s June 27, 2024 Nanowerk Spotlight article describes some of the latest work on developing electronic paper devices (yes, paper), Note 1: Links have been removed, Note 2: If you do check out Berger’s article, you will need to click a box confirming you are human,

Paper-based electronic devices have long been an intriguing prospect for researchers, offering potential advantages in sustainability, cost-effectiveness, and flexibility. However, translating the unique properties of paper into functional electronic components has presented significant challenges. Traditional semiconductor manufacturing processes are incompatible with paper’s thermal sensitivity and porous structure. Previous attempts to create paper-based electronics often resulted in devices with limited functionality or poor durability.

Recent advances in materials science and nanofabrication techniques have opened new avenues for realizing sophisticated electronic devices on paper substrates. Researchers have made progress in developing conductive inks, flexible electrodes, and solution-processable semiconductors that can be applied to paper without compromising its inherent properties. These developments have paved the way for creating paper-based sensors, energy storage devices, and simple circuits.

Despite these advancements, achieving complex electronic functionalities on paper, particularly in areas like neuromorphic computing and security applications, has remained elusive. Neuromorphic devices, which mimic the behavior of biological synapses, typically require precise control of charge transport and storage mechanisms.

Similarly, physically unclonable functions (PUFs) used in security applications depend on the ability to generate random, unique patterns at the nanoscale level. Implementing these sophisticated functionalities on paper substrates has been a persistent challenge due to the material’s inherent variability and limited compatibility with advanced fabrication techniques.

A research team in Korea has now made significant strides in addressing these challenges, developing a versatile paper-based electronic device that demonstrates both neuromorphic and security capabilities. Their work, published in Advanced Materials (“Versatile Papertronics: Photo-Induced Synapse and Security Applications on Papers”), describes a novel approach to creating multifunctional “papertronics” using a combination of solution-processable materials and innovative device architectures.

The team showcased the potential of their device by simulating a facial recognition task. Using a simple neural network architecture and the light-responsive properties of their paper-based device, they achieved a recognition accuracy of 91.7% on a standard face database. This impressive performance was achieved with a remarkably low voltage bias of -0.01 V, demonstrating the energy efficiency of the approach. The ability to operate at such low voltages is particularly advantageous for portable and low-power applications.

In addition to its neuromorphic capabilities, the device also showed promise as a physically unclonable function (PUF) for security applications. The researchers leveraged the inherent randomness in the deposition of SnO2 nanoparticles [tin oxide nanoparticles] to create unique electrical characteristics in each device. By fabricating arrays of these devices on paper, they generated security keys that exhibited high levels of randomness and uniqueness.
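The PUF idea is straightforward to simulate: uncontrollable fabrication randomness gives each device a unique analog signature, which is thresholded into a digital key, and uniqueness is judged by the Hamming distance between keys from different devices. A hypothetical sketch (Gaussian resistances stand in for the random SnO2 deposition):

```python
import random

random.seed(42)

def puf_key(n_cells=64):
    # One paper device: each cell's resistance is fixed at fabrication
    # by random nanoparticle deposition (modelled here as Gaussian),
    # then compared to the array's median to yield one key bit.
    resistances = [random.gauss(1.0, 0.2) for _ in range(n_cells)]
    median = sorted(resistances)[n_cells // 2]
    return [int(r > median) for r in resistances]

def hamming_fraction(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Uniqueness: keys from different devices should differ in ~50% of bits.
keys = [puf_key() for _ in range(10)]
pairs = [(i, j) for i in range(10) for j in range(i + 1, 10)]
mean_hd = sum(hamming_fraction(keys[i], keys[j]) for i, j in pairs) / len(pairs)
```

An inter-device Hamming distance near 50% is the usual benchmark for "random and unique"; the paper's actual extraction circuitry and statistics are more involved than this toy.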

One of the most intriguing aspects of this research is the dual functionality achieved with a single device structure. The ability to serve as both a neuromorphic component and a security element could lead to the development of highly integrated, secure edge computing devices on paper substrates. This convergence of functionalities addresses growing concerns about data privacy and security in Internet of Things (IoT) applications.

Berger’s June 27, 2024 Nanowerk Spotlight article offers more detail about the work and is written in an accessible fashion. Berger also notes at the end that there are still many challenges before this work leaves the laboratory.

Here’s a link to and a citation for the paper,

Versatile Papertronics: Photo-Induced Synapse and Security Applications on Papers by Wangmyung Choi, Jihyun Shin, Yeong Jae Kim, Jaehyun Hur, Byung Chul Jang, Hocheon Yoo. Advanced Materials DOI: https://doi.org/10.1002/adma.202312831 First published: 13 June 2024

This paper is behind a paywall.

Proposed platform for brain-inspired computing

Researchers at the University of California at Santa Barbara (UCSB) have proposed a more energy-efficient architecture for neuromorphic (brainlike or brain-inspired) computing according to a June 25, 2024 news item on ScienceDaily,

Computers have come so far in terms of their power and potential, rivaling and even eclipsing human brains in their ability to store and crunch data, make predictions and communicate. But there is one domain where human brains continue to dominate: energy efficiency.

“The most efficient computers are still approximately four orders of magnitude — that’s 10,000 times — higher in energy requirements compared to the human brain for specific tasks such as image processing and recognition, although they outperform the brain in tasks like mathematical calculations,” said UC Santa Barbara electrical and computer engineering Professor Kaustav Banerjee, a world expert in the realm of nanoelectronics. “Making computers more energy efficient is crucial because the worldwide energy consumption by on-chip electronics stands at #4 in the global rankings of nation-wise energy consumption, and it is increasing exponentially each year, fueled by applications such as artificial intelligence.” Additionally, he said, the problem of energy inefficient computing is particularly pressing in the context of global warming, “highlighting the urgent need to develop more energy-efficient computing technologies.”

….

A June 24, 2024 UCSB news release (also on EurekAlert), which originated the news item, delves further into the subject,

Neuromorphic (NM) computing has emerged as a promising way to bridge the energy efficiency gap. By mimicking the structure and operations of the human brain, where processing occurs in parallel across an array of low power-consuming neurons, it may be possible to approach brain-like energy efficiency. In a paper published in the journal Nature Communications, Banerjee and co-workers Arnab Pal, Zichun Chai, Junkai Jiang and Wei Cao, in collaboration with researchers Vivek De and Mike Davies from Intel Labs propose such an ultra-energy efficient platform, using 2D transition metal dichalcogenide (TMD)-based tunnel-field-effect transistors (TFETs). Their platform, the researchers say, can bring the energy requirements to within two orders of magnitude (about 100 times) with respect to the human brain.

Leakage currents and subthreshold swing

The concept of neuromorphic computing has been around for decades, though the research around it has intensified only relatively recently. Advances in circuitry that enable smaller, denser arrays of transistors, and therefore more processing and functionality for less power consumption are just scratching the surface of what can be done to enable brain-inspired computing. Add to that an appetite generated by its many potential applications, such as AI and the Internet-of-Things, and it’s clear that expanding the options for a hardware platform for neuromorphic computing must be addressed in order to move forward.

Enter the team’s 2D tunnel-transistors. Emerging out of Banerjee’s longstanding research efforts to develop high-performance, low-power consumption transistors to meet the growing hunger for processing without a matching increase in power requirement, these atomically thin, nanoscale transistors are responsive at low voltages, and as the foundation of the researchers’ NM platform, can mimic the highly energy efficient operations of the human brain. In addition to lower off-state currents, the 2D TFETs also have a low subthreshold swing (SS), a parameter that describes how effectively a transistor can switch from off to on. According to Banerjee, a lower SS means a lower operating voltage, and faster and more efficient switching.
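Subthreshold swing has a direct operational definition: the gate-voltage increase needed for a tenfold rise in drain current. A small sketch of that calculation, using illustrative numbers rather than measured device data (the thermionic limit of roughly 60 mV/decade at room temperature is what TFETs can beat):

```python
import math

def subthreshold_swing(vg_pair, id_pair):
    # SS in mV/decade: gate-voltage change per decade of drain current.
    dv_mV = (vg_pair[1] - vg_pair[0]) * 1000.0
    decades = math.log10(id_pair[1] / id_pair[0])
    return dv_mV / decades

# Illustrative currents constructed for the comparison, not measurements:
# a MOSFET just above the ~60 mV/dec thermionic limit vs a steeper TFET.
mosfet_ss = subthreshold_swing((0.10, 0.20), (1e-9, 1e-9 * 10 ** (100 / 65)))
tfet_ss = subthreshold_swing((0.10, 0.20), (1e-9, 1e-9 * 10 ** (100 / 30)))
# A lower SS means the device turns fully on over a smaller voltage range,
# allowing lower supply voltages and less off-state leakage.
```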

“Neuromorphic computing architectures are designed to operate with very sparse firing circuits,” said lead author Arnab Pal, “meaning they mimic how neurons in the brain fire only when necessary.” This contrasts with the more conventional von Neumann architecture of today’s computers, in which data is processed sequentially, memory and processing components are separated, and power is drawn continuously throughout the entire operation. An event-driven system such as an NM computer fires up only when there is input to process, and its memory and processing are distributed across an array of transistors. Companies like Intel and IBM have developed brain-inspired platforms, deploying billions of interconnected transistors and generating significant energy savings.

However, there’s still room for energy efficiency improvement, according to the researchers.

“In these systems, most of the energy is lost through leakage currents when the transistors are off, rather than during their active state,” Banerjee explained. A ubiquitous phenomenon in the world of electronics, leakage currents are small amounts of electricity that flow through a circuit even when it is in the off state (but still connected to power). According to the paper, current NM chips use traditional metal-oxide-semiconductor field-effect transistors (MOSFETs) which have a high on-state current, but also high off-state leakage. “Since the power efficiency of these chips is constrained by the off-state leakage, our approach — using tunneling transistors with much lower off-state current — can greatly improve power efficiency,” Banerjee said.

When integrated into a neuromorphic circuit, which emulates the firing and reset of neurons, the TFETs proved themselves more energy efficient than state-of-the-art MOSFETs, particularly the FinFETs (a MOSFET design that incorporates vertical “fins” as a way to provide better control of switching and leakage). TFETs are still in the experimental stage; however, the performance and energy efficiency of neuromorphic circuits based on them make them a promising candidate for the next generation of brain-inspired computing.

According to co-authors Vivek De (Intel Fellow) and Mike Davies (Director of Intel’s Neuromorphic Computing Lab), “Once realized, this platform can bring the energy consumption in chips to within two orders of magnitude with respect to the human brain — not accounting for the interface circuitry and memory storage elements. This represents a significant improvement from what is achievable today.”

Eventually, one can realize three-dimensional versions of these 2D-TFET based neuromorphic circuits to provide even closer emulation of the human brain, added Banerjee, widely recognized as one of the key visionaries behind 3D integrated circuits that are now witnessing wide scale commercial proliferation.

Here’s a link to and a citation for the latest paper,

An ultra energy-efficient hardware platform for neuromorphic computing enabled by 2D-TMD tunnel-FETs by Arnab Pal, Zichun Chai, Junkai Jiang, Wei Cao, Mike Davies, Vivek De & Kaustav Banerjee. Nature Communications volume 15, Article number: 3392 (2024) DOI: https://doi.org/10.1038/s41467-024-46397-3 Published: 22 April 2024

This paper is open access.

New approach to brain-inspired (neuromorphic) computing: measuring information transfer

An April 8, 2024 news item on Nanowerk announces a new approach to neuromorphic computing that involves measurement, Note: Links have been removed,

The biological brain, especially the human brain, is a desirable computing system that consumes little energy and runs at high efficiency. To build a computing system just as good, many neuromorphic scientists focus on designing hardware components intended to mimic the elusive learning mechanism of the brain. Recently, a research team has approached the goal from a different angle, focusing on measuring information transfer instead.

Their method went through biological and simulation experiments and then proved effective in an electronic neuromorphic system. It was published in Intelligent Computing (“Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics”).

An April 8, 2024 Intelligent Computing news release on EurekAlert delves further into the topic,

Although electronic systems have not fully replicated the complex information transfer between synapses and neurons, the team has demonstrated that it is possible to transform biological circuits into electronic circuits while maintaining the amount of information transferred. “This represents a key step toward brain-inspired low-power artificial systems,” the authors note.

To evaluate the efficiency of information transfer, the team drew inspiration from information theory. They quantified the amount of information conveyed by synapses in single neurons, then measured the quantity using mutual information, the analysis of which reveals the relationship between input stimuli and neuron responses.
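Mutual information between stimuli and responses has a standard plug-in estimate from paired discrete observations. A toy sketch of that calculation (the spike counts below are invented for illustration; an informative neuron yields high I(S;R), an unrelated response yields zero):

```python
import math
from collections import Counter

def mutual_information(stimuli, responses):
    # Plug-in estimate of I(S;R) in bits from paired discrete samples.
    n = len(stimuli)
    ps = Counter(stimuli)
    pr = Counter(responses)
    psr = Counter(zip(stimuli, responses))
    mi = 0.0
    for (s, r), c in psr.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((ps[s] / n) * (pr[r] / n)))
    return mi

# Hypothetical recording: 2 stimulus patterns, spike-count responses.
stimuli = [0, 0, 0, 0, 1, 1, 1, 1]
responses = [1, 1, 1, 2, 5, 5, 4, 5]   # fires more for stimulus 1
mi_informative = mutual_information(stimuli, responses)
# The same counts shuffled so they no longer track the stimulus:
mi_shuffled = mutual_information(stimuli, [1, 5, 1, 5, 1, 5, 1, 5])
```

In the paper's setting, synaptic plasticity changes how reliably the response tracks the stimulus, and this estimator is what registers that change.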

First, the team conducted experiments with biological neurons. They used brain slices from rats, recording and analyzing the biological circuits in cerebellar granule cells. Then they evaluated the information transmitted at the synapses from mossy fiber neurons to the cerebellar granule cells. The mossy fibers were periodically stimulated with electrical spikes to induce synaptic plasticity, a fundamental biological feature where the information transfer at the synapses is constantly strengthened or weakened with repeated neuronal activity.

The results show that the changes in mutual information values are largely consistent with the changes in biological information transfer induced by synaptic plasticity. The findings from simulation and electronic neuromorphic experiments mirrored the biological results.

Second, the team conducted experiments with simulated neurons. They applied a spiking neural network model, which was developed by the same research group. Spiking neural networks were inspired by the functioning of biological neurons and are considered a promising approach for achieving efficient neuromorphic computing.

In the model, four mossy fibers are connected to one cerebellar granule cell, and each connection is given a random weight, which affects the information transfer efficiency like synaptic plasticity does in biological circuits. In the experiments, the team applied eight stimulation patterns to all mossy fibers and recorded the responses to evaluate the information transfer in the artificial neural network.
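The model’s wiring, four mossy fibers feeding one granule cell through randomly weighted connections, can be sketched as a minimal leaky integrate-and-fire neuron. All constants below are arbitrary illustration values, not the group’s actual model parameters:

```python
import random

random.seed(7)

# Four mossy-fiber inputs, each with a random synaptic weight.
weights = [random.uniform(0.2, 1.0) for _ in range(4)]

def granule_cell(spike_trains, threshold=1.5, leak=0.9):
    # Leaky integrate-and-fire: the membrane potential decays each step,
    # accumulates weighted input spikes, and fires (then resets) at threshold.
    v, out = 0.0, []
    for step in zip(*spike_trains):
        v = leak * v + sum(w * s for w, s in zip(weights, step))
        if v >= threshold:
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

# Two toy stimulation patterns over 10 time steps:
dense = [[1] * 10 for _ in range(4)]    # all four fibers active
sparse = [[0] * 10 for _ in range(4)]   # no input at all
n_spikes_dense = sum(granule_cell(dense))
n_spikes_sparse = sum(granule_cell(sparse))
```

The random weights play the role the article assigns them: they shape how strongly each input pattern drives the cell, just as synaptic plasticity does in the biological circuit.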

Third, the team conducted experiments with electronic neurons. A setup similar to those in the biological and simulation experiments was used. A previously developed semiconductor device functioned as a neuron, and four specialized memristors functioned as synapses. The team applied 20 spike sequences to decrease resistance values, then applied another 20 to increase them. The changes in resistance values were investigated to assess the information transfer efficiency within the neuromorphic system.
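The potentiation/depression protocol maps naturally onto a toy conductance model: each spike nudges the device toward an upper or lower bound, with diminishing steps as it approaches. A hypothetical sketch of the 20-pulse sequences described above (note that decreasing resistance means increasing conductance):

```python
# A toy memristive synapse, not the actual device physics.
def apply_spikes(g, n_spikes, polarity, rate=0.2, g_min=0.1, g_max=1.0):
    # polarity=+1 potentiates (conductance up, resistance down);
    # polarity=-1 depresses; steps shrink near the bounds.
    for _ in range(n_spikes):
        if polarity > 0:
            g += rate * (g_max - g)
        else:
            g -= rate * (g - g_min)
    return g

g0 = 0.5
g_potentiated = apply_spikes(g0, 20, +1)            # 20 resistance-decreasing spikes
g_depressed = apply_spikes(g_potentiated, 20, -1)   # 20 resistance-increasing spikes
```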

In addition to verifying the quantity of information transferred in biological, simulated and electronic neurons, the team also highlighted the importance of spike timing, which as they observed is closely related to information transfer. This observation could influence the development of neuromorphic computing, given that most devices are designed with spike-frequency-based algorithms.

Here’s a link to and a citation for the paper,

Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics by Daniela Gandolfi, Lorenzo Benatti, Tommaso Zanotti, Giulia M. Boiani, Albertino Bigiani, Francesco M. Puglisi, and Jonathan Mapelli. Intelligent Computing, 1 February 2024, Vol. 3, Article ID: 0059. DOI: 10.34133/icomputing.0059

This paper is open access.