Tag Archives: brainlike computing

Physical neural network based on nanowires can learn and remember ‘on the fly’

A November 1, 2023 news item on Nanowerk announced new work on neuromorphic engineering from Australia,

For the first time, a physical neural network has successfully been shown to learn and remember ‘on the fly’, in a way inspired by and similar to how the brain’s neurons work.

The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.

Key Takeaways
*The nanowire-based system can learn and remember ‘on the fly,’ processing dynamic, streaming data for complex learning and memory tasks.

*This advancement overcomes the challenge of heavy memory and energy usage commonly associated with conventional machine learning models.

*The technology achieved a 93.4% accuracy rate in image recognition tasks, using real-time data from the MNIST database of handwritten digits.

*The findings promise a new direction for creating efficient, low-energy machine intelligence applications, such as real-time sensor data processing.

Nanowire neural network
Caption: Electron microscope image of the nanowire neural network that arranges itself like ‘Pick Up Sticks’. The junctions where the nanowires overlap act in a way similar to how our brain’s synapses operate, responding to electric current. Credit: The University of Sydney

A November 1, 2023 University of Sydney news release (also on EurekAlert), which originated the news item, elaborates on the research,

Published today [November 1, 2023] in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

Lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics, said: “The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data.”

Nanowire networks are made up of tiny wires that are just billionths of a metre in diameter. The wires arrange themselves into patterns reminiscent of the children’s game ‘Pick Up Sticks’, mimicking neural networks, like those in our brains. These networks can be used to perform specific information processing tasks.

Memory and learning tasks are achieved using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap. Known as ‘resistive memory switching’, this function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in our brain.
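For readers who like to see the idea in code, here's a toy sketch (my own illustration, not the team's actual model) of a single nanowire junction whose conductance is potentiated by voltage pulses and relaxes between them, which is the basic flavour of resistive memory switching,

```python
def junction_conductance(g, v, dt, g_min=0.01, g_max=1.0, k_pot=5.0, k_dec=0.5):
    """Toy model of one nanowire junction: voltage above an assumed
    threshold potentiates the junction's conductance; otherwise it
    decays back toward its resting state (volatile resistive switching).
    All constants are arbitrary illustrative values."""
    if abs(v) > 0.2:  # assumed switching threshold (arbitrary units)
        g += k_pot * abs(v) * (g_max - g) * dt   # potentiation, saturating at g_max
    else:
        g -= k_dec * (g - g_min) * dt            # relaxation toward g_min
    return min(max(g, g_min), g_max)

# Drive the junction with a pulse train: conductance rises during pulses
# and relaxes between them, a simple analogue of synaptic memory.
g = 0.01
trace = []
for step in range(100):
    v = 0.5 if (step // 10) % 2 == 0 else 0.0   # alternating on/off pulse blocks
    g = junction_conductance(g, v, dt=0.1)
    trace.append(g)

print(f"conductance after pulse train: {trace[-1]:.3f}")
```

The point of the sketch is only that the junction's state depends on its recent input history, which is what lets a network of such junctions act like synapses.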

In this study, researchers used the network to recognise and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Supervising researcher Professor Zdenka Kuncic said the memory task was similar to remembering a phone number. The network was also used to perform a benchmark image recognition task, accessing images in the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning.

“Our previous research established the ability of nanowire networks to remember simple tasks. This work has extended these findings by showing tasks can be performed using dynamic data accessed online,” she said.

“This is a significant step forward as achieving an online learning capability is challenging when dealing with large amounts of data that can be continuously changing. A standard approach would be to store data in memory and then train a machine learning model using that stored information. But this would chew up too much energy for widespread application.

“Our novel approach allows the nanowire neural network to learn and remember ‘on the fly’, sample by sample, extracting data online, thus avoiding heavy memory and energy usage.”
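The contrast Kuncic describes, between storing a dataset for later training and learning sample by sample, can be illustrated with a software caricature (this is a generic online perceptron, not the paper's algorithm): the model updates from each incoming sample and then discards it, so no dataset is ever held in memory,

```python
import random

# Toy illustration of 'on the fly' learning (not the paper's algorithm):
# a perceptron that updates after every sample and then discards it,
# so no stored-dataset training pass is ever needed.
random.seed(0)
w = [0.0, 0.0]
b = 0.0
lr = 0.1
correct = 0
n_samples = 2000

for i in range(n_samples):
    # simulate a sensor stream: class 1 points near (1,1), class 0 near (-1,-1)
    label = random.randint(0, 1)
    x = [(1 if label else -1) + random.gauss(0, 0.5),
         (1 if label else -1) + random.gauss(0, 0.5)]
    pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
    correct += (pred == label)   # test-then-train: score before updating
    # learn from this single sample, then let it go
    err = label - pred
    w[0] += lr * err * x[0]
    w[1] += lr * err * x[1]
    b += lr * err

print(f"streaming (test-then-train) accuracy: {correct / n_samples:.2f}")
```

Everything about the data generator and learning rate here is an assumption for illustration; the essential feature is that memory use stays constant no matter how long the stream runs.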

Mr Zhu said there were other advantages when processing information online.

“If the data is being streamed continuously, such as it would be from a sensor for instance, machine learning that relied on artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimised for,” he said.

In this study, the nanowire neural network displayed a benchmark machine learning capability, scoring 93.4 percent in correctly identifying test images. The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.

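The sequence-memory side of this can also be caricatured in software. The nanowire network is a physical dynamical system, but the principle, that a recurrent system's internal state retains recent inputs and a simple readout can recall a digit from several steps back, shows up in a small simulated "reservoir" too (my own sketch; the parameters and the reservoir itself are assumptions, not the paper's system),

```python
import numpy as np

# Software caricature of dynamical sequence memory: a small random
# recurrent 'reservoir' is driven by a digit stream, and a linear
# readout is trained to recall the digit that appeared DELAY steps
# earlier -- the memory lives only in the network's state.
rng = np.random.default_rng(0)
N, DELAY, T, split = 200, 3, 3000, 2000

W_in = rng.normal(0, 0.5, (N, 10))
W = rng.normal(0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

digits = rng.integers(0, 10, T)
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    u = np.zeros(10)
    u[digits[t]] = 1.0                      # one-hot input pulse for this digit
    x = np.tanh(W_in @ u + W @ x)           # reservoir state update
    states[t] = x

# Ridge-regression readout trained to map states[t] -> digits[t - DELAY]
X_train = states[DELAY:split]
Y_train = np.eye(10)[digits[:split - DELAY]]
W_out = np.linalg.solve(X_train.T @ X_train + 1e-3 * np.eye(N), X_train.T @ Y_train)

pred = (states[split:] @ W_out).argmax(axis=1)
acc = (pred == digits[split - DELAY:T - DELAY]).mean()
print(f"recall accuracy at delay {DELAY}: {acc:.2f}")
```

The readout never sees the past digits directly; it recovers them from the current state alone, which is the sense in which memory enhances online learning.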

Here’s a link to and a citation for the paper,

Online dynamical learning and sequence memory with neuromorphic nanowire networks by Ruomin Zhu, Sam Lilak, Alon Loeffler, Joseph Lizier, Adam Stieg, James Gimzewski & Zdenka Kuncic. Nature Communications volume 14, Article number: 6697 (2023) DOI: https://doi.org/10.1038/s41467-023-42470-5 Published: 01 November 2023

This paper is open access.

You’ll notice a number of this team’s members are also listed in the citation in my June 21, 2023 posting “Learning and remembering like a human brain: nanowire networks” and you’ll see some familiar names in the citation in my June 17, 2020 posting “A tangle of silver nanowires for brain-like action.”

A formal theory for neuromorphic (brainlike) computing hardware needed

This is one of my older pieces, as the information dates back to October 2023, but neuromorphic computing is one of my key interests and I'm particularly interested in the upsurge in discussion of hardware. Here goes. From an October 17, 2023 news item on Nanowerk,

There is an intense, worldwide search for novel materials to build computer microchips with that are not based on classic transistors but on much more energy-saving, brain-like components. However, whereas the theoretical basis for classic transistor-based digital computers is solid, there are no real theoretical guidelines for the creation of brain-like computers.

Such a theory would be absolutely necessary to put the efforts that go into engineering new kinds of microchips on solid ground, argues Herbert Jaeger, Professor of Computing in Cognitive Materials at the University of Groningen [Netherlands].

Key Takeaways
*Scientists worldwide are searching for new materials to build energy-saving, brain-like computer microchips as classic transistor miniaturization reaches its physical limit.

*Theoretical guidelines for brain-like computers are lacking, and establishing them is crucial for advances in the field.

*The brain’s versatility and robustness serve as an inspiration, despite limited knowledge about its exact workings.

*A recent paper suggests that a theory for non-digital computers should focus on continuous, analogue signals and consider the characteristics of new materials.

*Bridging gaps between diverse scientific fields is vital for developing a foundational theory for neuromorphic computing.

An October 17, 2023 University of Groningen press release (also on EurekAlert), which originated the news item, provides more context for this proposal,

Computers have, so far, relied on stable switches that can be off or on, usually transistors. These digital computers are logical machines and their programming is also based on logical reasoning. For decades, computers have become more powerful by further miniaturization of the transistors, but this process is now approaching a physical limit. That is why scientists are working to find new materials to make more versatile switches, which could use more values than just the digital 0 or 1.

Dangerous pitfall

Jaeger is part of the Groningen Cognitive Systems and Materials Center (CogniGron), which aims to develop neuromorphic (i.e. brain-like) computers. CogniGron is bringing together scientists who have very different approaches: experimental materials scientists and theoretical modelers from fields as diverse as mathematics, computer science, and AI. Working closely with materials scientists has given Jaeger a good idea of the challenges that they face when trying to come up with new computational materials, while it has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain is not a logical system. We can reason logically, but that is only a small part of what our brain does. Most of the time, it must work out how to bring a hand to a teacup or wave to a colleague on passing them in a corridor. ‘A lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer,’ explains Jaeger. Furthermore, our brains keep working despite fluctuations in blood pressure, external temperature, or hormone balance, and so on. How is it possible to create a computer that is as versatile and robust? Jaeger is optimistic: ‘The simple answer is: the brain is proof of principle that it can be done.’

Neurons

The brain is, therefore, an inspiration for materials scientists. Jaeger: ‘They might produce something that is made from a few hundred atoms and that will oscillate, or something that will show bursts of activity. And they will say: “That looks like how neurons work, so let’s build a neural network”.’ But they are missing a vital bit of knowledge here. ‘Even neuroscientists don’t know exactly how the brain works. This is where the lack of a theory for neuromorphic computers is problematic. Yet, the field doesn’t appear to see this.’

In a paper published in Nature Communications on 16 August, Jaeger and his colleagues Beatriz Noheda (scientific director of CogniGron) and Wilfred G. van der Wiel (University of Twente) present a sketch of what a theory for non-digital computers might look like. They propose that instead of stable 0/1 switches, the theory should work with continuous, analogue signals. It should also accommodate the wealth of non-standard nanoscale physical effects that the materials scientists are investigating.

Sub-theories

Something else that Jaeger has learned from listening to materials scientists is that devices from these new materials are difficult to construct. Jaeger: ‘If you make a hundred of them, they will not all be identical.’ This is actually very brain-like, as our neurons are not all exactly identical either. Another possible issue is that the devices are often brittle and temperature-sensitive, continues Jaeger. ‘Any theory for neuromorphic computing should take such characteristics into account.’

Importantly, a theory underpinning neuromorphic computing will not be a single theory but will be constructed from many sub-theories (see image below). Jaeger: ‘This is in fact how digital computer theory works as well: it is a layered system of connected sub-theories.’ Creating such a theoretical description of neuromorphic computers will require close collaboration of experimental materials scientists and formal theoretical modellers. Jaeger: ‘Computer scientists must be aware of the physics of all these new materials [emphasis mine] and materials scientists should be aware of the fundamental concepts in computing.’

Blind spots

Bridging this divide between materials science, neuroscience, computing science, and engineering is exactly why CogniGron was founded at the University of Groningen: it brings these different groups together. ‘We all have our blind spots,’ concludes Jaeger. ‘And the biggest gap in our knowledge is a foundational theory for neuromorphic computing. Our paper is a first attempt at pointing out how such a theory could be constructed and how we can create a common language.’

Here’s a link to and a citation for the paper,

Toward a formal theory for computing machines made out of whatever physics offers by Herbert Jaeger, Beatriz Noheda & Wilfred G. van der Wiel. Nature Communications volume 14, Article number: 4911 (2023) DOI: https://doi.org/10.1038/s41467-023-40533-1 Published: 16 August 2023

This paper is open access and there’s a 76 pp. version, “Toward a formal theory for computing machines made out of whatever physics offers: extended version” (emphasis mine) available on arXiv.

Caption: A general theory of physical computing systems would comprise existing theories as special cases. Figure taken from an extended version of the Nature Comm paper on arXiv. Credit: Jaeger et al. / University of Groningen

With regard to new materials for neuromorphic computing, my January 4, 2024 posting highlights a proposed quantum material for this purpose.

A hardware (neuromorphic and quantum) proposal for handling increased AI workload

It’s been a while since I’ve featured anything from Purdue University (Indiana, US). From a November 7, 2023 news item on Nanowerk, Note: Links have been removed,

Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?

Key Takeaways
*Current AI technologies are strained by the limitations of silicon-based computing hardware, necessitating new solutions.

*Research led by Erica Carlson [Purdue University] suggests that neuromorphic [brainlike] architectures, which replicate the brain’s neurons and synapses, could revolutionize computing efficiency and power.

*Vanadium oxides have been identified as a promising material for creating artificial neurons and synapses, crucial for neuromorphic computing.

*Innovative non-volatile memory, observed in vanadium oxides, could be the key to more energy-efficient and capable AI hardware.

*Future research will explore how to optimize the synaptic behavior of neuromorphic materials by controlling their memory properties.

The colored landscape above shows a transition temperature map of VO2 (pink surface) as measured by optical microscopy. This reveals the unique way that this neuromorphic quantum material [emphasis mine] stores memory like a synapse. Image credit: Erica Carlson, Alexandre Zimmers, and Adobe Stock

An October 13, 2023 Purdue University news release (also on EurekAlert but published November 6, 2023) by Cheryl Pierce, which originated the news item, provides more detail about the work, Note: A link has been removed,

“The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.

A joint team of physicists from Purdue University, the University of California San Diego (UCSD) and the École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believes it may have discovered a way to rework the hardware by mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2,” in Advanced Electronic Materials; the paper is featured on the back cover of the October 2023 edition.

New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”

Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. At their ends are small gaps, called synapses, that allow signals to pass from one neuron to the next. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.

“The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon – ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”

The team relied on a recently discovered type of non-volatile memory which is driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

Alexandre Zimmers, lead experimental scientist from Sorbonne University and École Supérieure de Physique et de Chimie Industrielles, Paris, explains, “Only a few quantum materials are good candidates for future neuromorphic devices, i.e., mimicking artificial synapses and neurons. For the first time, in one of them, vanadium dioxide, we can see optically what is changing in the material as it operates as an artificial synapse. We find that memory accumulates throughout the entirety of the sample, opening new opportunities on how and where to control this property.”

“The microscopic videos show that, surprisingly, the repeated advance and retreat of metal and insulator domains causes memory to be accumulated throughout the entirety of the sample, rather than only at the boundaries of domains,” explains Carlson. “The memory appears as shifts in the local temperature at which the material transitions from insulator to metal upon heating, or from metal to insulator upon cooling. We propose that these changes in the local transition temperature accumulate due to the preferential diffusion of point defects into the metallic domains that are interwoven through the insulator as the material is cycled partway through the transition.”
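Carlson's picture of local transition temperatures creeping as the material is cycled partway through the transition can be cartooned numerically (my own illustration; the sign, magnitude, and one-dimensional geometry here are all arbitrary assumptions, not the paper's measurements),

```python
import random

# Cartoon of ramp-reversal memory: a 1-D chain of sites, each with its
# own insulator-to-metal transition temperature Tc. Each partial heating
# cycle turns sites with Tc below the turning temperature metallic;
# those sites then accumulate a small Tc shift, standing in for defect
# diffusion into the metallic domains. All numbers are illustrative.
random.seed(1)
n_sites = 50
tc = [random.gauss(340.0, 2.0) for _ in range(n_sites)]  # kelvin, assumed spread
t_turn = 340.0          # ramp-reversal temperature (mid-transition)
shift_per_cycle = 0.05  # Tc shift accumulated per cycle (arbitrary)

tc0 = list(tc)
for cycle in range(20):
    for i in range(n_sites):
        if tc[i] < t_turn:            # this site went metallic during the ramp
            tc[i] += shift_per_cycle  # memory: its local Tc creeps upward

shifted = sum(1 for a, b in zip(tc0, tc) if b > a)
print(f"{shifted} of {n_sites} sites carry a Tc shift after 20 cycles")
```

One feature worth noting: a site stops shifting once its Tc rises past the turning temperature, so this kind of memory is naturally self-limiting, and the shifts are distributed across the whole chain rather than at one boundary.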

Now that the team has established that vanadium oxides are possible candidates for future neuromorphic devices, they plan to move forward in the next phase of their research.

“Now that we have established a way to see inside this neuromorphic material, we can locally tweak and observe the effects of, for example, ion bombardment on the material’s surface,” explains Zimmers. “This could allow us to guide the electrical current through specific regions in the sample where the memory effect is at its maximum. This has the potential to significantly enhance the synaptic behavior of this neuromorphic material.”

There’s a very interesting 16 mins. 52 secs. video embedded in the October 13, 2023 Purdue University news release. In an interview with Dr. Erica Carlson, who hosts The Quantum Age website and video interviews on its YouTube channel, Alexandre Zimmers takes you from an amusing phenomenon observed by 19th-century scientists, through the 20th century, where it became of more interest as the nanoscale phenomenon could be exploited (sonar, scanning tunneling microscopes, singing birthday cards, etc.), to the 21st century, where we are integrating this new information into a quantum* material for neuromorphic hardware.

Here’s a link to and a citation for the paper,

Spatially Distributed Ramp Reversal Memory in VO2 by Sayan Basak, Yuxin Sun, Melissa Alzate Banguero, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers. Advanced Electronic Materials Volume 9, Issue 10 October 2023 2300085 DOI: https://doi.org/10.1002/aelm.202300085 First published: 10 July 2023

This paper is open access.

There’s a lot of research into neuromorphic hardware; here’s a sampling of some of my most recent posts on the topic,

There’s more, just use ‘neuromorphic hardware’ for your search term.

*’meta’ changed to ‘quantum’ on January 8, 2024.

Neuromorphic transistor with electric double layer

It may be my imagination but it seems as if neuromorphic (brainlike) engineering research has really taken off in the last few years and, even with my lazy approach to finding articles, I’m having trouble keeping up.

This latest work comes from Japan according to an August 4, 2023 news item on Nanowerk, Note: A link has been removed,

A research team consisting of NIMS [National Institute for Materials Science] and the Tokyo University of Science has developed the fastest electric double layer transistor using a highly ion conductive ceramic thin film and a diamond thin film. This transistor may be used to develop energy-efficient, high-speed edge AI devices with a wide range of applications, including future event prediction and pattern recognition/determination in images (including facial recognition), voices and odors.

The research was published in Materials Today Advances (“Ultrafast-switching of an all-solid-state electric double layer transistor with a porous yttria-stabilized zirconia proton conductor and the application to neuromorphic computing”).

A July 7, 2023 National Institute for Materials Science press release (also on EurekAlert but published August 3, 2023), which originated the news item, is arranged as a numbered list of points, the first point being the first paragraph in the news release/item,

2. An electric double layer transistor works as a switch using electrical resistance changes caused by the charge and discharge of an electric double layer formed at the interface between the electrolyte and semiconductor. Because this transistor is able to mimic the electrical response of human cerebral neurons (i.e., acting as a neuromorphic transistor), its use in AI devices is potentially promising. However, existing electric double layer transistors are slow in switching between on and off states. The typical transition time ranges from several hundreds of microseconds to 10 milliseconds. Development of faster electric double layer transistors is therefore desirable.

3. This research team developed an electric double layer transistor by depositing ceramic (yttria-stabilized porous zirconia thin film) and diamond thin films with a high degree of precision using a pulsed laser, forming an electric double layer at the ceramic/diamond interface. The zirconia thin film is able to adsorb large amounts of water into its nanopores and allow hydrogen ions from the water to readily migrate through it, enabling the electric double layer to be rapidly charged and discharged. This electric double layer effect enables the transistor to operate very quickly. The team actually measured the speed at which the transistor operates by applying pulsed voltage to it and found that it operates 8.5 times faster than existing electric double layer transistors, setting a new world record. The team also confirmed the ability of this transistor to convert input waveforms into many different output waveforms with precision—a prerequisite for transistors to be compatible with neuromorphic AI devices.
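A back-of-the-envelope way to see why faster charging of the double layer means faster switching (my own sketch, not the device's measured physics, and with made-up component values): treat the double layer as a capacitor charged through the electrolyte's ionic resistance, so the switching time scales with the RC product and a more conductive proton conductor switches proportionally faster,

```python
import math

def switching_time(r_ionic, c_dl, level=0.9):
    """Time for an RC-charged double layer to reach `level` of full
    charge: t = -RC * ln(1 - level). Simple lumped-element sketch."""
    return -r_ionic * c_dl * math.log(1.0 - level)

# Assumed, illustrative values in ohms and farads -- not the NIMS device's.
slow = switching_time(r_ionic=1e6, c_dl=1e-9)
fast = switching_time(r_ionic=1e6 / 8.5, c_dl=1e-9)   # 8.5x more conductive path

print(f"slow electrolyte: {slow * 1e6:.1f} us, faster proton conductor: {fast * 1e6:.1f} us")
```

In this lumped picture an 8.5-fold drop in ionic resistance gives exactly the 8.5-fold speedup quoted in the press release; the real mechanism, hydrogen-ion migration through the porous zirconia, is of course richer than a single RC pair.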

4. This research project produced a new ceramic thin film technology capable of rapidly charging and discharging an electric double layer several nanometers in thickness. This is a major achievement in efforts to create practical, high-speed, energy-efficient AI-assisted devices. These devices, in combination with various sensors (e.g., smart watches, surveillance cameras and audio sensors), are expected to offer useful tools in various industries, including medicine, disaster prevention, manufacturing and security.

Here’s a link to and a citation for the paper,

Ultrafast-switching of an all-solid-state electric double layer transistor with a porous yttria-stabilized zirconia proton conductor and the application to neuromorphic computing by Makoto Takayanagi, Daiki Nishioka, Takashi Tsuchiya, Masataka Imura, Yasuo Koide, Tohru Higuchi, and Kazuya Terabe. Materials Today Advances [June 16, 2023]; DOI: 10.1016/j.mtadv.2023.10039

This paper is open access.

Single chip mimics human vision and memory abilities

A June 15, 2023 RMIT University (Australia) press release (also on EurekAlert but published June 14, 2023) announces a neuromorphic (brainlike) computer chip, which mimics human vision and ‘creates’ memories,

Researchers have created a small device that ‘sees’ and creates memories in a similar way to humans, in a promising step towards one day having applications that can make rapid, complex decisions such as in self-driving cars.

The neuromorphic invention is a single chip enabled by a sensing element, doped indium oxide, that’s thousands of times thinner than a human hair and requires no external parts to operate.

RMIT University engineers in Australia led the work, with contributions from researchers at Deakin University and the University of Melbourne.

The team’s research demonstrates a working device that captures, processes and stores visual information. With precise engineering of the doped indium oxide, the device mimics a human eye’s ability to capture light, pre-packages and transmits information like an optical nerve, and stores and classifies it in a memory system like the way our brains can.

Collectively, these functions could enable ultra-fast decision making, the team says.

Team leader Professor Sumeet Walia said the new device can perform all necessary functions – sensing, creating and processing information, and retaining memories – rather than relying on external energy-intensive computation, which prevents real-time decision making.

“Performing all of these functions on one small device had proven to be a big challenge until now,” said Walia from RMIT’s School of Engineering.

“We’ve made real-time decision making a possibility with our invention, because it doesn’t need to process large amounts of irrelevant data and it’s not being slowed down by data transfer to separate processors.”

What did the team achieve and how does the technology work?

The new device was able to demonstrate an ability to retain information for longer periods of time, compared to previously reported devices, without the need for frequent electrical signals to refresh the memory. This ability significantly reduces energy consumption and enhances the device’s performance.

Their findings and analysis are published in Advanced Functional Materials.

First author and RMIT PhD researcher Aishani Mazumder said the human brain used analog processing, which allowed it to process information quickly and efficiently using minimal energy.

“By contrast, digital processing is energy and carbon intensive, and inhibits rapid information gathering and processing,” she said.

“Neuromorphic vision systems are designed to use similar analog processing to the human brain, which can greatly reduce the amount of energy needed to perform complex visual tasks compared with today’s technologies.”

What are the potential applications?

The team used ultraviolet light as part of their experiments, and are working to expand this technology even further for visible and infrared light – with many possible applications such as bionic vision, autonomous operations in dangerous environments, shelf-life assessments of food and advanced forensics.

“Imagine a self-driving car that can see and recognise objects on the road in the same way that a human driver can, or being able to rapidly detect and track space junk. This would be possible with neuromorphic vision technology.”

Walia said neuromorphic systems could adapt to new situations over time, becoming more efficient with more experience.

“Traditional computer vision systems – which cannot be miniaturised like neuromorphic technology – are typically programmed with specific rules and can’t adapt as easily,” he said.

“Neuromorphic robots have the potential to run autonomously for long periods, in dangerous situations where workers are exposed to possible cave-ins, explosions and toxic air.”

The human eye has a single retina that captures an entire image, which is then processed by the brain to identify objects, colours and other visual features.

The team’s device mimicked the retina’s capabilities by using single-element image sensors that capture, store and process visual information on one platform, Walia said.

“The human eye is exceptionally adept at responding to changes in the surrounding environment in a faster and much more efficient way than cameras and computers currently can,” he said.

“Taking inspiration from the eye, we have been working for several years on creating a camera that possesses similar abilities, through the process of neuromorphic engineering.” 

Here’s a link to and a citation for the paper,

Long Duration Persistent Photocurrent in 3 nm Thin Doped Indium Oxide for Integrated Light Sensing and In-Sensor Neuromorphic Computation by Aishani Mazumder, Chung Kim Nguyen, Thiha Aung, Mei Xian Low, Md. Ataur Rahman, Salvy P. Russo, Sherif Abdulkader Tawfik, Shifan Wang, James Bullock, Vaishnavi Krishnamurthi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202303641 First published: 14 June 2023

This paper is open access.

Neuromorphic engineering: an overview

In a February 13, 2023 essay, Michael Berger who runs the Nanowerk website provides an overview of brainlike (neuromorphic) engineering.

This essay is the most extensive piece I’ve seen on Berger’s website and it covers everything from the reasons why scientists are so interested in mimicking the human brain to specifics about memristors. Here are a few excerpts (Note: Links have been removed),

Neuromorphic engineering is a cutting-edge field that focuses on developing computer hardware and software systems inspired by the structure, function, and behavior of the human brain. The ultimate goal is to create computing systems that are significantly more energy-efficient, scalable, and adaptive than conventional computer systems, capable of solving complex problems in a manner reminiscent of the brain’s approach.

This interdisciplinary field draws upon expertise from various domains, including neuroscience, computer science, electronics, nanotechnology, and materials science. Neuromorphic engineers strive to develop computer chips and systems incorporating artificial neurons and synapses, designed to process information in a parallel and distributed manner, akin to the brain’s functionality.

Key challenges in neuromorphic engineering encompass developing algorithms and hardware capable of performing intricate computations with minimal energy consumption, creating systems that can learn and adapt over time, and devising methods to control the behavior of artificial neurons and synapses in real-time.

Neuromorphic engineering has numerous applications in diverse areas such as robotics, computer vision, speech recognition, and artificial intelligence. The aspiration is that brain-like computing systems will give rise to machines better equipped to tackle complex and uncertain tasks, which currently remain beyond the reach of conventional computers.

It is essential to distinguish between neuromorphic engineering and neuromorphic computing, two related but distinct concepts. Neuromorphic computing represents a specific application of neuromorphic engineering, involving the utilization of hardware and software systems designed to process information in a manner akin to human brain function.

One of the major obstacles in creating brain-inspired computing systems is the vast complexity of the human brain. Unlike traditional computers, the brain operates as a nonlinear dynamic system that can handle massive amounts of data through various input channels, filter information, store key information in short- and long-term memory, learn by analyzing incoming and stored data, make decisions in a constantly changing environment, and do all of this while consuming very little power.

The Human Brain Project [emphasis mine], a large-scale research project launched in 2013, aims to create a comprehensive, detailed, and biologically realistic simulation of the human brain, known as the Virtual Brain. One of the goals of the project is to develop new brain-inspired computing technologies, such as neuromorphic computing.

The Human Brain Project has been funded by the European Union (1B Euros over 10 years starting in 2013 and sunsetting in 2023). From the Human Brain Project Media Invite,

The final Human Brain Project Summit 2023 will take place in Marseille, France, from March 28-31, 2023.

As the ten-year European Flagship Human Brain Project (HBP) approaches its conclusion in September 2023, the final HBP Summit will highlight the scientific achievements of the project at the interface of neuroscience and technology and the legacy that it will leave for the brain research community. …

One last excerpt from the essay,

Neuromorphic computing is a radical reimagining of computer architecture at the transistor level, modeled after the structure and function of biological neural networks in the brain. This computing paradigm aims to build electronic systems that attempt to emulate the distributed and parallel computation of the brain by combining processing and memory in the same physical location.

This is unlike traditional computing, which is based on von Neumann systems consisting of three different units: processing unit, I/O unit, and storage unit. This stored program architecture is a model for designing computers that uses a single memory to store both data and instructions, and a central processing unit to execute those instructions. This design, first proposed by mathematician and computer scientist John von Neumann, is widely used in modern computers and is considered to be the standard architecture for computer systems and relies on a clear distinction between memory and processing.
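The data-movement contrast can be made concrete with a toy tally of memory-bus transfers for a single matrix-vector product. This is my own illustrative sketch, not from Berger's essay; the counting rules (no caching, one transfer per operand) are simplifying assumptions.

```python
# Toy tally of memory-bus traffic for one matrix-vector product,
# contrasting a von Neumann design with an in-memory design.
# Transfer counts are illustrative bookkeeping, not measurements.

def von_neumann_transfers(rows: int, cols: int) -> int:
    """Every weight and input crosses the bus to the CPU; every
    result travels back to memory (no caching assumed)."""
    weight_fetches = rows * cols   # each weight read over the bus
    input_fetches = rows * cols    # inputs re-fetched per row
    result_writes = rows           # outputs written back
    return weight_fetches + input_fetches + result_writes

def in_memory_transfers(rows: int, cols: int) -> int:
    """Weights stay where they are stored; only the inputs go in
    and the outputs come out."""
    return cols + rows

print(von_neumann_transfers(512, 512))  # 524800 bus transfers
print(in_memory_transfers(512, 512))    # 1024 transfers
```

Even this crude count shows why the red lines in the diagram below mark the bottleneck: the bus traffic scales with the number of weights, not with the size of the inputs and outputs.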

I found the diagram Berger included, contrasting von Neumann’s design with a neuromorphic design, illuminating,

A graphical comparison of the von Neumann and Neuromorphic architecture. Left: The von Neumann architecture used in traditional computers. The red lines depict the data communication bottleneck in the von Neumann architecture. Right: A graphical representation of a general neuromorphic architecture. In this architecture, the processing and memory are decentralized across different neuronal units (the yellow nodes) and synapses (the black lines connecting the nodes), creating a naturally parallel computing environment via the mesh-like structure. (Source: DOI: 10.1109/IS.2016.7737434) [downloaded from https://www.nanowerk.com/spotlight/spotid=62353.php]

Berger offers a very good overview and I recommend reading his February 13, 2023 essay on neuromorphic engineering with one proviso, Note: A link has been removed,

Many researchers in this field see memristors as a key device component for neuromorphic engineering. Memristor – or memory resistor – devices are non-volatile nanoelectronic memory devices that were first theorized [emphasis mine] by Leon Chua in the 1970’s. However, it was some thirty years later that the first practical device was fabricated in 2008 by a group led by Stanley Williams [sometimes cited as R. Stanley Williams] at HP Research Labs.

Chua wasn’t the first, as he himself has noted. Chua arrived at his theory independently in the 1970s, but Bernard Widrow had theorized what he called a ‘memistor’ in the 1960s. In fact, “Memristors: they are older than you think” is a May 22, 2012 posting that featured the article “Two centuries of memristors” by Themistoklis Prodromakis, Christofer Toumazou and Leon Chua published in Nature Materials.

Most of us try to get it right but we don’t always succeed. It’s always good practice to read everyone (including me) with a little skepticism.

Combining silicon with metal oxide memristors to create powerful, low-energy intensive chips enabling AI in portable devices

In this one week, I’m publishing my first stories (see also June 13, 2023 posting “ChatGPT and a neuromorphic [brainlike] synapse“) where artificial intelligence (AI) software is combined with a memristor (hardware component) for brainlike (neuromorphic) computing.

Here’s more about some of the latest research from a March 30, 2023 news item on ScienceDaily,

Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC [University of Southern California] Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).

A March 29, 2023 University of Southern California (USC) news release (also on EurekAlert), which originated the news item, contextualizes the research and delves further into the topic of neuromorphic hardware,

For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware presents an increasingly severe problem, and few have patience for it.

Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices.  Yang’s work falls into the middle—focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation. 

Their new paper in Nature focuses on the understanding of fundamental physics that leads to a drastic increase in the memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT [Massachusetts Institute of Technology], and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. This demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors (Miao Hu, Qiangfei Xia, and Glenn Ge) to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) among all known memory technologies thus far. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips are not just for memory but also for the processor, and millions of them in a small chip, working in parallel to rapidly run your AI tasks, could require only a small battery.
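As a quick back-of-envelope check of that “11 bits” figure: the number of bits a single device stores is the base-2 logarithm of the number of reliably distinguishable conductance levels. A small sketch (my arithmetic, not from the release):

```python
import math

# 11 bits per device implies 2**11 reliably distinguishable
# conductance levels -- hence the paper's "thousands of levels".
levels = 2 ** 11
print(levels)  # 2048

# Conversely, a device with 2,048 stable levels stores log2(2048) bits.
bits = math.log2(levels)
print(bits)    # 11.0
```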

The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors in order to create powerful but low-energy intensive chips. The technique focuses on using the positions of atoms to represent information rather than the number of electrons (which is the current technique involved in computations on chips). The positions of the atoms offer a compact and stable way to store more information in an analog, instead of digital fashion. Moreover, the information can be processed where it is stored instead of being sent to one of the few dedicated ‘processors,’ eliminating the so-called ‘von Neumann bottleneck’ existing in current computing systems.  In this way, says Yang, computing for AI is “more energy efficient with a higher throughput.”
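Processing information where it is stored is exactly what a memristor crossbar does physically: each input voltage is multiplied by a stored conductance (Ohm’s law), and the resulting currents are summed along each column wire (Kirchhoff’s current law), yielding a matrix-vector product in one analog step. A minimal numerical sketch of that operation, with made-up conductance values:

```python
# Sketch of analog matrix-vector multiplication in a memristor crossbar.
# Each stored conductance G[i][j] multiplies the input voltage V[i]
# (Ohm's law, I = G * V); currents summed down each column (Kirchhoff's
# current law) give one output element -- computation happens in memory.

def crossbar_mvm(G, V):
    rows, cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(rows)) for j in range(cols)]

# Hypothetical conductances (arbitrary units) and input voltages.
G = [[0.1, 0.5],
     [0.3, 0.2],
     [0.4, 0.1]]
V = [1.0, 2.0, 0.5]

print(crossbar_mvm(G, V))  # column currents = G^T applied to V
```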

How it works: 

Yang explains that the electrons manipulated in traditional chips are “light.” This lightness makes them prone to moving around and more volatile. Instead of storing memory through electrons, Yang and collaborators are storing memory in full atoms. Here is why this memory matters. Normally, says Yang, when one turns off a computer, the information in memory is gone—but if you need that memory to run a new computation and your computer needs the information all over again, you have lost both time and energy. This new method, focusing on activating atoms rather than electrons, does not require battery power to maintain stored information. Similar scenarios arise in AI computations, where a stable memory capable of high information density is crucial. Yang imagines this new tech may enable powerful AI capability in edge devices, such as Google Glasses, which he says previously suffered from a frequent recharging issue.

Further, by converting chips to rely on atoms as opposed to electrons, chips become smaller.  Yang adds that with this new method, there is more computing capacity at a smaller scale. And this method, he says, could offer “many more levels of memory to help increase information density.” 

To put it in context, right now, ChatGPT is running on a cloud. The new innovation, followed by some further development, could put the power of a mini version of ChatGPT in everyone’s personal device. It could make such high-powered tech more affordable and accessible for all sorts of applications. 

Here’s a link to and a citation for the paper,

Thousands of conductance levels in memristors integrated on CMOS by Mingyi Rao, Hao Tang, Jiangbin Wu, Wenhao Song, Max Zhang, Wenbo Yin, Ye Zhuo, Fatemeh Kiani, Benjamin Chen, Xiangqi Jiang, Hefei Liu, Hung-Yu Chen, Rivu Midya, Fan Ye, Hao Jiang, Zhongrui Wang, Mingche Wu, Miao Hu, Han Wang, Qiangfei Xia, Ning Ge, Ju Li & J. Joshua Yang. Nature volume 615, pages 823–829 (2023) DOI: https://doi.org/10.1038/s41586-023-05759-5 Issue Date: 30 March 2023 Published: 29 March 2023

This paper is behind a paywall.

ChatGPT and a neuromorphic (brainlike) synapse

I was teaching an introductory course about nanotechnology back in 2014 and, at the end of a session, stated (more or less) that the full potential for artificial intelligence (software) wasn’t going to be realized until the hardware (memristors) was part of the package. (It’s interesting to revisit that in light of the recent uproar around AI, covered in my May 25, 2023 posting, which offered a survey of the situation.)

One of the major problems with artificial intelligence is its memory. The other is energy consumption. Both problems could be addressed by the integration of memristors into the hardware, giving rise to neuromorphic (brainlike) computing. (For those who don’t know, the human brain in addition to its capacity for memory is remarkably energy efficient.)

This is the first time I’ve seen research into memristors where software has been included. Disclaimer: There may be a lot more research of this type; I just haven’t seen it before. A March 24, 2023 news item on ScienceDaily announces research from Korea,

ChatGPT’s impact extends beyond the education sector and is causing significant changes in other areas. The AI language model is recognized for its ability to perform various tasks, including paper writing, translation, coding, and more, all through question-and-answer-based interactions. The AI system relies on deep learning, which requires extensive training to minimize errors, resulting in frequent data transfers between memory and processors. However, traditional digital computer systems’ von Neumann architecture separates the storage and computation of information, resulting in increased power consumption and significant delays in AI computations. Researchers have developed semiconductor technologies suitable for AI applications to address this challenge.

A March 24, 2023 Pohang University of Science & Technology (POSTECH) press release (also on EurekAlert), which originated the news item, provides more detail,

A research team at POSTECH, led by Professor Yoonyoung Chung (Department of Electrical Engineering, Department of Semiconductor Engineering), Professor Seyoung Kim (Department of Materials Science and Engineering, Department of Semiconductor Engineering), and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), has developed a high-performance AI semiconductor device [emphasis mine] using indium gallium zinc oxide (IGZO), an oxide semiconductor widely used in OLED [organic light-emitting diode] displays. The new device has proven to be excellent in terms of performance and power efficiency.

Efficient AI operations, such as those of ChatGPT, require computations to occur within the memory responsible for storing information. Unfortunately, previous AI semiconductor technologies were limited in meeting all the requirements, such as linear and symmetric programming and uniformity, to improve AI accuracy.

The research team chose IGZO as a key material for AI computations because it can be mass-produced and provides uniformity, durability, and computing accuracy. This compound comprises indium, gallium, zinc, and oxygen in a fixed ratio, and its excellent electron mobility and leakage current properties have made it a standard backplane material for OLED displays.

Using this material, the researchers developed a novel synapse device [emphasis mine] composed of two transistors interconnected through a storage node. The precise control of this node’s charging and discharging speed has enabled the AI semiconductor to meet the diverse performance metrics required for high-level performance. Furthermore, applying synaptic devices to a large-scale AI system requires the output current of synaptic devices to be minimized. The researchers confirmed the possibility of utilizing the ultra-thin film insulators inside the transistors to control the current, making them suitable for large-scale AI.

The researchers used the newly developed synaptic device to train and classify handwritten data, achieving a high accuracy of over 98%, [emphasis mine] which verifies its potential application in high-accuracy AI systems in the future.

Professor Chung explained, “The significance of my research team’s achievement is that we overcame the limitations of conventional AI semiconductor technologies that focused solely on material development. To do this, we utilized materials already in mass production. Furthermore, linear and symmetrical programming characteristics were obtained through a new structure using two transistors as one synaptic device. Thus, our successful development and application of this new AI semiconductor technology show great potential to improve the efficiency and accuracy of AI.”
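Why linear and symmetric programming matters can be seen with a toy weight-update model: if each potentiating or depressing pulse changes the conductance by a fixed step, equal numbers of up and down pulses cancel exactly; if the step depends on the current state (a common nonideality in real devices), they do not, and on-chip training accuracy suffers. This is my own generic illustration, not a model of the POSTECH device; the nonlinear update rule and constants are invented.

```python
# Toy model of why linear, symmetric weight updates matter for on-chip
# training. Ideal device: fixed step per pulse. Nonlinear device: the
# step shrinks as conductance approaches its bounds (invented model).

G_MAX = 1.0

def linear_update(g, direction, step=0.01):
    # Same step size regardless of the current conductance.
    return min(max(g + direction * step, 0.0), G_MAX)

def nonlinear_update(g, direction, step=0.01):
    if direction > 0:  # potentiation saturates near G_MAX
        return min(g + step * (1 - g / G_MAX), G_MAX)
    return max(g - step * (g / G_MAX), 0.0)  # depression saturates near 0

def apply_pulses(update, g, pulses):
    for d in pulses:
        g = update(g, d)
    return g

# 50 up-pulses then 50 down-pulses should return the weight to its start.
pulses = [+1] * 50 + [-1] * 50
print(apply_pulses(linear_update, 0.5, pulses))     # back at ~0.5
print(apply_pulses(nonlinear_update, 0.5, pulses))  # drifts away from 0.5
```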

This study was published last week [March 2023] on the inside back cover of Advanced Electronic Materials [paper edition] and was supported by the Next-Generation Intelligent Semiconductor Technology Development Program through the National Research Foundation, funded by the Ministry of Science and ICT [Information and Communication Technologies] of Korea.

Here’s a link to and a citation for the paper,

Highly Linear and Symmetric Analog Neuromorphic Synapse Based on Metal Oxide Semiconductor Transistors with Self-Assembled Monolayer for High-Precision Neural Network Computation by Seongmin Park, Suwon Seong, Gilsu Jeon, Wonjae Ji, Kyungmi Noh, Seyoung Kim, Yoonyoung Chung. Advanced Electronic Materials Volume 9, Issue 3, March 2023, 2200554 DOI: https://doi.org/10.1002/aelm.202200554 First published online: 29 December 2022

This paper is open access.

Also, there is another approach to using materials such as indium gallium zinc oxide (IGZO) for a memristor: using biological cells, as my June 6, 2023 posting, which features work on biological neural networks (BNNs), suggests in relation to creating robots that can perform brainlike computing.

Fluidic memristor with neuromorphic (brainlike) functions

I think this is the first time I’ve had occasion to feature a fluidic memristor. From a January 13, 2023 news item on Nanowerk, Note: Links have been removed,

Neuromorphic devices have attracted increasing attention because of their potential applications in neuromorphic [brainlike] computing, intelligent sensing, brain-machine interfaces and neuroprosthetics. However, most of the neuromorphic functions realized so far are based on mimicking electric pulses with solid-state devices. Mimicking the functions of chemical synapses, especially neurotransmitter-related functions, is still a challenge in this research area.

In a study published in Science (“Neuromorphic functions with a polyelectrolyte-confined fluidic memristor”), the research group led by Prof. YU Ping and MAO Lanqun from the Institute of Chemistry of the Chinese Academy of Sciences developed a polyelectrolyte-confined fluidic memristor (PFM), which could emulate diverse electric pulses with ultralow energy consumption. Moreover, benefitting from the fluidic nature of the PFM, chemical-regulated electric pulses and chemical-electric signal transduction could also be emulated.

A January 12, 2023 Chinese Academy of Science (CAS) press release, which originated the news item, offers more technical detail,

The researchers first fabricated the polyelectrolyte-confined fluidic channel by surface-initiated atom transfer radical polymerization. By systematically studying the current-voltage relationship, they found that the fabricated fluidic channel behaved as a memristor, which they defined as the PFM. The ion memory originated from the relatively slow diffusion dynamics of anions into and out of the polyelectrolyte brushes.

The PFM could emulate short-term plasticity (STP) patterns, including paired-pulse facilitation and paired-pulse depression. These functions can be operated at voltages and energy consumption as low as those of biological systems, suggesting potential applications in bioinspired sensorimotor implementation, intelligent sensing and neuroprosthetics.
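Paired-pulse facilitation can be captured in a few lines: a residual “memory” variable decays exponentially between pulses, so a second pulse arriving soon after the first rides on the leftover state and produces a larger response. The sketch below is a generic textbook-style model of this behavior, not the PFM’s actual ion-diffusion dynamics; the time constant is invented.

```python
import math

# Generic short-term-plasticity toy: the residual effect of a pulse
# decays exponentially with time constant TAU; a second pulse that
# arrives before the residual fades is facilitated.

TAU = 50.0  # decay time constant, arbitrary units (invented)

def paired_pulse_response(interval):
    """Response to the second of two identical pulses, relative to the
    first (ratio > 1 means facilitation)."""
    residual = math.exp(-interval / TAU)  # what's left of pulse 1
    return (1.0 + residual) / 1.0

# A short interval leaves more residual state, so facilitation is larger.
print(paired_pulse_response(10))    # close-together pulses: strong PPF
print(paired_pulse_response(500))   # far-apart pulses: ratio near 1.0
```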

The PFM could also emulate chemical regulation of STP electric pulses. Based on the interaction between the polyelectrolyte and counterions, the retention time could be regulated in different electrolytes.

More importantly, in a physiological electrolyte (i.e., phosphate-buffered saline solution, pH 7.4), the PFM could emulate the regulation of memory by adenosine triphosphate (ATP), demonstrating the possibility of regulating synaptic plasticity with a neurotransmitter. Furthermore, based on the interaction between polyelectrolytes and counterions, chemical-electric signal transduction was accomplished with the PFM, which is a key step towards the fabrication of artificial chemical synapses.

With its structural emulation of ion channels, the PFM is versatile and easily interfaces with biological systems, paving the way to building neuromorphic devices with advanced functions by introducing rich chemical designs. This study provides a new way to interface chemistry with neuromorphic devices.

Here’s a link to and a citation for the paper,

Neuromorphic functions with a polyelectrolyte-confined fluidic memristor by Tianyi Xiong, Changwei Li, Xiulan He, Boyang Xie, Jianwei Zong, Yanan Jiang, Wenjie Ma, Fei Wu, Junjie Fei, Ping Yu, and Lanqun Mao. Science 12 Jan 2023 Vol 379, Issue 6628 pp. 156-161 DOI: 10.1126/science.adc9150

This paper is behind a paywall.

Analogue memristor for next-generation brain-mimicking (neuromorphic) computing

This research into an analogue memristor comes from The Korea Institute of Science and Technology (KIST) according to a September 20, 2022 news item on Nanowerk, Note: A link has been removed,

Neuromorphic computing technology, which mimics the human brain, has emerged to overcome the excessive power consumption of the existing von Neumann computing method. Implementing a semiconductor device that uses the brain’s information-transmission method, in which signals pass between neurons when a neuron generates a spike, requires a high-performance analog artificial synapse device capable of expressing a range of synaptic connection strengths.

However, in the conventional resistance-variable memory devices widely used as artificial synapses, the electric field increases as the conductive filament grows, creating a feedback effect that accelerates filament growth. It is therefore challenging to achieve considerable plasticity while maintaining gradual, analog resistance variation in filament-type devices.

A team at the Korea Institute of Science and Technology (KIST), led by Dr. YeonJoo Jeong at the Center for Neuromorphic Engineering, addressed the chronic limitations of memristors (neuromorphic semiconductor devices): analog synaptic characteristics, plasticity and information preservation. The team announced the development of an artificial synaptic semiconductor device capable of highly reliable neuromorphic computing (Nature Communications, “Cluster-type analogue memristor by engineering redox dynamics for high-performance neuromorphic computing”).

Caption: Concept image of the article Credit: Korea Institute of Science and Technology (KIST)

A September 20, 2022 (Korea) National Research Council of Science & Technology press release on EurekAlert, which originated the news item, delves further into the research,

The KIST research team fine-tuned the redox properties of active electrode ions to address the small synaptic plasticity that hinders the performance of existing neuromorphic semiconductor devices. Furthermore, various transition metals were doped into the synaptic device to control the reduction probability of active electrode ions. The team discovered that a high ion-reduction probability is a critical variable in the development of high-performance artificial synaptic devices.

The research team therefore introduced titanium, a transition metal with a high ion-reduction probability, into an existing artificial synaptic device. The device maintains its analog characteristics and achieves plasticity comparable to that of a biological synapse, with approximately a fivefold difference between the high and low resistance states. Furthermore, the team developed a high-performance neuromorphic semiconductor that is approximately 50 times more efficient.

Additionally, due to the high alloy formation reaction concerning the doped titanium transition metal, the information retention increased up to 63 times compared with the existing artificial synaptic device. Furthermore, brain functions, including long-term potentiation and long-term depression, could be more precisely simulated.
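Long-term potentiation (LTP) and depression (LTD) in such devices are typically characterized by applying trains of identical voltage pulses and reading the conductance after each one. The sketch below is a generic illustration, not KIST’s device model: it assumes linear steps within a conductance window whose high/low ratio is 5, echoing the roughly fivefold resistance difference reported above; the step size and pulse count are invented.

```python
# Toy LTP/LTD characterization: a train of potentiating pulses steps the
# conductance from G_MIN up to G_MAX, then depressing pulses step it
# back down. The 5x window mirrors the ~5x high/low ratio in the
# article; step size and pulse count are invented for illustration.

G_MIN, G_MAX = 1.0, 5.0   # arbitrary units; G_MAX / G_MIN = 5
N_PULSES = 40
STEP = (G_MAX - G_MIN) / N_PULSES

def pulse_train(g):
    """Return the conductance trace for an LTP then an LTD pulse train."""
    trace = [g]
    for _ in range(N_PULSES):     # potentiation: step up, clamp at G_MAX
        g = min(g + STEP, G_MAX)
        trace.append(g)
    for _ in range(N_PULSES):     # depression: step down, clamp at G_MIN
        g = max(g - STEP, G_MIN)
        trace.append(g)
    return trace

trace = pulse_train(G_MIN)
print(max(trace) / min(trace))   # dynamic range of the trace, ~5
```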

The team implemented an artificial neural network learning pattern using the developed artificial synaptic device and attempted artificial intelligence image recognition learning. As a result, the error rate was reduced by more than 60% compared with the existing artificial synaptic device; additionally, the handwriting image pattern (MNIST) recognition accuracy increased by more than 69%. The research team confirmed the feasibility of a high-performance neuromorphic computing system through this improved artificial synaptic device.

Dr. Jeong of KIST stated, “This study drastically improved the synaptic range of motion and information preservation, which were the greatest technical barriers of existing synaptic mimics.” “In the developed artificial synapse device, the device’s analog operation area to express the synapse’s various connection strengths has been maximized, so the performance of brain simulation-based artificial intelligence computing will be improved.” Additionally, he mentioned, “In the follow-up research, we will manufacture a neuromorphic semiconductor chip based on the developed artificial synapse device to realize a high-performance artificial intelligence system, thereby further enhancing competitiveness in the domestic system and artificial intelligence semiconductor field.”

Here’s a link to and a citation for the paper,

Cluster-type analogue memristor by engineering redox dynamics for high-performance neuromorphic computing by Jaehyun Kang, Taeyoon Kim, Suman Hu, Jaewook Kim, Joon Young Kwak, Jongkil Park, Jong Keuk Park, Inho Kim, Suyoun Lee, Sangbum Kim & YeonJoo Jeong. Nature Communications volume 13, Article number: 4040 (2022) DOI: https://doi.org/10.1038/s41467-022-31804-4 Published: 12 July 2022

This paper is open access.