Category Archives: neuromorphic engineering

Honey-based neuromorphic chips for brainlike computers?

Photo by Mariana Ibanez on Unsplash, courtesy of Washington State University

An April 5, 2022 news item on Nanowerk explains the connection between honey and a neuromorphic (brainlike) computer chip, Note: Links have been removed,

Honey might be a sweet solution for developing environmentally friendly components for neuromorphic computers, systems designed to mimic the neurons and synapses found in the human brain.

Hailed by some as the future of computing, neuromorphic systems are much faster and use much less power than traditional computers. Washington State University engineers have demonstrated one way to make them more organic too.

In a study published in Journal of Physics D (“Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems”), the researchers show that honey can be used to make a memristor, a component similar to a transistor that can not only process but also store data in memory.

An April 5, 2022 Washington State University (WSU) news release (also on EurekAlert) by Sara Zaske, which originated the news item, describes the purpose for the work and details about making chips from honey,

“This is a very small device with a simple structure, but it has very similar functionalities to a human neuron,” said Feng Zhao, associate professor of WSU’s School of Engineering and Computer Science and corresponding author on the study. “This means if we can integrate millions or billions of these honey memristors together, then they can be made into a neuromorphic system that functions much like a human brain.”

For the study, Zhao and first author Brandon Sueoka, a WSU graduate student in Zhao’s lab, created memristors by processing honey into a solid form and sandwiching it between two metal electrodes, making a structure similar to a human synapse. They then tested the honey memristors’ ability to mimic the work of synapses, with high switching on and off speeds of 100 and 500 nanoseconds, respectively. The memristors also emulated the synapse functions known as spike-timing dependent plasticity and spike-rate dependent plasticity, which are responsible for learning processes in human brains and retaining new information in neurons.
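
To make the synapse analogy concrete, here is a minimal sketch (in Python) of the pair-based spike-timing dependent plasticity rule the memristor is reported to emulate. The time constants and learning rates are illustrative assumptions, not the device’s measured values.

```python
import math

# Assumed plasticity parameters, for illustration only
TAU = 20.0      # ms, width of the plasticity time window
A_PLUS = 0.05   # scale of weight increase (potentiation)
A_MINUS = 0.06  # scale of weight decrease (depression)

def stdp_update(weight, t_pre, t_post):
    """Strengthen the synapse when the pre-synaptic spike precedes the
    post-synaptic spike (a causal pairing); weaken it otherwise."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiation
        weight += A_PLUS * math.exp(-dt / TAU)
    else:        # post fires before pre -> depression
        weight -= A_MINUS * math.exp(dt / TAU)
    return min(max(weight, 0.0), 1.0)  # clamp to a device-like range

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # causal pair strengthens w
print(round(w, 3))
```

Spike-rate dependent plasticity works analogously, except the weight change tracks how often the neurons fire rather than the precise timing of individual spike pairs.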

The WSU engineers created the honey memristors on a micro-scale, so they are about the size of a human hair. The research team led by Zhao plans to develop them on a nanoscale, about 1/1000 of a human hair, and bundle many millions or even billions together to make a full neuromorphic computing system.

Currently, conventional computer systems are based on what’s called the von Neumann architecture. Named after its creator, this architecture involves an input, usually from a keyboard and mouse, and an output, such as the monitor. It also has a CPU, or central processing unit, and RAM, or memory storage. Transferring data through all these mechanisms from input to processing to memory to output takes a lot of power, at least compared to the human brain, Zhao said. For instance, the Fugaku supercomputer uses upwards of 28 megawatts (28 million watts) to run, while the brain uses only around 10 to 20 watts.
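
The scale of that gap is worth spelling out; here’s a quick back-of-envelope check of the figures quoted above:

```python
fugaku_watts = 28e6   # ~28 megawatts
brain_watts = 20      # upper end of the 10 to 20 watt estimate
print(f"Fugaku draws roughly {fugaku_watts / brain_watts:,.0f} times the brain's power")
# -> roughly 1,400,000 times
```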

The human brain has more than 100 billion neurons with more than 1,000 trillion synapses, or connections, among them. Each neuron can both process and store data, which makes the brain much more efficient than a traditional computer, and developers of neuromorphic computing systems aim to mimic that structure.

Several companies, including Intel and IBM, have released neuromorphic chips which have the equivalent of more than 100 million “neurons” per chip, but this is not yet near the number in the brain. Many developers are also still using the same nonrenewable and toxic materials that are currently used in conventional computer chips.

Many researchers, including Zhao’s team, are searching for biodegradable and renewable solutions for use in this promising new type of computing. Zhao is also leading investigations into using proteins and other sugars such as those found in Aloe vera leaves in this capacity, but he sees strong potential in honey.

“Honey does not spoil,” he said. “It has a very low moisture concentration, so bacteria cannot survive in it. This means these computer chips will be very stable and reliable for a very long time.”

The honey memristor chips developed at WSU should tolerate the lower levels of heat generated by neuromorphic systems, which do not get as hot as traditional computers. The honey memristors will also cut down on electronic waste.

“When we want to dispose of devices using computer chips made of honey, we can easily dissolve them in water,” he said. “Because of these special properties, honey is very useful for creating renewable and biodegradable neuromorphic systems.”

This also means, Zhao cautioned, that just like conventional computers, users will still have to avoid spilling their coffee on them.

Nice note of humour at the end. There are a few questions, I wonder if the variety of honey (clover, orange blossom, blackberry, etc.) has an impact on the chip’s speed and/or longevity. Also, if someone spilled coffee and the chip melted and a child decided to lap it up, what would happen?

Here’s a link to and a citation for the paper,

Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems by Brandon Sueoka and Feng Zhao. Journal of Physics D: Applied Physics, Volume 55, Number 22, 225105. Published 7 March 2022. © 2022 IOP Publishing Ltd

This paper is behind a paywall.

Save energy with neuromorphic (brainlike) hardware

It seems the appetite for computing power is bottomless, which presents a problem in a world where energy resources are increasingly constrained. A May 24, 2022 news item on ScienceDaily announces research into neuromorphic computing which hints that the energy efficiency long promised by the technology may be realized in the foreseeable future,

For the first time TU Graz’s [Graz University of Technology; Austria] Institute of Theoretical Computer Science and Intel Labs demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy running on neuromorphic hardware than on non-neuromorphic hardware. The new research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to create chips that function similarly to those in the biological brain.

Rich Uhlig, managing director of Intel Labs, holds one of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

A May 24, 2022 Graz University of Technology (TU Graz) press release (also on EurekAlert), which originated the news item, delves further into the research, Note: Links have been removed,

The research was funded by The Human Brain Project (HBP), one of the largest research projects in the world with more than 500 scientists and engineers across Europe studying the human brain. The results of the research are published in the research paper “Memory for AI Applications in Spike-based Neuromorphic Hardware” [sic] (DOI 10.1038/s42256-022-00480-w), which is published in Nature Machine Intelligence.

Human brain as a role model

Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more energy than an average energy-saving light bulb.

In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.

Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware

“Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.

“Intel’s Loihi research chips promise to bring gains in AI, especially by lowering their high energy cost,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “Our work with TU Graz provides more evidence that neuromorphic technology can improve the energy efficiency of today’s deep learning workloads by re-thinking their implementation from the perspective of biology.”

Mimicking human short-term memory

In their neuromorphic network, the group reproduced a presumed memory mechanism of the brain, as Wolfgang Maass, Philipp Plank’s doctoral supervisor at the Institute of Theoretical Computer Science, explains: “Experimental studies have shown that the human brain can store information for a short period of time even without neural activity, namely in so-called ‘internal variables’ of neurons. Simulations suggest that a fatigue mechanism of a subset of neurons is essential for this short-term memory.”

Direct proof is lacking because these internal variables cannot yet be measured, but it does mean that the network only needs to test which neurons are currently fatigued to reconstruct what information it has previously processed. In other words, previous information is stored in the non-activity of neurons, and non-activity consumes the least energy.
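
For the curious, here is a minimal sketch of how that fatigue mechanism can act as a memory. It is my own toy model, not the paper’s network: each neuron’s firing threshold jumps after it spikes and then decays slowly, so which neurons are fatigued encodes what was recently processed, even while all activity is silent.

```python
import numpy as np

N = 4                      # one neuron per symbol, purely illustrative
threshold = np.ones(N)     # baseline firing thresholds
FATIGUE = 0.8              # threshold jump after a spike (assumed)
DECAY = 0.995              # per-step decay of fatigue (assumed)

def present(symbol):
    threshold[symbol] += FATIGUE       # the neuron fires, then fatigues

def idle(steps):
    for _ in range(steps):             # no spikes: fatigue decays slowly
        threshold[:] = 1.0 + (threshold - 1.0) * DECAY

def recall():
    return int(np.argmax(threshold))   # most fatigued neuron = stored symbol

present(2)       # "store" symbol 2 by making neuron 2 spike
idle(200)        # a silent delay period: no neural activity at all
print(recall())  # -> 2, read back from the fatigue state alone
```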

Symbiosis of recurrent and feed-forward network

The researchers link two types of deep learning networks for this purpose. Feedback neural networks are responsible for “short-term memory.” Many such so-called recurrent modules filter out possible relevant information from the input signal and store it. A feed-forward network then determines which of the relationships found are very important for solving the task at hand. Meaningless relationships are screened out, and the neurons fire only in those modules where relevant information has been found. This process ultimately leads to energy savings.
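
Here is a crude sketch of that division of labour, with every number invented for illustration: recurrent modules each hold a candidate feature, and a feed-forward stage scores them so that only the few relevant modules keep firing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modules = 8
stored = rng.normal(size=(n_modules, 16))   # features held by recurrent modules
relevance = rng.normal(size=n_modules)      # feed-forward scores per module

active = relevance > np.quantile(relevance, 0.75)   # keep only the top quarter
output = stored[active].sum(axis=0)                 # silent modules cost nothing
print(f"{active.sum()} of {n_modules} modules stay active")
```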

“Recurrent neural structures are expected to provide the greatest gains for applications running on neuromorphic hardware in the future,” said Davies. “Neuromorphic hardware like Loihi is uniquely suited to facilitate the fast, sparse and unpredictable patterns of network activity that we observe in the brain and need for the most energy efficient AI applications.”

This research was financially supported by Intel and the European Human Brain Project, which connects neuroscience, medicine, and brain-inspired technologies in the EU. For this purpose, the project is creating a permanent digital research infrastructure, EBRAINS. This research work is anchored in the Fields of Expertise Human and Biotechnology and Information, Communication & Computing, two of the five Fields of Expertise of TU Graz.

Here’s a link to and a citation for the paper,

A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware by Arjun Rao, Philipp Plank, Andreas Wild & Wolfgang Maass. Nature Machine Intelligence (2022) DOI: https://doi.org/10.1038/s42256-022-00480-w Published: 19 May 2022

This paper is behind a paywall.

For anyone interested in the EBRAINS project, here’s a description from their About page,

EBRAINS provides digital tools and services which can be used to address challenges in brain research and brain-inspired technology development. Its components are designed with, by, and for researchers. The tools assist scientists to collect, analyse, share, and integrate brain data, and to perform modelling and simulation of brain function.

EBRAINS’ goal is to accelerate the effort to understand human brain function and disease.

This EBRAINS research infrastructure is the entry point for researchers to discover EBRAINS services. The services are being developed and powered by the EU-funded Human Brain Project.

You can register to use the EBRAINS research infrastructure on the EBRAINS website.

One last note: the Human Brain Project is a major European Union (EU)-funded science initiative (1 billion euros) announced in 2013, with the money to be paid out over 10 years.

Simulating neurons and synapses with memristive devices

I’ve been meaning to get to this research on ‘neuromorphic memory’ for a while. From a May 20, 2022 news item on Nanowerk,

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated.

However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge.

A May 20, 2022 Korea Advanced Institute of Science and Technology (KAIST) press release (also on EurekAlert), which originated the news item, delves further into the research,

To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

The artificial synaptic devices studied previously have often been used to accelerate parallel computations, much as commercial graphics cards are, which differs clearly from the operational mechanisms of the human brain. The research team implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
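
A toy simulation may help picture how one cell can be both neuron and synapse. This is my own illustration of the volatile/non-volatile pairing described above, not KAIST’s device model: a leaky, volatile state plays the threshold-switch neuron, while a persistent weight plays the phase-change synapse, and each spike feeds back to strengthen that weight.

```python
class UnitCell:
    def __init__(self):
        self.v = 0.0   # volatile "neuron" state: leaks away each step
        self.w = 0.5   # non-volatile "synapse" weight: persists

    def step(self, stimulus):
        self.v = 0.7 * self.v + self.w * stimulus   # leaky integration
        if self.v > 1.0:                 # threshold switch fires: a spike
            self.v = 0.0                 # volatile state resets
            self.w = min(self.w + 0.05, 1.0)  # spike strengthens the synapse
            return True
        return False

cell = UnitCell()
spikes = [cell.step(1.0) for _ in range(30)]
# Under repeated stimulation the spikes arrive sooner and sooner, because
# the persistent weight grows: short-term dynamics ride on long-term memory.
print(spikes.count(True), round(cell.w, 2))
```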

Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”

Here’s a link to and a citation for the paper,

Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse by Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im & Keon Jae Lee. Nature Communications volume 13, Article number: 2811 (2022) DOI https://doi.org/10.1038/s41467-022-30432-2 Published 19 May 2022

This paper is open access.

Memristive control of mutual spin

It may be my imagination but it seems I’m stumbling across more research on neuromorphic (brainlike) computing than usual this year. In May 2022 alone I stumbled across three items. Today (August 24, 2022), here’s a May 14, 2022 news item on Nanowerk that describes some work from the University of Gothenburg (Sweden),

Artificial Intelligence (AI) is making it possible for machines to do things that were once considered uniquely human. With AI, computers can use logic to solve problems, make decisions, learn from experience and perform human-like tasks. However, they still cannot do this as effectively and energy efficiently as the human brain.

Research conducted with support from the EU-funded TOPSPIN and SpinAge projects has brought scientists a step closer to achieving this goal.

“Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades,” observes Prof. Johan Åkerman of TOPSPIN project host University of Gothenburg, Sweden. “Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” continues Prof. Åkerman, who is also the founder and CEO of SpinAge project partner NanOsc, also in Sweden.

A May 13, 2022 CORDIS press release, which originated the news item, provides more detail,

The research team succeeded in combining a memory function and a calculation function in one component for the very first time. The achievement is described in their study published in the journal ‘Nature Materials’. The memory and calculation functions were combined by linking oscillator networks and memristors – the two main tools needed to carry out advanced calculations. Oscillators are described as oscillating circuits capable of performing calculations. Memristors, short for memory resistors, are electronic devices whose resistance can be programmed and remains stored. In other words, the memristor’s resistance performs a memory function by remembering what value it had when the device was powered on.
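
A minimal numerical sketch, under my own assumptions rather than the paper’s device physics, shows the two ingredients working together: oscillators compute by pulling each other into synchrony, while a memristor-like variable holds the programmed coupling strength between runs.

```python
import math

phase = [0.0, 2.5]       # two oscillators with different starting phases
freq = [1.00, 1.05]      # slightly detuned natural frequencies
coupling = 0.3           # memristor state: programmed once, then retained
DT = 0.01

for _ in range(5000):    # Kuramoto-style coupled oscillators
    pull = math.sin(phase[1] - phase[0])
    phase[0] += DT * (freq[0] + coupling * pull)
    phase[1] += DT * (freq[1] - coupling * pull)

diff = (phase[1] - phase[0]) % (2 * math.pi)
print(f"phase difference settles near {diff:.2f} rad")  # locked, not drifting
```

Because the locked state depends on the stored coupling strength, reading out the oscillators’ synchrony amounts to computing with the memory in place.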

A major development

Prof. Åkerman comments on the discovery: “This is an important breakthrough because we show that it is possible to combine a memory function with a calculating function in the same component. These components work more like the brain’s energy-efficient neural networks, allowing them to become important building blocks in future, more brain-like computers.”

As reported in the news item, Prof. Åkerman believes this achievement will lead to the development of technologies that are faster, easier to use and less energy-consuming. Also, the fact that hundreds of components can fit into an area the size of a single bacterium could have a significant impact on smaller applications. “More energy-efficient calculations could lead to new functionality in mobile phones. An example is digital assistants like Siri or Google. Today, all processing is done by servers since the calculations require too much energy for the small size of a phone. If the calculations could instead be performed locally, on the actual phone, they could be done faster and easier without a need to connect to servers.”

Prof. Åkerman concludes: “The more energy-efficiently that cognitive calculations can be performed, the more applications become possible. That’s why our study really has the potential to advance the field.” The TOPSPIN (Topotronic multi-dimensional spin Hall nano-oscillator networks) and SpinAge (Weighted Spintronic-Nano-Oscillator-based Neuromorphic Computing System Assisted by laser for Cognitive Computing) projects end in 2024.

For more information, please see:
TOPSPIN project
SpinAge project

The University of Gothenburg first announced the research in a November 29, 2021 press release on EurekAlert,

Research has long strived to develop computers to work as energy efficiently as our brains. A study, led by researchers at the University of Gothenburg, has succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technologies, everything from mobile phones to self-driving cars.

In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy efficiently.

“Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.

Important breakthrough
Working with a research team at Tohoku University, Åkerman led a study that has now taken an important step forward in achieving this goal. In the study, now published in the highly ranked journal Nature Materials, the researchers succeeded for the first time in linking the two main tools for advanced calculations: oscillator networks and memristors.

Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory. This makes them comparable to memory cells. Integrating the two is a major advancement by the researchers.

“This is an important breakthrough because we show that it is possible to combine a memory function with a calculating function in the same component. These components work more like the brain’s energy-efficient neural networks, allowing them to become important building blocks in future, more brain-like computers.”

Enables energy-efficient technologies
According to Johan Åkerman, the discovery will enable faster, easier to use and less energy consuming technologies in many areas. He feels that it is a huge advantage that the research team has successfully produced the components in an extremely small footprint: hundreds of components fit into an area equivalent to a single bacterium. This can be of particular importance in smaller applications like mobile phones.

“More energy-efficient calculations could lead to new functionality in mobile phones. An example is digital assistants like Siri or Google. Today, all processing is done by servers since the calculations require too much energy for the small size of a phone. If the calculations could instead be performed locally, on the actual phone, they could be done faster and easier without a need to connect to servers.”

He notes self-driving cars and drones as other examples of where more energy-efficient calculations could drive developments.

“The more energy-efficiently that cognitive calculations can be performed, the more applications become possible. That’s why our study really has the potential to advance the field.”

Here’s a link to and a citation for the paper,

Memristive control of mutual spin Hall nano-oscillator synchronization for neuromorphic computing by Mohammad Zahedinejad, Himanshu Fulara, Roman Khymyn, Afshin Houshang, Mykola Dvornik, Shunsuke Fukami, Shun Kanai, Hideo Ohno & Johan Åkerman. Nature Materials volume 21, pages 81–87 (2022) DOI: https://doi.org/10.1038/s41563-021-01153-6 First Published: 29 November 2021 Issue Date: January 2022

This paper is behind a paywall.

Neuromorphic hardware could yield computational advantages for more than just artificial intelligence

Neuromorphic (brainlike) computing doesn’t have to be used for cognitive tasks only, according to a research team at the US Dept. of Energy’s Sandia National Laboratories as per their March 11, 2022 news release by Neal Singer (also on EurekAlert but published March 10, 2022), Note: Links have been removed,

With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories. …

The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.

“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”

In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.

The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.

“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”

Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

Franke models photon and electron radiation to understand their effects on components.

The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”

The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.

Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor. Energy is the limiting factor — more chips can be inserted to run things in parallel, thus faster, but the same electric bill occurs whether it is one computer doing everything or 10,000 computers doing the work. Image courtesy of Sandia National Laboratories.

Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.

There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”

Severa wrote several of the experiment’s algorithms.

Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information.

But because the process only depletes energy from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions.

The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
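
The charge-and-fire behaviour Aimone describes is essentially a leaky integrate-and-fire neuron. Here is a minimal sketch (illustrative constants of my own, not Sandia’s code):

```python
def simulate(inputs, threshold=1.0, leak=0.95):
    """Accumulate charge until a level is reached, then emit a spike."""
    v, spikes = 0.0, []
    for t, charge in enumerate(inputs):
        v = v * leak + charge     # charge trickles in from "sensors"
        if v >= threshold:        # reaching the level triggers a burst
            spikes.append(t)
            v = 0.0               # the pin discharges and starts over
    return spikes

# Irregular input yields irregular spiking -- and silence costs nothing.
print(simulate([0.3, 0.0, 0.5, 0.4, 0.0, 0.0, 0.6, 0.7]))  # -> [3, 7]
```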

The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital is at least partially determined by the preceding length of stay.

Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.

“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
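
Stripped of the neuromorphic hardware, the underlying method looks like the following sketch (a generic textbook version with a made-up barrier problem, not Sandia’s implementation): each step depends only on the walker’s current position, which is the Markov property, and many independent walks estimate how often molecules make it past a barrier.

```python
import random

def walk(steps=200, barrier=10, p_right=0.5):
    x = 0
    for _ in range(steps):
        x += 1 if random.random() < p_right else -1  # memoryless step
        if x >= barrier:
            return True      # this walker diffused through the barrier
    return False

random.seed(1)
n = 10_000   # Monte Carlo: run many random walks, then average
crossed = sum(walk() for _ in range(n))
print(f"estimated crossing probability: {crossed / n:.3f}")
```

On the neuromorphic hardware, each walk’s outcome can then be registered as a single spike, as the release describes next.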

The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.

The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger-scale systems then combine multiple chips on a board.

“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”

Here’s a link to and a citation for the paper,

Neuromorphic scaling advantages for energy-efficient random walk computations by J. Darby Smith, Aaron J. Hill, Leah E. Reeder, Brian C. Franke, Richard B. Lehoucq, Ojas Parekh, William Severa & James B. Aimone. Nature Electronics volume 5, pages 102–112 (2022) DOI: https://doi.org/10.1038/s41928-021-00705-7 Issue Date February 2022 Published 14 February 2022

This paper is open access.

An ‘artificial brain’ and life-long learning

Talk of artificial brains (also known as brainlike computing or neuromorphic computing) usually turns to memory fairly quickly. This February 3, 2022 news item on ScienceDaily does too, although the focus is on how memory and forgetting affect the ability to learn,

When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned.

As companies use more and more data to improve how AI recognizes images, learns languages and carries out other complex tasks, a paper publishing in Science this week shows a way that computer chips could dynamically rewire themselves to take in new data like the brain does, helping AI to keep learning over time.

“The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan,” said Shriram Ramanathan, a professor in Purdue University’s [Indiana, US] School of Materials Engineering who specializes in discovering how materials could mimic the brain to improve computing.

Unlike the brain, which constantly forms new connections between neurons to enable learning, the circuits on a computer chip don’t change. A circuit that a machine has been using for years isn’t any different than the circuit that was originally built for the machine in a factory.

This is a problem for making AI more portable, such as for autonomous vehicles or robots in space that would have to make decisions on their own in isolated environments. If AI could be embedded directly into hardware rather than just running on software as AI typically does, these machines would be able to operate more efficiently.

A February 3, 2022 Purdue University news release (also on EurekAlert), which originated the news item, provides more technical detail about the work (Note: Links have been removed),

In this study, Ramanathan and his team built a new piece of hardware that can be reprogrammed on demand through electrical pulses. Ramanathan believes that this adaptability would allow the device to take on all of the functions that are necessary to build a brain-inspired computer.

“If we want to build a computer or a machine that is inspired by the brain, then correspondingly, we want to have the ability to continuously program, reprogram and change the chip,” Ramanathan said.

Toward building a brain in chip form

The hardware is a small, rectangular device made of a material called perovskite nickelate, which is very sensitive to hydrogen. Applying electrical pulses at different voltages allows the device to shuffle a concentration of hydrogen ions in a matter of nanoseconds, creating states that the researchers found could be mapped out to corresponding functions in the brain.

When the device has more hydrogen near its center, for example, it can act as a neuron, a single nerve cell. With less hydrogen at that location, the device serves as a synapse, a connection between neurons, which is what the brain uses to store memory in complex neural circuits.

Through simulations of the experimental data, the Purdue team’s collaborators at Santa Clara University and Portland State University showed that the internal physics of this device creates a dynamic structure for an artificial neural network that is able to more efficiently recognize electrocardiogram patterns and digits compared to static networks. This neural network uses “reservoir computing,” which explains how different parts of a brain communicate and transfer information.
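
“Reservoir computing” deserves a quick unpacking, since the press release’s one-liner is terse. In the standard echo-state formulation (a generic sketch under my own assumptions, not the Purdue device), a fixed random recurrent network transforms an input stream into rich dynamical states, and only a simple linear readout is ever trained:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale for stable "echo" dynamics
w_in = rng.normal(size=N)

def run(u_seq):
    """Drive the fixed, untrained reservoir and collect its states."""
    x, states = np.zeros(N), []
    for u in u_seq:
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(-1, 1, 400)
target = np.roll(u, 3)            # toy task: recall the input 3 steps ago
X = run(u)
w_out = np.linalg.lstsq(X[50:], target[50:], rcond=None)[0]  # train readout only
pred = X[350:] @ w_out
print("readout mean squared error:", round(float(np.mean((pred - target[350:]) ** 2)), 4))
```

The appeal for a device like the nickelate chip is that the reservoir itself need not be engineered precisely; its physics just has to be rich and repeatable.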

Researchers from The Pennsylvania State University also demonstrated in this study that as new problems are presented, a dynamic network can “pick and choose” which circuits are the best fit for addressing those problems.

Since the team was able to build the device using standard semiconductor-compatible fabrication techniques and operate the device at room temperature, Ramanathan believes that this technique can be readily adopted by the semiconductor industry.

“We demonstrated that this device is very robust,” said Michael Park, a Purdue Ph.D. student in materials engineering. “After programming the device over a million cycles, the reconfiguration of all functions is remarkably reproducible.”

The researchers are working to demonstrate these concepts on large-scale test chips that would be used to build a brain-inspired computer.

Experiments at Purdue were conducted at the FLEX Lab and Birck Nanotechnology Center of Purdue’s Discovery Park. The team’s collaborators at Argonne National Laboratory, the University of Illinois, Brookhaven National Laboratory and the University of Georgia conducted measurements of the device’s properties.

Here’s a link to and a citation for the paper,

Reconfigurable perovskite nickelate electronics for artificial intelligence by Hai-Tian Zhang, Tae Joon Park, A. N. M. Nafiul Islam, Dat S. J. Tran, Sukriti Manna, Qi Wang, Sandip Mondal, Haoming Yu, Suvo Banik, Shaobo Cheng, Hua Zhou, Sampath Gamage, Sayantan Mahapatra, Yimei Zhu, Yohannes Abate, Nan Jiang, Subramanian K. R. S. Sankaranarayanan, Abhronil Sengupta, Christof Teuscher, Shriram Ramanathan. Science • 3 Feb 2022 • Vol 375, Issue 6580 • pp. 533-539 • DOI: 10.1126/science.abj7943

This paper is behind a paywall.

2D materials for a computer’s artificial brain synapses

A January 28, 2022 news item on Nanowerk describes some of the latest work on hardware that could enable neuromorphic (brainlike) computing. Note: A link has been removed,

Researchers from KTH Royal Institute of Technology [Sweden] and Stanford University [US] have fabricated a material for computer components that could enable commercially viable computers that mimic the human brain (Advanced Functional Materials, “High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene”).

A January 31, 2022 KTH Royal Institute of Technology press release (also on EurekAlert but published January 28, 2022), which originated the news item, delves further into the research,

Electrochemical random access (ECRAM) memory components made with 2D titanium carbide showed outstanding potential for complementing classical transistor technology, and contributing toward commercialization of powerful computers that are modeled after the brain’s neural network. Such neuromorphic computers can be thousands of times more energy efficient than today’s computers.

These advances in computing are possible because of some fundamental differences from the classic computing architecture in use today, and the ECRAM, a component that acts as a sort of synaptic cell in an artificial neural network, says KTH Associate Professor Max Hamedi.

“Instead of transistors that are either on or off, and the need for information to be carried back and forth between the processor and memory—these new computers rely on components that can have multiple states, and perform in-memory computation,” Hamedi says.
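
That one sentence carries the key architectural idea, so here is a schematic sketch of it (my illustration, not the KTH/Stanford device): if each multi-level cell holds a weight as a conductance, an entire dot product happens where the data is stored, with currents summing on a shared wire instead of values shuttling between processor and memory.

```python
import numpy as np

LEVELS = 32                                  # multi-state, not just on/off
weights = np.array([0.8, -0.3, 0.5, 0.1])    # what we want the cells to hold
levels = np.round((weights + 1) / 2 * (LEVELS - 1))   # programmed cell states
conductance = levels / (LEVELS - 1) * 2 - 1           # what the cells realize

voltages = np.array([0.2, 1.0, 0.4, 0.0])    # input activations as voltages
current = conductance @ voltages             # Kirchhoff's law sums the currents
print(f"in-memory dot product: {current:.3f} (exact: {weights @ voltages:.3f})")
```

The metrics Salleo lists below (speed, linearity, write noise, endurance) are exactly what determine how faithfully such cells hold their programmed levels.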

The scientists at KTH and Stanford have focused on testing better materials for building an ECRAM, a component in which switching occurs by inserting ions into an oxidation channel, in a sense similar to our brain, which also works with ions. What has been needed to make these chips commercially viable are materials that overcome the slow kinetics of metal oxides and the poor temperature stability of plastics.

The key material in the ECRAM units that the researchers fabricated is referred to as MXene—a two-dimensional (2D) compound, barely a few atoms thick, consisting of titanium carbide (Ti3C2Tx). The MXene combines the high speed of organic chemistry with the integration compatibility of inorganic materials in a single device operating at the nexus of electrochemistry and electronics, Hamedi says.

Co-author Professor Alberto Salleo at Stanford University says that MXene ECRAMs combine the speed, linearity, write noise, switching energy, and endurance metrics essential for parallel acceleration of artificial neural networks.

“MXenes are an exciting materials family for this particular application as they combine the temperature stability needed for integration with conventional electronics with the availability of a vast composition space to optimize performance,” Salleo says.

While there are many other barriers to overcome before consumers can buy their own neuromorphic computers, Hamedi says the 2D ECRAMs represent a breakthrough at least in the area of neuromorphic materials, potentially leading to artificial intelligence that can adapt to confusing input and nuance, the way the brain does, with thousands of times smaller energy consumption. This can also enable portable devices capable of much heavier computing tasks without having to rely on the cloud.

Here’s a link to and a citation for the paper,

High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene by Armantas Melianas, Min-A Kang, Armin VahidMohammadi, Tyler James Quill, Weiqian Tian, Yury Gogotsi, Alberto Salleo, Mahiar Max Hamedi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202109970 First published: 21 November 2021

This paper is open access.

Can you make my nose more like a camel’s?

Camel Face Close Up [downloaded from https://www.asergeev.com/php/searchph/links.php?keywords=Camel_close_up]

I love that image which I found on Alexey Sergeev’s Camel Close Up webpage on his eponymous website. It turns out the photographer is in the Department of Mathematics at Texas A&M University. Thank you Mr. Sergeev.

A January 19, 2022 news item on Nanowerk describes research inspired by a camel’s nose, Note: A link has been removed,

Camels have a renowned ability to survive on little water. They are also adept at finding something to drink in the vast desert, using noses that are exquisite moisture detectors.

In a new study in ACS [American Chemical Society] Nano (“A Camel Nose-Inspired Highly Durable Neuromorphic Humidity Sensor with Water Source Locating Capability”), researchers describe a humidity sensor inspired by the structure and properties of camels’ noses. In experiments, they found this device could reliably detect variations in humidity in settings that included industrial exhaust and the air surrounding human skin.

A January 19, 2022 ACS news release (also on EurekAlert), which originated the news item, describes the work in more detail,

Humans sometimes need to determine the presence of moisture in the air, but people aren’t quite as skilled as camels at sensing water with their noses. Instead, people must use devices to locate water in arid environments, or to identify leaks or analyze exhaust in industrial facilities. However, currently available sensors all have significant drawbacks. Some devices may be durable, for example, but have a low sensitivity to the presence of water. Meanwhile, sunlight can interfere with some highly sensitive detectors, making them difficult to use outdoors, for example. To devise a durable, intelligent sensor that can detect even low levels of airborne water molecules, Weiguo Huang, Jian Song, and their colleagues looked to camels’ noses. 

Narrow, scroll-like passages within a camel’s nose create a large surface area, which is lined with water-absorbing mucus. To mimic the high-surface-area structure within the nose, the team created a porous polymer network. On it, they placed moisture-attracting molecules called zwitterions to simulate the property of mucus to change capacitance as humidity varies. In experiments, the device was durable and could monitor fluctuations in humidity in hot industrial exhaust, find the location of a water source and sense moisture emanating from the human body. Not only did the sensor respond to changes in a person’s skin perspiration as they exercised, it detected the presence of a human finger and could even follow its path in a V or L shape. This sensitivity suggests that the device could become the basis for a touchless interface through which someone could communicate with a computer, according to the researchers. What’s more, the sensor’s electrical response to moisture can be tuned or adjusted, much like the signals sent out by human neurons — potentially allowing it to learn via artificial intelligence, they say. 

The authors acknowledge funding from the Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, the Natural Science Foundation of Fujian Province, and the National Natural Science Foundation of China.

Here’s a link to and a citation for the paper,

A Camel Nose-Inspired Highly Durable Neuromorphic Humidity Sensor with Water Source Locating Capability by Caicong Li, Jie Liu, Hailong Peng, Yuan Sui, Jian Song, Yang Liu, Wei Huang, Xiaowei Chen, Jinghui Shen, Yao Ling, Chongyu Huang, Youwei Hong, and Weiguo Huang. ACS Nano 2022, 16, 1, 1511–1522 DOI: https://doi.org/10.1021/acsnano.1c10004 Publication Date: December 15, 2021 Copyright © 2021 American Chemical Society

This paper is behind a paywall.

Organic neuromorphic electronics

A December 13, 2021 news item on ScienceDaily describes some research from Germany’s Max Planck Institute for Polymer Research,

The human brain works differently from a computer – while the brain works with biological cells and electrical impulses, a computer uses silicon-based transistors. Scientists have equipped a toy robot with a smart and adaptive electrical circuit made of soft organic materials, similar to biological matter. With this bio-inspired approach, they were able to teach the robot to navigate independently through a maze using visual signs for guidance.

A December 13, 2021 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, fills in a few details,

The processor is the brain of a computer – an often-quoted phrase. But processors work fundamentally differently than the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived – for example, via the eyes, ears or sense of touch – is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.

Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven [Eindhoven University of Technology; Netherlands], Stanford [University; California, US], Brescia [University; Italy], Oxford [UK] and KAUST [King Abdullah University of Science and Technology, Saudi Arabia].

“We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.

To achieve the navigation of the robot inside the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path of the maze towards the exit is indicated visually at each maze intersection. Initially, the robot often misinterprets the visual signs, thus it makes the wrong “turning” decisions at the intersections and loses the way out. When the robot takes these decisions and follows wrong dead-end paths, it is discouraged from taking these wrong decisions by receiving corrective stimuli. The corrective stimuli, for example when the robot hits a wall, are directly applied at the organic circuit via electrical signals induced by a touch sensor attached to the robot. With each subsequent execution of the experiment, the robot gradually learns to make the right “turning” decisions at the intersections, i.e. to avoid receiving corrective stimuli, and after a few trials it finds the way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
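
In software terms, the learning loop resembles the following sketch. To be clear, this is a loose simplification with an invented maze; in the actual work the adaptation happens physically in the organic circuit, not in code.

```python
# Hypothetical 5-intersection maze: the turn (0=left, 1=right) that leads out
CORRECT = [1, 0, 1, 1, 0]
weights = [[0.5, 0.5] for _ in CORRECT]   # the circuit's preference per intersection

def run_trial():
    mistakes = 0
    for i, right_turn in enumerate(CORRECT):
        choice = max((0, 1), key=lambda t: weights[i][t])
        if choice != right_turn:          # dead end: the robot hits a wall
            weights[i][choice] -= 0.2     # corrective stimulus weakens that choice
            mistakes += 1
    return mistakes

for trial in range(4):
    print(f"trial {trial}: {run_trial()} wrong turns")
# wrong turns fall to zero as corrective stimuli reshape the decisions
```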

“We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge.” Gkoupidenis says.

Here’s a link to and a citation for the paper,

Organic neuromorphic electronics for sensorimotor integration and learning in robotics by Imke Krauhausen, Dimitrios A. Koutsouras, Armantas Melianas, Scott T. Keene, Katharina Lieberth, Hadrien Ledanseur, Rajendar Sheelamanthula, Alexander Giovannitti, Fabrizio Torricelli, Iain McCulloch, Paul W. M. Blom, Alberto Salleo, Yoeri van de Burgt and Paschalis Gkoupidenis. Science Advances • 10 Dec 2021 • Vol 7, Issue 50 • DOI: 10.1126/sciadv.abl5068

This paper is open access.

Neuromorphic (brainlike) computing inspired by sea slugs

The sea slug has taught neuroscientists the intelligence features that any creature in the animal kingdom needs to survive. Now, the sea slug is teaching artificial intelligence how to use those strategies. Pictured: Aplysia californica. (Image by NOAA Monterey Bay National Marine Sanctuary/Chad King.)

I don’t think I’ve ever seen a picture of a sea slug before. Its appearance reminds me of its terrestrial cousin.

As for some of the latest news on brainlike computing, a December 7, 2021 news item on Nanowerk makes an announcement from the Argonne National Laboratory (a US Department of Energy laboratory; Note: Links have been removed),

A team of scientists has discovered a new material that points the way toward more efficient artificial intelligence hardware for everything from self-driving cars to surgical robots.

For artificial intelligence (AI) to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.

A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.

The study, published in the Proceedings of the National Academy of Sciences [PNAS] (“Neuromorphic learning with Mott insulator NiO”), was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The team used the resources of the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne.

A December 6, 2021 Argonne National Laboratory news release (also on EurekAlert) by Kayla Wiles and Andre Salles, which originated the news item, provides more detail,

“Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism’s survival,” said Shriram Ramanathan, a Purdue professor of Materials Engineering. ​“We want to take advantage of that mature intelligence in animals to accelerate the development of AI.”

Two main signs of intelligence that neuroscientists have learned from sea slugs are habituation and sensitization. Habituation is getting used to a stimulus over time, such as tuning out noises when driving the same route to work every day. Sensitization is the opposite — it’s reacting strongly to a new stimulus, like avoiding bad food from a restaurant.

AI has a really hard time learning and storing new information without overwriting information it has already learned and stored, a problem that researchers studying brain-inspired computing call the ​“stability-plasticity dilemma.” Habituation would allow AI to ​“forget” unneeded information (achieving more stability) while sensitization could help with retaining new and important information (enabling plasticity).

In this study, the researchers found a way to demonstrate both habituation and sensitization in nickel oxide, a quantum material. Quantum materials are engineered to take advantage of features available only at nature’s smallest scales, and useful for information processing. If a quantum material could reliably mimic these forms of learning, then it may be possible to build AI directly into hardware. And if AI could operate both through hardware and software, it might be able to perform more complex tasks using less energy.

“We basically emulated experiments done on sea slugs in quantum materials toward understanding how these materials can be of interest for AI,” Ramanathan said.

Neuroscience studies have shown that the sea slug demonstrates habituation when it stops withdrawing its gill as much in response to tapping. But an electric shock to its tail causes its gill to withdraw much more dramatically, showing sensitization.

For nickel oxide, the equivalent of a ​“gill withdrawal” is an increased change in electrical resistance. The researchers found that repeatedly exposing the material to hydrogen gas causes nickel oxide’s change in electrical resistance to decrease over time, but introducing a new stimulus like ozone greatly increases the change in electrical resistance.
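
A toy model of those two behaviours (parameters invented for illustration; the paper measures them as resistance changes in nickel oxide): repetition of the same stimulus shrinks the response, while a novel stimulus provokes an outsized one.

```python
def respond(history, stimulus, base=1.0, fatigue=0.6, boost=1.5):
    seen = history.count(stimulus)
    novel = seen == 0 and len(history) > 0
    history.append(stimulus)
    if novel:
        return base * boost           # sensitization: strong reply to the new
    return base * fatigue ** seen     # habituation: familiar input fades

history = []
for s in ["H2", "H2", "H2", "H2", "O3"]:
    print(s, round(respond(history, s), 3))
# H2: 1.0 -> 0.6 -> 0.36 -> 0.216 (habituation), then O3: 1.5 (sensitization)
```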

Ramanathan and his colleagues used two experimental stations at the APS to test this theory, using X-ray absorption spectroscopy. A sample of nickel oxide was exposed to hydrogen and oxygen, and the ultrabright X-rays of the APS were used to see changes in the material at the atomic level over time.

“Nickel oxide is a relatively simple material,” said Argonne physicist Hua Zhou, a co-author on the paper who worked with the team at beamline 33-ID. ​“The goal was to use something easy to manufacture, and see if it would mimic this behavior. We looked at whether the material gained or lost a single electron after exposure to the gas.”

The research team also conducted scans at beamline 29-ID, which uses softer X-rays to probe different energy ranges. While the harder X-rays of 33-ID are more sensitive to the ​“core” electrons, those closer to the nucleus of the nickel oxide’s atoms, the softer X-rays can more readily observe the electrons on the outer shell. These are the electrons that define whether a material is conductive or resistive to electricity.

“We’re very sensitive to the change of resistivity in these samples,” said Argonne physicist Fanny Rodolakis, a co-author on the paper who led the work at beamline 29-ID. ​“We can directly probe how the electronic states of oxygen and nickel evolve under different treatments.”

Physicist Zhan Zhang and postdoctoral researcher Hui Cao, both of Argonne, contributed to the work, and are listed as co-authors on the paper. Zhang said the APS is well suited for research like this, due to its bright beam that can be tuned over different energy ranges.

For practical use of quantum materials as AI hardware, researchers will need to figure out how to apply habituation and sensitization in large-scale systems. They also would have to determine how a material could respond to stimuli while integrated into a computer chip.

This study is a starting place for guiding those next steps, the researchers said. Meanwhile, the APS is undergoing a massive upgrade that will not only increase the brightness of its beams by up to 500 times, but will allow for those beams to be focused much smaller than they are today. And this, Zhou said, will prove useful once this technology does find its way into electronic devices.

“If we want to test the properties of microelectronics,” he said, ​“the smaller beam that the upgraded APS will give us will be essential.”

In addition to the experiments performed at Purdue and Argonne, a team at Rutgers University performed detailed theory calculations to understand what was happening within nickel oxide at a microscopic level to mimic the sea slug’s intelligence features. The University of Georgia measured conductivity to further analyze the material’s behavior.

A version of this story was originally published by Purdue University

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

You can find the September 24, 2021 Purdue University story, Taking lessons from a sea slug, study points to better hardware for artificial intelligence here.

Here’s a link to and a citation for the paper,

Neuromorphic learning with Mott insulator NiO by Zhen Zhang, Sandip Mondal, Subhasish Mandal, Jason M. Allred, Neda Alsadat Aghamiri, Alireza Fali, Zhan Zhang, Hua Zhou, Hui Cao, Fanny Rodolakis, Jessica L. McChesney, Qi Wang, Yifei Sun, Yohannes Abate, Kaushik Roy, Karin M. Rabe, and Shriram Ramanathan. PNAS September 28, 2021 118 (39) e2017239118 DOI: https://doi.org/10.1073/pnas.2017239118

This paper is behind a paywall.