Tag Archives: brainlike computing

Neuromorphic (brainlike) computing inspired by sea slugs

The sea slug has taught neuroscientists the intelligence features that any creature in the animal kingdom needs to survive. Now, the sea slug is teaching artificial intelligence how to use those strategies. Pictured: Aplysia californica. (Image by NOAA Monterey Bay National Marine Sanctuary/Chad King.)

I don’t think I’ve ever seen a picture of a sea slug before. Its appearance reminds me of its terrestrial cousin.

As for some of the latest news on brainlike computing, a December 7, 2021 news item on Nanowerk makes an announcement from the Argonne National Laboratory (a US Department of Energy laboratory; Note: Links have been removed),

A team of scientists has discovered a new material that points the way toward more efficient artificial intelligence hardware for everything from self-driving cars to surgical robots.

For artificial intelligence (AI) to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.

A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.

The study, published in the Proceedings of the National Academy of Sciences [PNAS] (“Neuromorphic learning with Mott insulator NiO”), was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The team used the resources of the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne.

A December 6, 2021 Argonne National Laboratory news release (also on EurekAlert) by Kayla Wiles and Andre Salles, which originated the news item, provides more detail,

“Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism’s survival,” said Shriram Ramanathan, a Purdue professor of Materials Engineering. ​“We want to take advantage of that mature intelligence in animals to accelerate the development of AI.”

Two main signs of intelligence that neuroscientists have learned from sea slugs are habituation and sensitization. Habituation is getting used to a stimulus over time, such as tuning out noises when driving the same route to work every day. Sensitization is the opposite — it’s reacting strongly to a new stimulus, like avoiding bad food from a restaurant.

AI has a really hard time learning and storing new information without overwriting information it has already learned and stored, a problem that researchers studying brain-inspired computing call the ​“stability-plasticity dilemma.” Habituation would allow AI to ​“forget” unneeded information (achieving more stability) while sensitization could help with retaining new and important information (enabling plasticity).

In this study, the researchers found a way to demonstrate both habituation and sensitization in nickel oxide, a quantum material. Quantum materials are engineered to take advantage of features available only at nature's smallest scales, features that make them useful for information processing. If a quantum material could reliably mimic these forms of learning, then it may be possible to build AI directly into hardware. And if AI could operate both through hardware and software, it might be able to perform more complex tasks using less energy.

“We basically emulated experiments done on sea slugs in quantum materials toward understanding how these materials can be of interest for AI,” Ramanathan said.

Neuroscience studies have shown that the sea slug demonstrates habituation when it stops withdrawing its gill as much in response to tapping. But an electric shock to its tail causes its gill to withdraw much more dramatically, showing sensitization.

For nickel oxide, the equivalent of a ​“gill withdrawal” is an increased change in electrical resistance. The researchers found that repeatedly exposing the material to hydrogen gas causes nickel oxide’s change in electrical resistance to decrease over time, but introducing a new stimulus like ozone greatly increases the change in electrical resistance.
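The habituation and sensitization dynamics described above can be caricatured in a few lines of code. This is only a toy model with invented decay and boost factors, not the actual nickel-oxide response:

```python
# Toy model of habituation and sensitization. The decay and boost
# factors are illustrative assumptions, not measured material parameters.

def respond(stimuli, decay=0.7, boost=1.5, cap=2.0):
    """Return a response magnitude for each stimulus in a sequence."""
    responses = []
    strength = 1.0
    last = None
    for s in stimuli:
        if s == last:
            strength *= decay                      # habituation: repeated stimulus, weaker response
        else:
            strength = min(cap, strength * boost)  # sensitization: novel stimulus, stronger response
        responses.append(round(strength, 3))
        last = s
    return responses

# Repeated hydrogen exposure damps the response; a switch to ozone revives it.
print(respond(["H2", "H2", "H2", "H2", "O3"]))
```

The analogy to the experiment is loose: repeated identical inputs stand in for the hydrogen-gas cycling, and a novel input stands in for the ozone exposure.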

Ramanathan and his colleagues used two experimental stations at the APS to test this theory, using X-ray absorption spectroscopy. A sample of nickel oxide was exposed to hydrogen and oxygen, and the ultrabright X-rays of the APS were used to see changes in the material at the atomic level over time.

“Nickel oxide is a relatively simple material,” said Argonne physicist Hua Zhou, a co-author on the paper who worked with the team at beamline 33-ID. ​“The goal was to use something easy to manufacture, and see if it would mimic this behavior. We looked at whether the material gained or lost a single electron after exposure to the gas.”

The research team also conducted scans at beamline 29-ID, which uses softer X-rays to probe different energy ranges. While the harder X-rays of 33-ID are more sensitive to the ​“core” electrons, those closer to the nucleus of the nickel oxide’s atoms, the softer X-rays can more readily observe the electrons on the outer shell. These are the electrons that define whether a material is conductive or resistive to electricity.

“We’re very sensitive to the change of resistivity in these samples,” said Argonne physicist Fanny Rodolakis, a co-author on the paper who led the work at beamline 29-ID. ​“We can directly probe how the electronic states of oxygen and nickel evolve under different treatments.”

Physicist Zhan Zhang and postdoctoral researcher Hui Cao, both of Argonne, contributed to the work, and are listed as co-authors on the paper. Zhang said the APS is well suited for research like this, due to its bright beam that can be tuned over different energy ranges.

For practical use of quantum materials as AI hardware, researchers will need to figure out how to apply habituation and sensitization in large-scale systems. They also would have to determine how a material could respond to stimuli while integrated into a computer chip.

This study is a starting place for guiding those next steps, the researchers said. Meanwhile, the APS is undergoing a massive upgrade that will not only increase the brightness of its beams by up to 500 times, but will allow for those beams to be focused much smaller than they are today. And this, Zhou said, will prove useful once this technology does find its way into electronic devices.

“If we want to test the properties of microelectronics,” he said, ​“the smaller beam that the upgraded APS will give us will be essential.”

In addition to the experiments performed at Purdue and Argonne, a team at Rutgers University performed detailed theory calculations to understand what was happening within nickel oxide at a microscopic level to mimic the sea slug’s intelligence features. Researchers at the University of Georgia measured conductivity to further analyze the material’s behavior.

A version of this story was originally published by Purdue University.

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

You can find the September 24, 2021 Purdue University story, “Taking lessons from a sea slug, study points to better hardware for artificial intelligence,” here.

Here’s a link to and a citation for the paper,

Neuromorphic learning with Mott insulator NiO by Zhen Zhang, Sandip Mondal, Subhasish Mandal, Jason M. Allred, Neda Alsadat Aghamiri, Alireza Fali, Zhan Zhang, Hua Zhou, Hui Cao, Fanny Rodolakis, Jessica L. McChesney, Qi Wang, Yifei Sun, Yohannes Abate, Kaushik Roy, Karin M. Rabe, and Shriram Ramanathan. PNAS September 28, 2021 118 (39) e2017239118 DOI: https://doi.org/10.1073/pnas.2017239118

This paper is behind a paywall.

Memristive spintronic neurons

A December 6, 2021 news item on Nanowerk on memristive spintronic neurons (Note: A link has been removed),

Researchers at Tohoku University and the University of Gothenburg have established a new spintronic technology for brain-inspired computing.

Their achievement was published in the journal Nature Materials (“Memristive control of mutual SHNO synchronization for neuromorphic computing”).

Sophisticated cognitive tasks, such as image and speech recognition, have seen recent breakthroughs thanks to deep learning. Even so, the human brain still executes these tasks without exerting much energy and with greater efficiency than any computer. The development of energy-efficient artificial neurons capable of emulating brain-inspired processes has therefore been a major research goal for decades.

A November 29, 2021 Tohoku University press release (also on EurekAlert but published November 30, 2021), which originated the news item, provides more technical detail,

Researchers demonstrated the first integration of a cognitive computing nano-element – the memristor – into another – a spintronic oscillator. Arrays of these memristor-controlled oscillators combine the non-volatile local storage of the memristor function with the microwave frequency computation of the nano-oscillator networks and can closely imitate the non-linear oscillatory neural networks of the human brain.

“So far, artificial neurons and synapses have been developed separately in many fields; this work marks an important milestone: two functional elements have been combined into one,” said professor Shunsuke Fukami, who led the project on the Tohoku University side. Dr. Mohammad Zahedinejad of the University of Gothenburg and first author of the study adds, “Using the memristor-controlled spintronic oscillator arrays, we could tune the synaptic interactions between adjacent neurons and program them into mutually different and partially synchronized states.”

To put into practice their discovery, the researchers examined the operation of a test device comprising one oscillator and one memristor. The constricted region of W/CoFeB stack served as an oscillator, i.e., the neuron, whereas the MgO/AlOx/SiNx stack acted as a memristor, i.e., the synapse.

Resistance of the memristor changed with the voltage hysteresis applied to the top Ti/Cu electrode. Upon voltage application to the electrode, an electric field was applied at the high-resistance state, compared to electric current flows for the low-resistance state. The effects of electric field and current on the oscillator differed from each other, offering various controls of oscillation and synchronization properties.
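The field-versus-current control described in this paragraph can be sketched as a simple hysteretic switch. The set and reset voltages below are invented for illustration; the real device's thresholds depend on the MgO/AlOx/SiNx stack:

```python
# Minimal sketch of a hysteretic memristor gate. In the high-resistance
# state a voltage on the electrode acts as an electric field; in the
# low-resistance state it drives a current. Set/reset thresholds are
# illustrative assumptions, not device parameters.

class MemristorGate:
    def __init__(self, set_v=1.0, reset_v=-1.0):
        self.state = "high"                   # high-resistance state (non-volatile)
        self.set_v, self.reset_v = set_v, reset_v

    def apply(self, voltage):
        # Hysteresis: a large positive voltage sets the low-resistance state,
        # a large negative voltage resets it; in between, the state persists.
        if voltage >= self.set_v:
            self.state = "low"
        elif voltage <= self.reset_v:
            self.state = "high"
        return "field" if self.state == "high" else "current"

gate = MemristorGate()
print(gate.apply(0.5))    # high-resistance state: electrode applies a field
print(gate.apply(1.2))    # sets the low-resistance state: current flows
print(gate.apply(0.0))    # state persists (the non-volatile memory function)
```

The persistence of the state at zero voltage is what gives the memristor its role as non-volatile local storage for the oscillator.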

Professor Johan Åkerman of the University of Gothenburg and leader of the study expressed his hopes for the future and the significance of the finding. “We are particularly interested in emerging quantum-inspired computing schemes, such as Ising Machines. The results also highlight the productive collaboration that we have established in neuromorphic spintronics between the University of Gothenburg and Tohoku University, something that is also part of the Sweden-Japan collaborative network MIRAI 2.0.”

Here’s a link to and a citation for the paper,

Memristive control of mutual spin Hall nano-oscillator synchronization for neuromorphic computing by Mohammad Zahedinejad, Himanshu Fulara, Roman Khymyn, Afshin Houshang, Mykola Dvornik, Shunsuke Fukami, Shun Kanai, Hideo Ohno & Johan Åkerman. Nature Materials (2021) DOI: https://doi.org/10.1038/s41563-021-01153-6 Published 29 November 2021

This paper is behind a paywall.

Device with brainlike plasticity

A September 1, 2021 news item on ScienceDaily announces a new type of memristor from Texas A&M University (Texas A&M or TAMU) and the National University of Singapore (NUS),

In a discovery published in the journal Nature, an international team of researchers has described a novel molecular device with exceptional computing prowess.

Reminiscent of the plasticity of connections in the human brain, the device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like nerve cells can store memories, the same device can also retain information for future retrieval and processing.

Two of the universities involved in the research have issued news/press releases. I’m going to start with the September 1, 2021 Texas A&M University news release (also on EurekAlert), which originated the news item on ScienceDaily,

“The brain has the remarkable ability to change its wiring around by making and breaking connections between nerve cells. Achieving something comparable in a physical system has been extremely challenging,” said Dr. R. Stanley Williams [emphasis mine], professor in the Department of Electrical and Computer Engineering at Texas A&M University. “We have now created a molecular device with dramatic reconfigurability, which is achieved not by changing physical connections like in the brain, but by reprogramming its logic.”

Dr. T. Venkatesan, director of the Center for Quantum Research and Technology (CQRT) at the University of Oklahoma, Scientific Affiliate at the National Institute of Standards and Technology, Gaithersburg, and adjunct professor of electrical and computer engineering at the National University of Singapore, added that their molecular device might in the future help design next-generation processing chips with enhanced computational power and speed while consuming significantly less energy.

Whether it is the familiar laptop or a sophisticated supercomputer, digital technologies face a common nemesis, the von Neumann bottleneck. This delay in computational processing is a consequence of current computer architectures, wherein the memory, containing data and programs, is physically separated from the processor. As a result, computers spend a significant amount of time shuttling information between the two systems, causing the bottleneck. Also, despite extremely fast processor speeds, these units can be idling for extended amounts of time during periods of information exchange.

As an alternative to conventional electronic parts used for designing memory units and processors, devices called memristors offer a way to circumvent the von Neumann bottleneck. Memristors, such as those made of niobium dioxide and vanadium dioxide, transition from being an insulator to a conductor at a set temperature. This property gives these types of memristors the ability to perform computations and store data.

However, despite their many advantages, these metal oxide memristors are made of rare-earth elements and can operate only in restrictive temperature regimes. Hence, there has been an ongoing search for promising organic molecules that can perform a comparable memristive function, said Williams.

Dr. Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science, designed the material used in this work. The compound has a central metal atom (iron) bound to three phenyl azo pyridine organic molecules called ligands.

“This behaves like an electron sponge that can absorb as many as six electrons reversibly, resulting in seven different redox states,” said Sreebrata. “The interconnectivity between these states is the key behind the reconfigurability shown in this work.”

Dr. Sreetosh Goswami, a researcher at the National University of Singapore, devised this project by creating a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a layer of gold on top and gold-infused nanodisc and indium tin oxide at the bottom.

On applying a negative voltage on the device, Sreetosh witnessed a current-voltage profile that was nothing like anyone had seen before. Unlike metal-oxide memristors that can switch from metal to insulator at only one fixed voltage, the organic molecular devices could switch back and forth from insulator to conductor at several discrete sequential voltages.

“So, if you think of the device as an on-off switch, as we were sweeping the voltage more negative, the device first switched from on to off, then off to on, then on to off and then back to on. I’ll say that we were just blown out of our seat,” said Venkatesan. “We had to convince ourselves that what we were seeing was real.”

Sreetosh and Sreebrata investigated the molecular mechanisms underlying the curious switching behavior using an imaging technique called Raman spectroscopy. In particular, they looked for spectral signatures in the vibrational motion of the organic molecule that could explain the multiple transitions. Their investigation revealed that sweeping the voltage negative triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events that caused the molecule to transition between off state and on states.

Next, to describe the extremely complex current-voltage profile of the molecular device mathematically, Williams deviated from the conventional approach of basic physics-based equations. Instead, he described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, a commonplace line of code in several computer programs, particularly digital games.

“Video games have a structure where you have a character that does something, and then something occurs as a result. And so, if you write that out in a computer algorithm, they are if-then-else statements,” said Williams. “Here, the molecule is switching from on to off as a consequence of applied voltage, and that’s when I had the eureka moment to use decision trees to describe these devices, and it worked very well.” 
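The decision-tree idea reads like ordinary branching code. The voltage thresholds in this sketch are hypothetical, chosen only to reproduce the multiple sequential switchings described in the text; in the paper, trees like this are fitted to the measured current-voltage data of the molecular device:

```python
# Sketch of an if-then-else decision tree mapping applied voltage to the
# device's on/off state. Threshold values are invented for illustration.

def device_state(voltage):
    """Return 'on' or 'off' for a given applied voltage (toy thresholds, in volts)."""
    if voltage > -1.0:
        return "on"
    elif voltage > -2.0:
        return "off"      # first switching event as the sweep goes negative
    elif voltage > -3.0:
        return "on"       # the device switches back on
    else:
        return "off"      # and off again at more negative voltages

# Sweeping the voltage negative walks through on -> off -> on -> off:
print([device_state(v) for v in (-0.5, -1.5, -2.5, -3.5)])
```

Unlike a metal-oxide memristor with a single threshold, the nested branches capture several distinct switching voltages in one compact rule set.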

But the researchers went a step further to exploit these molecular devices to run programs for different real-world computational tasks. Sreetosh showed experimentally that their devices could perform fairly complex computations in a single time step and then be reprogrammed to perform another task in the next instant.

“It was quite extraordinary; our device was doing something like what the brain does, but in a very different way,” said Sreetosh. “When you’re learning something new or when you’re deciding, the brain can actually reconfigure and change physical wiring around. Similarly, we can logically reprogram or reconfigure our devices by giving them a different voltage pulse than they’ve seen before.”

Venkatesan noted that it would take thousands of transistors to perform the same computational functions as one of their molecular devices with its different decision trees. Hence, he said their technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited.

Other contributors to the research include Dr. Abhijeet Patra and Dr. Ariando from the National University of Singapore; Dr. Rajib Pramanick and Dr. Santi Prasad Rath from the Indian Association for the Cultivation of Science; Dr. Martin Foltin from Hewlett Packard Enterprise, Colorado; and Dr. Damien Thompson from the University of Limerick, Ireland.

Venkatesan said that this research is indicative of the future discoveries from this collaborative team, which will include the center of nanoscience and engineering at the Indian Institute of Science and the Microsystems and Nanotechnology Division at NIST.

I’ve highlighted R. Stanley Williams because he and his team at HP [Hewlett Packard] Labs helped to kick off current memristor research in 2008 with the publication of two papers as per my April 5, 2010 posting,

In 2008, two memristor papers were published in Nature and Nature Nanotechnology, respectively. In the first (Nature, May 2008 [article still behind a paywall]), a team at HP Labs claimed they had proved the existence of memristors (a fourth member of electrical engineering’s ‘Holy Trinity of the capacitor, resistor, and inductor’). In the second paper (Nature Nanotechnology, July 2008 [article still behind a paywall]) the team reported that they had achieved engineering control.

The novel memory device is based on a molecular system that can transition between on and off states at several discrete sequential voltages. (Courtesy: National University of Singapore)

There is more technical detail in the September 2, 2021 NUS press release (also on EurekAlert),

Many electronic devices today are dependent on semiconductor logic circuits based on switches hard-wired to perform predefined logic functions. Physicists from the National University of Singapore (NUS), together with an international team of researchers, have developed a novel molecular memristor, or an electronic memory device, that has exceptional memory reconfigurability. 

Unlike hard-wired standard circuits, the molecular device can be reconfigured using voltage to embed different computational tasks. The energy-efficient new technology, which is capable of enhanced computational power and speed, can potentially be used in edge computing, as well as in handheld devices and applications with limited power resources.

“This work is a significant breakthrough in our quest to design low-energy computing. The idea of using multiple switching in a single element draws inspiration from how the brain works and fundamentally reimagines the design strategy of a logic circuit,” said Associate Professor Ariando from the NUS Department of Physics who led the research.

The research was first published in the journal Nature on 1 September 2021, and carried out in collaboration with the Indian Association for the Cultivation of Science, Hewlett Packard Enterprise, the University of Limerick, the University of Oklahoma, and Texas A&M University.

Brain-inspired technology

“This new discovery can contribute to developments in edge computing as a sophisticated in-memory computing approach to overcome the von Neumann bottleneck, a delay in computational processing seen in many digital technologies due to the physical separation of memory storage from a device’s processor,” said Assoc Prof Ariando. The new molecular device also has the potential to contribute to designing next generation processing chips with enhanced computational power and speed.

“Similar to the flexibility and adaptability of connections in the human brain, our memory device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like how nerve cells can store memories, the same device can also retain information for future retrieval and processing,” said first author Dr Sreetosh Goswami, Research Fellow from the Department of Physics at NUS.

Research team member Dr Sreebrata Goswami, who was a Senior Research Scientist at NUS and previously Professor at the Indian Association for the Cultivation of Science, conceptualised and designed a molecular system belonging to the chemical family of phenyl azo pyridines that have a central metal atom bound to organic molecules called ligands. “These molecules are like electron sponges that can offer as many as six electron transfers resulting in five different molecular states. The interconnectivity between these states is the key behind the device’s reconfigurability,” explained Dr Sreebrata Goswami.

Dr Sreetosh Goswami created a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a top layer of gold, and a bottom layer of gold-infused nanodisc and indium tin oxide. He observed an unprecedented current-voltage profile upon applying a negative voltage to the device. Unlike conventional metal-oxide memristors that are switched on and off at only one fixed voltage, these organic molecular devices could switch between on-off states at several discrete sequential voltages.

Using an imaging technique called Raman spectroscopy, spectral signatures in the vibrational motion of the organic molecule were observed to explain the multiple transitions. Dr Sreebrata Goswami explained, “Sweeping the negative voltage triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events, which caused the molecule to transition between off and on states.”

Rather than taking the conventional approach of basic physics-based equations, the researchers described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, a construct common in many computer programs, particularly digital games.

New possibilities for energy-efficient devices

Building on their research, the team used the molecular memory devices to run programs for different real-world computational tasks. As a proof of concept, the team demonstrated that their technology could perform complex computations in a single step, and could be reprogrammed to perform another task in the next instant. An individual molecular memory device could perform the same computational functions as thousands of transistors, making the technology a more powerful and energy-efficient memory option.

“The technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited,” added Assoc Prof Ariando.

The team is in the midst of building new electronic devices incorporating their innovation, and is working with collaborators to conduct simulation and benchmarking against existing technologies.

Other contributors to the research paper include Abhijeet Patra and Santi Prasad Rath from NUS, Rajib Pramanick from the Indian Association for the Cultivation of Science, Martin Foltin from Hewlett Packard Enterprise, Damien Thompson from the University of Limerick, T. Venkatesan from the University of Oklahoma, and R. Stanley Williams from Texas A&M University.

Here’s a link to and a citation for the paper,

Decision trees within a molecular memristor by Sreetosh Goswami, Rajib Pramanick, Abhijeet Patra, Santi Prasad Rath, Martin Foltin, A. Ariando, Damien Thompson, T. Venkatesan, Sreebrata Goswami & R. Stanley Williams. Nature volume 597, pages 51–56 (2021) DOI: https://doi.org/10.1038/s41586-021-03748-0 Published 01 September 2021 Issue Date 02 September 2021

This paper is behind a paywall.

Pandemic science breakthroughs: combining superconducting materials with specialized oxides to mimic brain function

This breakthrough in neuromorphic (brainlike) computing is being attributed to the pandemic (COVID-19) according to a September 3, 2021 news item on phys.org,

Isaac Newton’s groundbreaking scientific productivity while isolated from the spread of bubonic plague is legendary. University of California San Diego physicists can now claim a stake in the annals of pandemic-driven science.

A team of UC San Diego [University of California San Diego] researchers and colleagues at Purdue University have now simulated the foundation of new types of artificial intelligence computing devices that mimic brain functions, an achievement that resulted from the COVID-19 pandemic lockdown. By combining new superconducting materials with specialized oxides, the researchers successfully demonstrated the backbone of networks of circuits and devices that mirror the connectivity of neurons and synapses in biologically based neural networks.

A September 3, 2021 UC San Diego news release by Mario Aguilera, which originated the news item, delves further into the topic of neuromorphic computing,

As bandwidth demands on today’s computers and other devices reach their technological limit, scientists are working towards a future in which new materials can be orchestrated to mimic the speed and precision of animal-like nervous systems. Neuromorphic computing based on quantum materials, which display quantum-mechanics-based properties, allows scientists to move beyond the limits of traditional semiconductor materials. This advanced versatility opens the door to new-age devices that are far more flexible with lower energy demands than today’s devices. Some of these efforts are being led by Department of Physics Assistant Professor Alex Frañó and other researchers in UC San Diego’s Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C), a Department of Energy-supported Energy Frontier Research Center.

“In the past 50 years we’ve seen incredible technological achievements that resulted in computers that were progressively smaller and faster—but even these devices have limits for data storage and energy consumption,” said Frañó, who served as one of the PNAS paper’s authors, along with former UC San Diego chancellor, UC president and physicist Robert Dynes. “Neuromorphic computing is inspired by the emergent processes of the millions of neurons, axons and dendrites that are connected all over our body in an extremely complex nervous system.”

As experimental physicists, Frañó and Dynes are typically busy in their laboratories using state-of-the-art instruments to explore new materials. But with the onset of the pandemic, Frañó and his colleagues were forced into isolation with concerns about how they would keep their research moving forward. They eventually came to the realization that they could advance their science from the perspective of simulations of quantum materials.

“This is a pandemic paper,” said Frañó. “My co-authors and I decided to study this issue from a more theoretical perspective so we sat down and started having weekly (Zoom-based) meetings. Eventually the idea developed and took off.”

The researchers’ innovation was based on joining two types of quantum substances—superconducting materials based on copper oxide and metal insulator transition materials that are based on nickel oxide. They created basic “loop devices” that could be precisely controlled at the nano-scale with helium and hydrogen, reflecting the way neurons and synapses are connected. Adding more of these devices that link and exchange information with each other, the simulations showed that eventually they would allow the creation of an array of networked devices that display emergent properties like an animal’s brain.

Like the brain, neuromorphic devices are being designed to enhance connections that are more important than others, similar to the way synapses weigh more important messages than others.

“It’s surprising that when you start to put in more loops, you start to see behavior that you did not expect,” said Frañó. “From this paper we can imagine doing this with six, 20 or a hundred of these devices—then it gets exponentially rich from there. Ultimately the goal is to create a very large and complex network of these devices that will have the ability to learn and adapt.”
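The emergent behaviour Frañó describes, in which simple coupled devices collectively store and recover states that no single device holds, can be illustrated with a toy Hopfield-style network. This is a sketch of the general principle only, not the authors’ superconducting-loop model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network of N bistable "devices" (states ±1) with all-to-all couplings,
# illustrating how collective states emerge from many simple linked elements.
N = 64
patterns = rng.choice([-1, 1], size=(3, N))   # collective states to store

# Hebbian couplings: each stored pattern carves an attractor into the network.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def relax(state, steps=20):
    """Let every device repeatedly align with the field from all the others."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Corrupt 10 of the 64 devices, then let the network settle: the dynamics pull
# the state back toward the stored pattern, a property no single device has.
noisy = patterns[0].copy()
noisy[rng.choice(N, size=10, replace=False)] *= -1
recovered = relax(noisy)
print((recovered == patterns[0]).mean())   # overlap with the stored pattern
```

Adding more stored patterns, or more devices, makes the space of collective states richer, which is the qualitative point of the quote above.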

With eased pandemic restrictions, Frañó and his colleagues are back in the laboratory, testing the theoretical simulations described in the PNAS [Proceedings of the National Academy of Sciences] paper with real-world instruments.

Here’s a link to and a citation for the paper,

Low-temperature emergent neuromorphic networks with correlated oxide devices by Uday S. Goteti, Ivan A. Zaluzhnyy, Shriram Ramanathan, Robert C. Dynes, and Alex Frano. PNAS August 31, 2021 118 (35) e2103934118; DOI: https://doi.org/10.1073/pnas.2103934118

This paper is open access.

Highly scalable neuromorphic (brainlike) computing hardware

This work comes from Korea (or South Korea, if you prefer). An August 5, 2021 news item on ScienceDaily announces a step forward in the future production of neuromorphic hardware,

KAIST [The Korea Advanced Institute of Science and Technology] researchers fabricated a brain-inspired highly scalable neuromorphic hardware by co-integrating single transistor neurons and synapses. Using standard silicon complementary metal-oxide-semiconductor (CMOS) technology, the neuromorphic hardware is expected to reduce chip cost and simplify fabrication procedures.

Caption: Single transistor neurons and synapses fabricated using a standard silicon CMOS process. They are co-integrated on the same 8-inch wafer. Credit: KAIST

An August 5, 2021 Korea Advanced Institute of Science and Technology (KAIST) press release (also on EurekAlert), which originated the news item, provides more detail about the research,

The research team led by Yang-Kyu Choi and Sung-Yool Choi produced neurons and synapses based on single transistors for highly scalable neuromorphic hardware and showed the ability to recognize text and face images. This research was featured in Science Advances on August 4 [2021].

Neuromorphic hardware has attracted a great deal of attention because it can perform artificial intelligence functions while consuming ultra-low power of less than 20 watts by mimicking the human brain. To make neuromorphic hardware work, a neuron that generates a spike when integrating a certain signal, and a synapse remembering the connection between two neurons, are necessary, just as in the biological brain. However, since neurons and synapses constructed on digital or analog circuits occupy a large space, there is a limit in terms of hardware efficiency and costs. Since the human brain consists of about 10¹¹ neurons and 10¹⁴ synapses, it is necessary to reduce the hardware cost in order to apply it to mobile and IoT devices.

To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor, and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced. In addition, the neuromorphic transistors proved for the first time that they can be implemented with a ‘Janus structure’ that functions as both neuron and synapse, just like coins have heads and tails.

Professor Yang-Kyu Choi said that this work can dramatically reduce the hardware cost by replacing the neurons and synapses that were based on complex digital and analog circuits with a single transistor. “We have demonstrated that neurons and synapses can be implemented using a single transistor,” said Joon-Kyu Han, the first author. “By co-integrating single transistor neurons and synapses on the same wafer using a standard CMOS process, the hardware cost of the neuromorphic hardware has been improved, which will accelerate the commercialization of neuromorphic hardware,” Han added. This research was supported by the National Research Foundation (NRF) and the IC Design Education Center (IDEC).

Here’s a link to and a citation for the paper,

Cointegration of single-transistor neurons and synapses by nanoscale CMOS fabrication for highly scalable neuromorphic hardware by Joon-Kyu Han, Jungyeop Oh, Gyeong-Jun Yun, Dongeun Yoo, Myung-Su Kim, Ji-Man Yu, Sung-Yool Choi, and Yang-Kyu Choi. Science Advances 04 Aug 2021: Vol. 7, no. 32, eabg8836 DOI: 10.1126/sciadv.abg8836

This article appears to be open access.

Memristors, it’s all about the oxides

I have one research announcement from China and another from the Netherlands, both of which concern memristors and oxides.

China

A May 17, 2021 news item on Nanowerk announces work suggesting that memristors need not rely solely on electrically controlled oxides but could put light to good use,

Scientists are getting better at making neuron-like junctions for computers that mimic the human brain’s random information processing, storage and recall. Fei Zhuge of the Chinese Academy of Sciences and colleagues reviewed the latest developments in the design of these ‘memristors’ for the journal Science and Technology of Advanced Materials …

Computers apply artificial intelligence programs to recall previously learned information and make predictions. These programs are extremely energy- and time-intensive: typically, vast volumes of data must be transferred between separate memory and processing units. To solve this issue, researchers have been developing computer hardware that allows for more random and simultaneous information transfer and storage, much like the human brain.

Electronic circuits in these ‘neuromorphic’ computers include memristors that resemble the junctions between neurons called synapses. Energy flows through a material from one electrode to another, much like a neuron firing a signal across the synapse to the next neuron. Scientists are now finding ways to better tune this intermediate material so the information flow is more stable and reliable.

I had no success locating the original news release, which originated the news item, but have found this May 17, 2021 news item on eedesignit.com, which provides the remaining portion of the news release.

“Oxides are the most widely used materials in memristors,” said Zhuge. “But oxide memristors have unsatisfactory stability and reliability. Oxide-based hybrid structures can effectively improve this.”

Memristors are usually made of an oxide-based material sandwiched between two electrodes. Researchers are getting better results when they combine two or more layers of different oxide-based materials between the electrodes. When an electrical current flows through the network, it induces ions to drift within the layers. The ions’ movements ultimately change the memristor’s resistance, which is necessary to send or stop a signal through the junction.
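The ion-drift picture above can be sketched with the classic linear-drift memristor model; the parameter values below are illustrative, not taken from the review:

```python
import numpy as np

# Minimal linear ion-drift memristor model (in the spirit of the classic HP
# formulation); all parameter values are illustrative, not from the review.
R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped / fully undoped limits
MU, D = 1e-14, 1e-8             # ion mobility (m^2 s^-1 V^-1), film thickness (m)

def simulate(voltage, t, w0=0.5):
    """Track resistance as drifting ions move the doped-region boundary w
    (normalised 0..1) through the oxide film under an applied voltage."""
    w, dt = w0, t[1] - t[0]
    resistances = []
    for v in voltage:
        r = R_ON * w + R_OFF * (1 - w)   # doped and undoped regions in series
        i = v / r
        w = min(max(w + MU * R_ON / D**2 * i * dt, 0.0), 1.0)
        resistances.append(r)
    return np.array(resistances)

t = np.linspace(0, 1, 1000)
r = simulate(np.sin(2 * np.pi * t), t)   # one cycle of sinusoidal drive
print(r[0], r.min(), r.max())            # resistance swings as ions drift
```

The current moves the ions, the ions move the boundary, and the boundary sets the resistance: that feedback is what lets the device remember its history.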

Memristors can be tuned further by changing the compounds used for electrodes or by adjusting the intermediate oxide-based materials. Zhuge and his team are currently developing optoelectronic neuromorphic computers based on optically-controlled oxide memristors. Compared to electronic memristors, photonic ones are expected to have higher operation speeds and lower energy consumption. They could be used to construct next generation artificial visual systems with high computing efficiency.

Now for a picture that accompanied the news release, which follows,

Fig. The all-optically controlled memristor developed for optoelectronic neuromorphic computing (Image by NIMTE)

Here’s the February 7, 2021 Ningbo Institute of Materials Technology and Engineering (NIMTE) press release featuring this work and a more technical description,

A research group led by Prof. ZHUGE Fei at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences (CAS) developed an all-optically controlled (AOC) analog memristor, whose memconductance can be reversibly tuned by varying only the wavelength of the controlling light.

As the next generation of artificial intelligence (AI), neuromorphic computing (NC) emulates the neural structure and operation of the human brain at the physical level, and thus can efficiently perform multiple advanced computing tasks such as learning, recognition and cognition.

Memristors are promising candidates for NC thanks to the feasibility of high-density 3D integration and low energy consumption. Among them, the emerging optoelectronic memristors are competitive by virtue of combining the advantages of both photonics and electronics. However, the reversible tuning of memconductance depends highly on electric excitation, which has severely limited the development and application of optoelectronic NC.

To address this issue, researchers at NIMTE proposed a bilayered oxide AOC memristor, based on the relatively mature semiconductor material InGaZnO and a memconductance tuning mechanism of light-induced electron trapping and detrapping.

The traditional electrical memristors require strong electrical stimuli to tune their memconductance, leading to high power consumption, a large amount of Joule heat, microstructural change triggered by the Joule heat, and even high crosstalk in memristor crossbars.

On the contrary, the developed AOC memristor does not involve microstructure changes, and can operate upon weak light irradiation with a light power density of only 20 μW cm⁻², which has provided a new approach to overcome the instability of the memristor.

Specifically, the AOC memristor can serve as an excellent synaptic emulator and thus mimic spike-timing-dependent plasticity (STDP) which is an important learning rule in the brain, indicating its potential applications in AOC spiking neural networks for high-efficiency optoelectronic NC.
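For readers unfamiliar with STDP, the rule mentioned above can be sketched in a few lines; this is the standard pair-based textbook form with illustrative parameters, nothing specific to the NIMTE device:

```python
import math

# Pair-based STDP rule (standard textbook form; parameters are illustrative).
# A presynaptic spike shortly before a postsynaptic one strengthens the
# synapse; the reverse order weakens it, decaying exponentially with the gap.
A_PLUS, A_MINUS, TAU = 0.1, 0.12, 20.0   # amplitudes and time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, times in milliseconds."""
    dt = t_post - t_pre
    if dt > 0:                              # causal pairing: potentiation
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)    # anti-causal pairing: depression

print(stdp_dw(t_pre=0.0, t_post=10.0) > 0)   # True: pre-before-post strengthens
print(stdp_dw(t_pre=10.0, t_post=0.0) < 0)   # True: post-before-pre weakens
```

An optoelectronic synapse "mimics STDP" when its conductance change follows this kind of timing-dependent curve under paired stimuli.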

Moreover, compared to purely optical computing, the optoelectronic computing using our AOC memristor showed higher practical feasibility, on account of the simple structure and fabrication process of the device.

The study may shed light on the in-depth research and practical application of optoelectronic NC, and thus promote the development of the new generation of AI.

This work was supported by the National Natural Science Foundation of China (No. 61674156 and 61874125), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32050204), and the Zhejiang Provincial Natural Science Foundation of China (No. LD19E020001).

Here’s a link to and a citation for the paper,

Hybrid oxide brain-inspired neuromorphic devices for hardware implementation of artificial intelligence by Jingrui Wang, Xia Zhuge & Fei Zhuge. Science and Technology of Advanced Materials Volume 22, 2021 – Issue 1 Pages 326-344 DOI: https://doi.org/10.1080/14686996.2021.1911277 Published online: 14 May 2021

This paper appears to be open access.

Netherlands

In this case, a May 18, 2021 news item on Nanowerk marries oxides to spintronics,

Classic computers use binary values (0/1) to perform calculations. By contrast, our brain cells can use more values to operate, making them more energy-efficient than computers. This is why scientists are interested in neuromorphic (brain-like) computing.

Physicists from the University of Groningen (the Netherlands) have used a complex oxide to create elements comparable to the neurons and synapses in the brain using spins, a magnetic property of electrons.

The press release, which follows, was accompanied by this image illustrating the work,

Caption: Schematic of the proposed device structure for neuromorphic spintronic memristors. The write path is between the terminals through the top layer (black dotted line), the read path goes through the device stack (red dotted line). The right side of the figure indicates how the choice of substrate dictates whether the device will show deterministic or probabilistic behaviour. Credit: Banerjee group, University of Groningen

A May 18, 2021 University of Groningen press release (also on EurekAlert), which originated the news item, adds more ‘spin’ to the story,

Although computers can do straightforward calculations much faster than humans, our brains outperform silicon machines in tasks like object recognition. Furthermore, our brain uses less energy than computers. Part of this can be explained by the way our brain operates: whereas a computer uses a binary system (with values 0 or 1), brain cells can provide more analogue signals with a range of values.

Thin films

The operation of our brains can be simulated in computers, but the basic architecture still relies on a binary system. That is why scientists look for ways to expand this, creating hardware that is more brain-like, but will also interface with normal computers. ‘One idea is to create magnetic bits that can have intermediate states’, says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen. She works on spintronics, which uses a magnetic property of electrons called ‘spin’ to transport, manipulate and store information.

In this study, her PhD student Anouk Goossens, first author of the paper, created thin films of a ferromagnetic metal (strontium-ruthenate oxide, SRO) grown on a substrate of strontium titanate oxide. The resulting thin film contained magnetic domains that were perpendicular to the plane of the film. ‘These can be switched more efficiently than in-plane magnetic domains’, explains Goossens. By adapting the growth conditions, it is possible to control the crystal orientation in the SRO. Previously, out-of-plane magnetic domains have been made using other techniques, but these typically require complex layer structures.

Magnetic anisotropy

The magnetic domains can be switched using a current through a platinum electrode on top of the SRO. Goossens: ‘When the magnetic domains are oriented perfectly perpendicular to the film, this switching is deterministic: the entire domain will switch.’ However, when the magnetic domains are slightly tilted, the response is probabilistic: not all the domains are the same, and intermediate values occur when only part of the crystals in the domain have switched.

By choosing variants of the substrate on which the SRO is grown, the scientists can control its magnetic anisotropy. This allows them to produce two different spintronic devices. ‘This magnetic anisotropy is exactly what we wanted’, says Goossens. ‘Probabilistic switching compares to how neurons function, while the deterministic switching is more like a synapse.’

The scientists expect that in the future, brain-like computer hardware can be created by combining these different domains in a spintronic device that can be connected to standard silicon-based circuits. Furthermore, probabilistic switching would also allow for stochastic computing, a promising technology which represents continuous values by streams of random bits. Banerjee: ‘We have found a way to control intermediate states, not just for memory but also for computing.’
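The stochastic computing idea in the last paragraph is easy to sketch: a value in [0, 1] becomes the fraction of 1s in a random bitstream, and multiplication reduces to a bitwise AND of two independent streams. A minimal illustration (my own, not tied to the Groningen devices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stochastic computing sketch: a value in [0, 1] is encoded as the probability
# of 1s in a random bitstream; multiplying two values then reduces to a
# bitwise AND of two independent streams.
def encode(value, n_bits=100_000):
    """Random bitstream whose fraction of 1s approximates `value`."""
    return rng.random(n_bits) < value

def decode(stream):
    """Recover the encoded value as the observed fraction of 1s."""
    return stream.mean()

a, b = 0.8, 0.5
product_stream = encode(a) & encode(b)   # a single AND gate does the multiply
print(round(decode(product_stream), 2))  # close to a * b = 0.4
```

Probabilistically switching magnetic domains are attractive here precisely because they can generate such biased random bits in hardware.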

Here’s a link to and a citation for the paper,

Anisotropy and Current Control of Magnetization in SrRuO3/SrTiO3 Heterostructures for Spin-Memristors by A.S. Goossens, M.A.T. Leiviskä and T. Banerjee. Frontiers in Nanotechnology DOI: https://doi.org/10.3389/fnano.2021.680468 Published: 18 May 2021

This appears to be open access.

Synaptic transistor better than memristor when it comes to brainlike learning for computers

An April 30, 2021 news item on Nanowerk announced research in the field of neuromorphic (brainlike) computing from a joint team of researchers at Northwestern University (located in Evanston, Illinois, US) and the University of Hong Kong,

Researchers have developed a brain-like computing device that is capable of learning by association.

Similar to how famed physiologist Ivan Pavlov conditioned dogs to associate a bell with food, researchers at Northwestern University and the University of Hong Kong successfully conditioned their circuit to associate light with pressure.

The device’s secret lies within its novel organic, electrochemical “synaptic transistors,” which simultaneously process and store information just like the human brain. The researchers demonstrated that the transistor can mimic the short-term and long-term plasticity of synapses in the human brain, building on memories to learn over time.

With its brain-like ability, the novel transistor and circuit could potentially overcome the limitations of traditional computing, including their energy-sapping hardware and limited ability to perform multiple tasks at the same time. The brain-like device also has higher fault tolerance, continuing to operate smoothly even when some components fail.

“Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration,” said Northwestern’s Jonathan Rivnay, a senior author of the study. “This is thanks to the plasticity of the synapse, which is the basic building block of the brain’s computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner. In our work, we demonstrate an organic, plastic transistor that mimics key functions of a biological synapse.”

Rivnay is an assistant professor of biomedical engineering at Northwestern’s McCormick School of Engineering. He co-led the study with Paddy Chan, an associate professor of mechanical engineering at the University of Hong Kong. Xudong Ji, a postdoctoral researcher in Rivnay’s group, is the paper’s first author.

Caption: By connecting single synaptic transistors into a neuromorphic circuit, researchers demonstrated that their device could simulate associative learning. Credit: Northwestern University

An April 30, 2021 Northwestern University news release (also on EurekAlert), which originated the news item, includes a good explanation about brainlike computing and information about how synaptic transistors work along with some suggestions for future applications,

Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to consume large amounts of energy. Inspired by the combined computing and storage process in the human brain, researchers, in recent years, have sought to develop computers that operate more like the human brain, with arrays of devices that function like a network of neurons.

“The way our current computer systems work is that memory and logic are physically separated,” Ji said. “You perform computation and send that information to a memory unit. Then every time you want to retrieve that information, you have to recall it. If we can bring those two separate functions together, we can save space and save on energy costs.”

Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function, but memristors suffer from energy-costly switching and less biocompatibility. These drawbacks led researchers to the synaptic transistor — especially the organic electrochemical synaptic transistor, which operates with low voltages, continuously tunable memory and high compatibility for biological applications. Still, challenges exist.

“Even high-performing organic electrochemical synaptic transistors require the write operation to be decoupled from the read operation,” Rivnay said. “So if you want to retain memory, you have to disconnect it from the write process, which can further complicate integration into circuits or systems.”

How the synaptic transistor works

To overcome these challenges, the Northwestern and University of Hong Kong team optimized a conductive, plastic material within the organic, electrochemical transistor that can trap ions. In the brain, a synapse is a structure through which a neuron can transmit signals to another neuron, using small molecules called neurotransmitters. In the synaptic transistor, ions behave similarly to neurotransmitters, sending signals between terminals to form an artificial synapse. By retaining stored data from trapped ions, the transistor remembers previous activities, developing long-term plasticity.

The researchers demonstrated their device’s synaptic behavior by connecting single synaptic transistors into a neuromorphic circuit to simulate associative learning. They integrated pressure and light sensors into the circuit and trained the circuit to associate the two unrelated physical inputs (pressure and light) with one another.

Perhaps the most famous example of associative learning is Pavlov’s dog, which naturally drooled when it encountered food. After conditioning the dog to associate a bell ring with food, the dog also began drooling when it heard the sound of a bell. For the neuromorphic circuit, the researchers activated a voltage by applying pressure with a finger press. To condition the circuit to associate light with pressure, the researchers first applied pulsed light from an LED lightbulb and then immediately applied pressure. In this scenario, the pressure is the food and the light is the bell. The device’s corresponding sensors detected both inputs.

After one training cycle, the circuit made an initial connection between light and pressure. After five training cycles, the circuit significantly associated light with pressure. Light, alone, was able to trigger a signal, or “conditioned response.”
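The conditioning procedure just described amounts to a Hebbian weight update, which can be sketched in a few lines. The threshold, weights and learning rate below are illustrative stand-ins, not the circuit’s measured values:

```python
# Toy Hebbian model of the conditioning described above. Pressure alone
# already drives the output; pairing light with pressure grows the light
# weight until light alone crosses the threshold (the conditioned response).
THRESHOLD = 1.0
LEARNING_RATE = 0.25
w_pressure, w_light = 1.2, 0.1

def output(pressure, light):
    """Fire if the weighted inputs exceed the threshold."""
    return w_pressure * pressure + w_light * light >= THRESHOLD

print(output(pressure=0, light=1))   # False: light alone does nothing yet

for _ in range(5):                   # five paired light+pressure presentations
    w_light += LEARNING_RATE * 1 * 1 # Hebbian: input and output co-active

print(output(pressure=0, light=1))   # True: light alone now triggers a response
```

In the physical circuit, the growing weight is the ion-trapping transistor's conductance rather than a software variable.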

Future applications

Because the synaptic circuit is made of soft polymers, like a plastic, it can be readily fabricated on flexible sheets and easily integrated into soft, wearable electronics, smart robotics and implantable devices that directly interface with living tissue and even the brain [emphasis mine].

“While our application is a proof of concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation,” Rivnay said. “Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics.”

Here’s a link to and a citation for the paper,

Mimicking associative learning using an ion-trapping non-volatile synaptic organic electrochemical transistor by Xudong Ji, Bryan D. Paulsen, Gary K. K. Chik, Ruiheng Wu, Yuyang Yin, Paddy K. L. Chan & Jonathan Rivnay. Nature Communications volume 12, Article number: 2480 (2021) DOI: https://doi.org/10.1038/s41467-021-22680-5 Published: 30 April 2021

This paper is open access.

“… devices that directly interface with living tissue and even the brain,” would I be the only one thinking about cyborgs?

Mechano-photonic artificial synapse is bio-inspired

The word ‘memristor’ usually pops up when there’s research into artificial synapses but not in this new piece of research. I didn’t see any mention of the memristor in the paper’s references either, but I did find James Gimzewski from the University of California at Los Angeles (UCLA), whose research into brainlike computing (neuromorphic computing) is running in parallel with, but separately from, the memristor research.

Dr. Thamarasee Jeewandara has written a March 25, 2021 article for phys.org about the latest neuromorphic computing research (Note: Links have been removed)

Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience and materials science in China and the U.S. presented a bioinspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team used an optoelectronic transistor made of a graphene/molybdenum disulphide (MoS2) heterostructure and an integrated triboelectric nanogenerator to compose the artificial synapse. They controlled the charge transfer/exchange in the heterostructure with triboelectric potential and modulated the optoelectronic synapse behaviors readily, including postsynaptic photocurrents, photosensitivity and photoconductivity. The mechano-photonic artificial synapse is a promising implementation to mimic the complex biological nervous system and promote the development of interactive artificial intelligence. The work is now published in Science Advances.

The human brain can integrate cognition, learning and memory tasks via auditory, visual, olfactory and somatosensory interactions. This process is difficult to mimic using conventional von Neumann architectures that require additional sophisticated functions. Brain-inspired neural networks are made of various synaptic devices that transmit and process information using synaptic weights. Emerging photonic synapses combine optical and electric neuromorphic modulation and computation to offer a favorable option with high bandwidth, fast speed and low cross-talk to significantly reduce power consumption. Biomechanical motions including touch, eye blinking and arm waving are other ubiquitous triggers or interactive signals to operate electronics during artificial synapse plasticization. In this work, Yu et al. presented a mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The device contained an optoelectronic transistor and an integrated triboelectric nanogenerator (TENG) in contact-separation mode. The mechano-optical artificial synapses have huge functional potential as interactive optoelectronic interfaces, synthetic retinas and intelligent robots. [emphasis mine]

As you can see Jeewandara has written quite a technical summary of the work. Here’s an image from the Science Advances paper,

Fig. 1 Biological tactile/visual neurons and mechano-photonic artificial synapse. (A) Schematic illustrations of biological tactile/visual sensory system. (B) Schematic diagram of the mechano-photonic artificial synapse based on graphene/MoS2 (Gr/MoS2) heterostructure. (i) Top-view scanning electron microscope (SEM) image of the optoelectronic transistor; scale bar, 5 μm. The cyan area indicates the MoS2 flake, while the white strip is graphene. (ii) Illustration of charge transfer/exchange for Gr/MoS2 heterostructure. (iii) Output mechano-photonic signals from the artificial synapse for image recognition.
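The synergistic modulation the summary describes, with the triboelectric potential scaling the synapse’s response to light, can be caricatured in a few lines. The linear form and numbers are my own illustration, not the paper’s device model:

```python
# Caricature of the synergistic modulation: the triboelectric potential from
# mechanical displacement scales the gain of the optoelectronic synapse's
# response to a light stimulus. The linear form and numbers are illustrative.
def postsynaptic_current(light, displacement, base_gain=1.0, k=0.5):
    """Photoresponse with a mechanically tunable gain (arbitrary units)."""
    gain = base_gain + k * displacement   # triboelectric potential raises gain
    return gain * light

same_light = 10.0  # identical optical stimulus
print(postsynaptic_current(same_light, displacement=0.0))  # 10.0, baseline
print(postsynaptic_current(same_light, displacement=2.0))  # 20.0, enhanced
```

The point is that two sensory channels modulate one synaptic output, which is what makes the device interesting for multimodal, interactive systems.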

You can find the paper here,

Bioinspired mechano-photonic artificial synapse based on graphene/MoS2 heterostructure by Jinran Yu, Xixi Yang, Guoyun Gao, Yao Xiong, Yifei Wang, Jing Han, Youhui Chen, Huai Zhang, Qijun Sun and Zhong Lin Wang. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabd9117 DOI: 10.1126/sciadv.abd9117

This appears to be open access.

Memristor artificial neural network learning based on phase-change memory (PCM)

Caption: Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST. Credit: UNIST

I’m pretty sure that Professor Hongsik Jeong is the one on the right. He seems more relaxed, like he’s accustomed to posing for pictures highlighting his work.

Now on to the latest memristor news, which features the number 8.

For anyone unfamiliar with the term memristor, it’s a device (of sorts) which scientists, involved in neuromorphic computing (computers that operate like human brains), are researching as they attempt to replicate brainlike processes for computers.

From a January 22, 2021 Ulsan National Institute of Science and Technology (UNIST) press release (also on EurekAlert but published March 15, 2021),

An international team of researchers, affiliated with UNIST, has unveiled a novel technology that could improve the learning ability of artificial neural networks (ANNs).

Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST, in collaboration with researchers from Tsinghua University in China, proposed a new learning method to improve the learning ability of ANN chips by challenging its instability.

Artificial neural network chips are capable of mimicking the structural, functional and biological features of human neural networks, and thus have been considered the technology of the future. In this study, the research team demonstrated the effectiveness of the proposed learning method by building phase change memory (PCM) memristor arrays that operate like ANNs. This learning method is also advantageous in that its learning ability can be improved without additional power consumption, since PCM undergoes a spontaneous resistance increase due to the structural relaxation after amorphization.

ANNs, like human brains, use less energy even when performing computation and memory tasks simultaneously. However, an artificial neural network chip that integrates a large number of physical devices inevitably contains device errors. Existing artificial neural network learning methods assume a perfect, error-free chip, so the learning ability of real artificial neural networks is poor.

The research team developed a memristor artificial neural network learning method based on phase-change memory, reasoning that the real human brain does not require near-perfect operation. This learning method incorporates the “resistance drift” (increased electrical resistance) of the phase-change material in the memory semiconductor into learning. During the learning process, since the information update pattern is recorded in the form of increasing electrical resistance in the memristor, which serves as a synapse, the synapse additionally learns the association between the pattern it changes and the data it is learning.

In an experiment classifying handwritten digits 0-9, the research team showed that the developed learning method improves learning ability by about 3%. In particular, the accuracy for the digit 8, which is difficult to classify, improved significantly. [emphasis mine] The learning ability improved thanks to the synaptic update pattern, which changes differently according to the difficulty of the handwriting classification.
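The drift-assisted learning idea can be sketched as follows: each programmed update refreshes a device’s conductance while spontaneous drift shrinks every conductance a little, so frequently reinforced synapses come to dominate. The drift law and numbers below are illustrative, not the paper’s model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy picture of drift-assisted learning: each programmed update bumps a
# device's conductance, while spontaneous resistance drift shrinks every
# conductance a little at each step. Frequently reinforced synapses
# therefore come to dominate. Drift law and numbers are illustrative.
weights = rng.random(10)          # initial conductances of 10 PCM "synapses"
update_counts = np.zeros(10)

for _ in range(100):
    # Skewed usage: a few synapses are reinforced far more often than others.
    hot = rng.choice(10, p=[0.4, 0.3, 0.1] + [0.2 / 7] * 7)
    weights[hot] += 0.05          # programmed update refreshes the device
    update_counts[hot] += 1
    weights *= 0.99               # spontaneous drift decays every conductance

# Usage and surviving weight end up strongly correlated: rarely used synapses
# fade away, which acts as a built-in sparsifying regulariser.
print(np.corrcoef(update_counts, weights)[0, 1])
```

Because the decay happens physically in the PCM cells, this sparsifying effect costs no extra power, which is the advantage the release highlights.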

The researchers expect their findings to promote learning algorithms built on the intrinsic properties of memristor devices, opening a new direction for the development of neuromorphic computing chips.

Here’s a link to and a citation for the paper,

Spontaneous sparse learning for PCM-based memristor neural networks by Dong-Hyeok Lim, Shuang Wu, Rong Zhao, Jung-Hoon Lee, Hongsik Jeong & Luping Shi. Nature Communications volume 12, Article number: 319 (2021) DOI: https://doi.org/10.1038/s41467-020-20519-z Published: 12 January 2021

This paper is open access.

Supercomputing capability at home with Graphical Processing Units (GPUs)

Researchers at the University of Sussex (in the UK) have found a way to make your personal computer as powerful as a supercomputer according to a February 2, 2021 University of Sussex press release (also on EurekAlert),

University of Sussex academics have established a method of turbocharging desktop PCs to give them the same capability as supercomputers worth tens of millions of pounds.

Dr James Knight and Prof Thomas Nowotny from the University of Sussex’s School of Engineering and Informatics used the latest Graphical Processing Units (GPUs) to give a single desktop PC the capacity to simulate brain models of almost unlimited size.

The researchers believe the innovation, detailed in Nature Computational Science, will make it possible for many more researchers around the world to carry out research on large-scale brain simulation, including the investigation of neurological disorders.

Currently, the cost of supercomputers is so prohibitive they are only affordable to very large institutions and government agencies and so are not accessible for large numbers of researchers.

As well as shaving tens of millions of pounds off the cost of a supercomputer, the simulations run on the desktop PC require approximately 10 times less energy, bringing a significant sustainability benefit too.

Dr Knight, Research Fellow in Computer Science at the University of Sussex, said: “I think the main benefit of our research is one of accessibility. Outside of these very large organisations, academics typically have to apply to get even limited time on a supercomputer for a particular scientific purpose. This is quite a high barrier for entry which is potentially holding back a lot of significant research.

“Our hope for our own research now is to apply these techniques to brain-inspired machine learning so that we can help solve problems that biological brains excel at but which are currently beyond simulations.

“As well as the advances we have demonstrated in procedural connectivity in the context of GPU hardware, we also believe that there is potential for developing new types of neuromorphic hardware built from the ground up for procedural connectivity. Key components could be implemented directly in hardware which could lead to even more significant compute time improvements.”

The research builds on the work of US researcher Eugene Izhikevich, who pioneered a similar method for large-scale brain simulation in 2006.

At the time, computers were too slow for the method to be widely applicable, meaning that simulating large-scale brain models has until now only been possible for the minority of researchers privileged to have access to supercomputer systems.

The researchers applied Izhikevich’s technique to a modern GPU, with approximately 2,000 times the computing power available 15 years ago, to create a cutting-edge model of a Macaque’s visual cortex (with 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses) which previously could only be simulated on a supercomputer.

The researchers’ GPU accelerated spiking neural network simulator uses the large amount of computational power available on a GPU to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered – removing the need to store connectivity data in memory.
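As an illustrative sketch of procedural connectivity (not the authors' GeNN code, and with assumed population size, connection probability, and weight), the trick is to regenerate a neuron's outgoing connections from a deterministic per-neuron seed whenever it spikes, so the full connectivity matrix never has to be stored:

```python
import numpy as np

N_POST = 100_000   # postsynaptic population size (assumed)
P_CONN = 0.001     # connection probability (assumed)
W = 0.1            # fixed synaptic weight (assumed)

def outgoing_synapses(pre_id, base_seed=1234):
    """Regenerate the targets of neuron `pre_id` on demand.

    Seeding the RNG with the presynaptic index makes the draw
    deterministic: the same neuron always yields the same targets,
    so connectivity is recomputed, never stored.
    """
    rng = np.random.default_rng(base_seed + pre_id)
    mask = rng.random(N_POST) < P_CONN
    return np.flatnonzero(mask)

def propagate_spike(pre_id, input_current):
    """Deliver a spike by regenerating connectivity 'on the go'."""
    targets = outgoing_synapses(pre_id)
    input_current[targets] += W

# Identical targets every time, with no stored connectivity matrix:
assert np.array_equal(outgoing_synapses(42), outgoing_synapses(42))
```

The trade-off is recomputation per spike instead of memory per synapse, which suits GPUs, where arithmetic is cheap relative to memory bandwidth and capacity.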

Initialization of the researchers’ model took six minutes, and simulation of each biological second took 7.7 min in the ground state and 8.4 min in the resting state – up to 35% less time than a previous supercomputer simulation. In 2018, on one rack of an IBM Blue Gene/Q supercomputer, initialization of the model took around five minutes and simulating one second of biological time took approximately 12 minutes.

Prof Nowotny, Professor of Informatics at the University of Sussex, said: “Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10¹² synaptic connections meaning that simulations require several terabytes of data – an unrealistic memory requirement for a single desktop machine.
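The "several terabytes" figure is easy to sanity-check. Assuming a minimal layout of a 4-byte postsynaptic index plus a 4-byte weight per synapse (an assumption for illustration; real simulators store more):

```python
# Back-of-the-envelope check of the memory claim for a mouse-scale model.
SYNAPSES = 1e12              # ~1 x 10^12 synaptic connections
BYTES_PER_SYNAPSE = 4 + 4    # 4-byte target index + 4-byte weight (assumed)

total_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12
print(f"{total_tb:.0f} TB")  # 8 TB -- far beyond desktop RAM
```

Procedural connectivity sidesteps this entirely by never materializing those terabytes in the first place.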

“This research is a game-changer for computational neuroscience and AI researchers who can now simulate brain circuits on their local workstations, but it also allows people outside academia to turn their gaming PC into a supercomputer and run large neural networks.”

Here’s a link to and a citation for the paper,

Larger GPU-accelerated brain simulations with procedural connectivity by James C. Knight & Thomas Nowotny. Nature Computational Science (2021) DOI: https://doi.org/10.1038/s43588-020-00022-7 Published: 01 February 2021

This paper is behind a paywall.