The human brain works differently from a computer: while the brain works with biological cells and electrical impulses, a computer uses silicon-based transistors. Scientists have equipped a toy robot with a smart, adaptive electrical circuit made of soft organic materials, similar to biological matter. With this bio-inspired approach, they were able to teach the robot to navigate independently through a maze using visual signs for guidance.
The processor is the brain of a computer – an often-quoted phrase. But processors work fundamentally differently from the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived – for example, via the eyes, ears or sense of touch – is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.
Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven [Eindhoven University of Technology; Netherlands], Stanford [University; California, US], Brescia [University; Italy], Oxford [UK] and KAUST [King Abdullah University of Science and Technology, Saudi Arabia].
“We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.
To achieve navigation of the robot inside the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path through the maze toward the exit is indicated visually at each intersection. Initially, the robot often misinterprets the visual signs, makes the wrong “turning” decisions at the intersections and loses its way out. When the robot makes these wrong decisions and follows dead-end paths, it is discouraged from repeating them by corrective stimuli. The corrective stimuli, for example when the robot hits a wall, are applied directly to the organic circuit via electrical signals induced by a touch sensor attached to the robot. With each subsequent run of the experiment, the robot gradually learns to make the right “turning” decisions at the intersections, i.e. to avoid receiving corrective stimuli, and after a few trials it finds the way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
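The learning loop described above lends itself to a toy software sketch (a rough analogy only, not the authors' hardware: the maze layout, weights and learning rate below are all invented for illustration). Each intersection's weight plays the role of the organic device's adaptive state, and a corrective stimulus after a wrong turn nudges that state toward the correct decision:

```python
import random

# Toy model of the corrective-stimulus learning described above.
# Each maze intersection stores a weight, loosely analogous to the
# conductance state of the organic neuromorphic device. The correct
# turn at every intersection is fixed by the visual sign.
CORRECT_TURNS = ["left", "right", "left", "right"]  # hypothetical maze

def run_trial(weights, learning_rate=0.25):
    """One pass through the maze; returns the number of wrong turns."""
    errors = 0
    for i, correct in enumerate(CORRECT_TURNS):
        # A positive weight biases the robot toward the correct turn.
        p_correct = 0.5 + max(min(weights[i], 0.5), -0.5)
        turn = correct if random.random() < p_correct else "wrong"
        if turn != correct:
            errors += 1
            # Corrective stimulus (e.g. the touch sensor hitting a wall)
            # nudges the device state toward the correct decision.
            weights[i] += learning_rate
    return errors

weights = [0.0] * len(CORRECT_TURNS)
for trial in range(10):
    print(f"trial {trial}: wrong turns = {run_trial(weights)}")
```

Over repeated trials the wrong turns trend toward zero, mirroring how the robot in the paper stops receiving corrective stimuli once the circuit has adapted.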
“We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge,” says Gkoupidenis.
Here’s a link to and a citation for the paper,
Organic neuromorphic electronics for sensorimotor integration and learning in robotics by Imke Krauhausen, Dimitrios A. Koutsouras, Armantas Melianas, Scott T. Keene, Katharina Lieberth, Hadrien Ledanseur, Rajendar Sheelamanthula, Alexander Giovannitti, Fabrizio Torricelli, Iain Mcculloch, Paul W. M. Blom, Alberto Salleo, Yoeri van de Burgt and Paschalis Gkoupidenis. Science Advances • 10 Dec 2021 • Vol 7, Issue 50 • DOI: 10.1126/sciadv.abl5068
I don’t think I’ve ever seen a picture of a sea slug before. Its appearance reminds me of its terrestrial cousin.
As for some of the latest news on brainlike computing, a December 7, 2021 news item on Nanowerk makes an announcement from the Argonne National Laboratory (a US Department of Energy laboratory; Note: Links have been removed),
A team of scientists has discovered a new material that points the way toward more efficient artificial intelligence hardware for everything from self-driving cars to surgical robots.
For artificial intelligence (AI) to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.
A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.
The study, published in the Proceedings of the National Academy of Sciences [PNAS] (“Neuromorphic learning with Mott insulator NiO”), was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The team used the resources of the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne.
“Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism’s survival,” said Shriram Ramanathan, a Purdue professor of Materials Engineering. “We want to take advantage of that mature intelligence in animals to accelerate the development of AI.”
Two main signs of intelligence that neuroscientists have learned from sea slugs are habituation and sensitization. Habituation is getting used to a stimulus over time, such as tuning out noises when driving the same route to work every day. Sensitization is the opposite — it’s reacting strongly to a new stimulus, like avoiding bad food from a restaurant.
AI has a really hard time learning and storing new information without overwriting information it has already learned and stored, a problem that researchers studying brain-inspired computing call the “stability-plasticity dilemma.” Habituation would allow AI to “forget” unneeded information (achieving more stability) while sensitization could help with retaining new and important information (enabling plasticity).
In this study, the researchers found a way to demonstrate both habituation and sensitization in nickel oxide, a quantum material. Quantum materials are engineered to take advantage of features available only at nature’s smallest scales, and useful for information processing. If a quantum material could reliably mimic these forms of learning, then it may be possible to build AI directly into hardware. And if AI could operate both through hardware and software, it might be able to perform more complex tasks using less energy.
“We basically emulated experiments done on sea slugs in quantum materials toward understanding how these materials can be of interest for AI,” Ramanathan said.
Neuroscience studies have shown that the sea slug demonstrates habituation when it stops withdrawing its gill as much in response to tapping. But an electric shock to its tail causes its gill to withdraw much more dramatically, showing sensitization.
For nickel oxide, the equivalent of a “gill withdrawal” is an increased change in electrical resistance. The researchers found that repeatedly exposing the material to hydrogen gas causes nickel oxide’s change in electrical resistance to decrease over time, but introducing a new stimulus like ozone greatly increases the change in electrical resistance.
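The qualitative behavior reported here can be condensed into a toy model (a sketch of the trend, not the team's analysis: the decay rate and boost factor are illustrative numbers, not measured values). Repeated exposure to the same gas produces a shrinking resistance change (habituation), while a novel gas produces an amplified one (sensitization):

```python
# Qualitative toy model of the habituation/sensitization behavior
# reported for nickel oxide: repeated hydrogen exposure produces a
# smaller and smaller resistance change, while a novel stimulus
# (ozone) produces a large one. Constants here are illustrative.
def resistance_change(stimulus_history, new_stimulus, base_response=1.0,
                      habituation_rate=0.5, sensitization_boost=2.0):
    repeats = sum(1 for s in stimulus_history if s == new_stimulus)
    if repeats == 0 and stimulus_history:
        # A new stimulus after prior exposure: sensitized response.
        return base_response * sensitization_boost
    # A repeated stimulus: the response decays geometrically (habituation).
    return base_response * habituation_rate ** repeats

history = []
for stimulus in ["H2", "H2", "H2", "O3"]:
    dR = resistance_change(history, stimulus)
    print(f"{stimulus}: dR = {dR:.2f}")
    history.append(stimulus)
```

The successive hydrogen pulses yield responses of 1.0, 0.5 and 0.25, and the ozone pulse jumps to 2.0, echoing the gill-withdrawal pattern of the sea slug.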
Ramanathan and his colleagues used two experimental stations at the APS to test this theory, using X-ray absorption spectroscopy. A sample of nickel oxide was exposed to hydrogen and oxygen, and the ultrabright X-rays of the APS were used to see changes in the material at the atomic level over time.
“Nickel oxide is a relatively simple material,” said Argonne physicist Hua Zhou, a co-author on the paper who worked with the team at beamline 33-ID. “The goal was to use something easy to manufacture, and see if it would mimic this behavior. We looked at whether the material gained or lost a single electron after exposure to the gas.”
The research team also conducted scans at beamline 29-ID, which uses softer X-rays to probe different energy ranges. While the harder X-rays of 33-ID are more sensitive to the “core” electrons, those closer to the nucleus of the nickel oxide’s atoms, the softer X-rays can more readily observe the electrons on the outer shell. These are the electrons that define whether a material is conductive or resistive to electricity.
“We’re very sensitive to the change of resistivity in these samples,” said Argonne physicist Fanny Rodolakis, a co-author on the paper who led the work at beamline 29-ID. “We can directly probe how the electronic states of oxygen and nickel evolve under different treatments.”
Physicist Zhan Zhang and postdoctoral researcher Hui Cao, both of Argonne, contributed to the work, and are listed as co-authors on the paper. Zhang said the APS is well suited for research like this, due to its bright beam that can be tuned over different energy ranges.
For practical use of quantum materials as AI hardware, researchers will need to figure out how to apply habituation and sensitization in large-scale systems. They also would have to determine how a material could respond to stimuli while integrated into a computer chip.
This study is a starting place for guiding those next steps, the researchers said. Meanwhile, the APS is undergoing a massive upgrade that will not only increase the brightness of its beams by up to 500 times, but will allow for those beams to be focused much smaller than they are today. And this, Zhou said, will prove useful once this technology does find its way into electronic devices.
“If we want to test the properties of microelectronics,” he said, “the smaller beam that the upgraded APS will give us will be essential.”
In addition to the experiments performed at Purdue and Argonne, a team at Rutgers University performed detailed theory calculations to understand what was happening within nickel oxide at a microscopic level to mimic the sea slug’s intelligence features. The University of Georgia measured conductivity to further analyze the material’s behavior.
A version of this story was originally published by Purdue University
About the Advanced Photon Source
The U.S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.
This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
Researchers at Tohoku University and the University of Gothenburg have established a new spintronic technology for brain-inspired computing.
Their achievement was published in the journal Nature Materials (“Memristive control of mutual SHNO synchronization for neuromorphic computing”).
Sophisticated cognitive tasks, such as image and speech recognition, have seen recent breakthroughs thanks to deep learning. Even so, the human brain still executes these tasks without exerting much energy and with greater efficiency than any computer. The development of energy-efficient artificial neurons capable of emulating brain-inspired processes has therefore been a major research goal for decades.
Researchers demonstrated the first integration of a cognitive computing nano-element – the memristor – into another – a spintronic oscillator. Arrays of these memristor-controlled oscillators combine the non-volatile local storage of the memristor function with the microwave frequency computation of the nano-oscillator networks and can closely imitate the non-linear oscillatory neural networks of the human brain.
“So far, artificial neurons and synapses have been developed separately in many fields; this work marks an important milestone: two functional elements have been combined into one,” said professor Shunsuke Fukami, who led the project on the Tohoku University side. Dr. Mohammad Zahedinejad of the University of Gothenburg and first author of the study adds, “Using the memristor-controlled spintronic oscillator arrays, we could tune the synaptic interactions between adjacent neurons and program them into mutually different and partially synchronized states.”
To put into practice their discovery, the researchers examined the operation of a test device comprising one oscillator and one memristor. The constricted region of W/CoFeB stack served as an oscillator, i.e., the neuron, whereas the MgO/AlOx/SiNx stack acted as a memristor, i.e., the synapse.
Resistance of the memristor changed with the voltage hysteresis applied to the top Ti/Cu electrode. Upon voltage application to the electrode, an electric field was applied at the high-resistance state, compared to electric current flows for the low-resistance state. The effects of electric field and current on the oscillator differed from each other, offering various controls of oscillation and synchronization properties.
Professor Johan Åkerman of the University of Gothenburg and leader of the study expressed his hopes for the future and the significance of the finding. “We are particularly interested in emerging quantum-inspired computing schemes, such as Ising Machines. The results also highlight the productive collaboration that we have established in neuromorphic spintronics between the University of Gothenburg and Tohoku University, something that is also part of the Sweden-Japan collaborative network MIRAI 2.0.”
A September 1, 2021 news item on ScienceDaily announces a new type of memristor from Texas A&M University (Texas A&M or TAMU) and the National University of Singapore (NUS),
In a discovery published in the journal Nature, an international team of researchers has described a novel molecular device with exceptional computing prowess.
Reminiscent of the plasticity of connections in the human brain, the device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, just as nerve cells can store memories, the same device can also retain information for future retrieval and processing.
“The brain has the remarkable ability to change its wiring around by making and breaking connections between nerve cells. Achieving something comparable in a physical system has been extremely challenging,” said Dr. R. Stanley Williams [emphasis mine], professor in the Department of Electrical and Computer Engineering at Texas A&M University. “We have now created a molecular device with dramatic reconfigurability, which is achieved not by changing physical connections like in the brain, but by reprogramming its logic.”
Dr. T. Venkatesan, director of the Center for Quantum Research and Technology (CQRT) at the University of Oklahoma, Scientific Affiliate at the National Institute of Standards and Technology, Gaithersburg, and adjunct professor of electrical and computer engineering at the National University of Singapore, added that their molecular device might in the future help design next-generation processing chips with enhanced computational power and speed while consuming significantly less energy.
Whether it is the familiar laptop or a sophisticated supercomputer, digital technologies face a common nemesis, the von Neumann bottleneck. This delay in computational processing is a consequence of current computer architectures, wherein the memory, containing data and programs, is physically separated from the processor. As a result, computers spend a significant amount of time shuttling information between the two systems, causing the bottleneck. Also, despite extremely fast processor speeds, these units can be idling for extended amounts of time during periods of information exchange.
As an alternative to conventional electronic parts used for designing memory units and processors, devices called memristors offer a way to circumvent the von Neumann bottleneck. Memristors, such as those made of niobium dioxide and vanadium dioxide, transition from being an insulator to a conductor at a set temperature. This property gives these types of memristors the ability to perform computations and store data.
However, despite their many advantages, these metal oxide memristors are made of rare-earth elements and can operate only in restrictive temperature regimes. Hence, there has been an ongoing search for promising organic molecules that can perform a comparable memristive function, said Williams.
Dr. Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science, designed the material used in this work. The compound has a central metal atom (iron) bound to three phenyl azo pyridine organic molecules called ligands.
“This behaves like an electron sponge that can absorb as many as six electrons reversibly, resulting in seven different redox states,” said Sreebrata. “The interconnectivity between these states is the key behind the reconfigurability shown in this work.”
Dr. Sreetosh Goswami, a researcher at the National University of Singapore, devised this project by creating a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a layer of gold on top and gold-infused nanodisc and indium tin oxide at the bottom.
On applying a negative voltage on the device, Sreetosh witnessed a current-voltage profile that was nothing like anyone had seen before. Unlike metal-oxide memristors that can switch from metal to insulator at only one fixed voltage, the organic molecular devices could switch back and forth from insulator to conductor at several discrete sequential voltages.
“So, if you think of the device as an on-off switch, as we were sweeping the voltage more negative, the device first switched from on to off, then off to on, then on to off and then back to on. I’ll say that we were just blown out of our seat,” said Venkatesan. “We had to convince ourselves that what we were seeing was real.”
Sreetosh and Sreebrata investigated the molecular mechanisms underlying the curious switching behavior using an imaging technique called Raman spectroscopy. In particular, they looked for spectral signatures in the vibrational motion of the organic molecule that could explain the multiple transitions. Their investigation revealed that sweeping the voltage negative triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events that caused the molecule to transition between off state and on states.
Next, to describe the extremely complex current-voltage profile of the molecular device mathematically, Williams deviated from the conventional approach of basic physics-based equations. Instead, he described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, a commonplace line of code in several computer programs, particularly digital games.
“Video games have a structure where you have a character that does something, and then something occurs as a result. And so, if you write that out in a computer algorithm, they are if-then-else statements,” said Williams. “Here, the molecule is switching from on to off as a consequence of applied voltage, and that’s when I had the eureka moment to use decision trees to describe these devices, and it worked very well.”
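The flavor of this approach can be shown with a schematic decision tree (the voltage thresholds and state transitions below are hypothetical, chosen only to illustrate the if-then-else form, not the tree fitted to the measured data in the paper):

```python
# Illustrative decision tree in the "if-then-else" style Williams
# describes: the device switches between on and off states at several
# discrete voltages as the sweep goes more negative. The thresholds
# and transitions here are hypothetical, chosen only to show the form.
def next_state(voltage, state):
    if voltage > -1.0:
        return state          # above the first threshold: no change
    elif voltage > -2.0:
        return "off" if state == "on" else state
    elif voltage > -3.0:
        return "on" if state == "off" else state
    else:
        return "off"          # deepest sweep: the device switches off

state = "on"
for v in [-0.5, -1.5, -2.5, -3.5]:  # progressively more negative sweep
    state = next_state(v, state)
    print(f"V = {v:+.1f} -> state: {state}")
```

Sweeping the voltage more negative toggles the state on-off-on-off at discrete thresholds, the behavior that a basic physics-based switching equation with a single threshold cannot capture.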
But the researchers went a step further to exploit these molecular devices to run programs for different real-world computational tasks. Sreetosh showed experimentally that their devices could perform fairly complex computations in a single time step and then be reprogrammed to perform another task in the next instant.
“It was quite extraordinary; our device was doing something like what the brain does, but in a very different way,” said Sreetosh. “When you’re learning something new or when you’re deciding, the brain can actually reconfigure and change physical wiring around. Similarly, we can logically reprogram or reconfigure our devices by giving them a different voltage pulse than they’ve seen before.”
Venkatesan noted that it would take thousands of transistors to perform the same computational functions as one of their molecular devices with its different decision trees. Hence, he said their technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited.
Other contributors to the research include Dr. Abhijeet Patra and Dr. Ariando from the National University of Singapore; Dr. Rajib Pramanick and Dr. Santi Prasad Rath from the Indian Association for the Cultivation of Science; Dr. Martin Foltin from Hewlett Packard Enterprise, Colorado; and Dr. Damien Thompson from the University of Limerick, Ireland.
Venkatesan said that this research is indicative of the future discoveries from this collaborative team, which will include the center of nanoscience and engineering at the Indian Institute of Science and the Microsystems and Nanotechnology Division at the NIST.
I’ve highlighted R. Stanley Williams because he and his team at HP [Hewlett Packard] Labs helped to kick off current memristor research in 2008 with the publication of two papers as per my April 5, 2010 posting,
In 2008, two memristor papers were published in Nature and Nature Nanotechnology, respectively. In the first (Nature, May 2008 [article still behind a paywall]), a team at HP Labs claimed they had proved the existence of memristors (a fourth member of electrical engineering’s ‘Holy Trinity of the capacitor, resistor, and inductor’). In the second paper (Nature Nanotechnology, July 2008 [article still behind a paywall]), the team reported that they had achieved engineering control.
Many electronic devices today are dependent on semiconductor logic circuits based on switches hard-wired to perform predefined logic functions. Physicists from the National University of Singapore (NUS), together with an international team of researchers, have developed a novel molecular memristor, or an electronic memory device, that has exceptional memory reconfigurability.
Unlike hard-wired standard circuits, the molecular device can be reconfigured using voltage to embed different computational tasks. The energy-efficient new technology, which is capable of enhanced computational power and speed, can potentially be used in edge computing, as well as handheld devices and applications with limited power resource.
“This work is a significant breakthrough in our quest to design low-energy computing. The idea of using multiple switching in a single element draws inspiration from how the brain works and fundamentally reimagines the design strategy of a logic circuit,” said Associate Professor Ariando from the NUS Department of Physics who led the research.
The research was first published in the journal Nature on 1 September 2021, and carried out in collaboration with the Indian Association for the Cultivation of Science, Hewlett Packard Enterprise, the University of Limerick, the University of Oklahoma, and Texas A&M University.
“This new discovery can contribute to developments in edge computing as a sophisticated in-memory computing approach to overcome the von Neumann bottleneck, a delay in computational processing seen in many digital technologies due to the physical separation of memory storage from a device’s processor,” said Assoc Prof Ariando. The new molecular device also has the potential to contribute to designing next generation processing chips with enhanced computational power and speed.
“Similar to the flexibility and adaptability of connections in the human brain, our memory device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like how nerve cells can store memories, the same device can also retain information for future retrieval and processing,” said first author Dr Sreetosh Goswami, Research Fellow from the Department of Physics at NUS.
Research team member Dr Sreebrata Goswami, who was a Senior Research Scientist at NUS and previously Professor at the Indian Association for the Cultivation of Science, conceptualised and designed a molecular system belonging to the chemical family of phenyl azo pyridines that have a central metal atom bound to organic molecules called ligands. “These molecules are like electron sponges that can offer as many as six electron transfers resulting in five different molecular states. The interconnectivity between these states is the key behind the device’s reconfigurability,” explained Dr Sreebrata Goswami.
Dr Sreetosh Goswami created a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a top layer of gold, and a bottom layer of gold-infused nanodisc and indium tin oxide. He observed an unprecedented current-voltage profile upon applying a negative voltage to the device. Unlike conventional metal-oxide memristors that are switched on and off at only one fixed voltage, these organic molecular devices could switch between on-off states at several discrete sequential voltages.
Using an imaging technique called Raman spectroscopy, spectral signatures in the vibrational motion of the organic molecule were observed to explain the multiple transitions. Dr Sreebrata Goswami explained, “Sweeping the negative voltage triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events which caused the molecule to transition between off and on states.”
The researchers described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, which is used in the coding of several computer programs, particularly digital games, as compared to the conventional approach of using basic physics-based equations.
New possibilities for energy-efficient devices
Building on their research, the team used the molecular memory devices to run programs for different real-world computational tasks. As a proof of concept, the team demonstrated that their technology could perform complex computations in a single step, and could be reprogrammed to perform another task in the next instant. An individual molecular memory device could perform the same computational functions as thousands of transistors, making the technology a more powerful and energy-efficient memory option.
“The technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited,” added Assoc Prof Ariando.
The team is in the midst of building new electronic devices incorporating their innovation, and is working with collaborators to conduct simulation and benchmarking relating to existing technologies.
Other contributors to the research paper include Abhijeet Patra and Santi Prasad Rath from NUS, Rajib Pramanick from the Indian Association for the Cultivation of Science, Martin Foltin from Hewlett Packard Enterprise, Damien Thompson from the University of Limerick, T. Venkatesan from the University of Oklahoma, and R. Stanley Williams from Texas A&M University.
Here’s a link to and a citation for the paper,
Decision trees within a molecular memristor by Sreetosh Goswami, Rajib Pramanick, Abhijeet Patra, Santi Prasad Rath, Martin Foltin, A. Ariando, Damien Thompson, T. Venkatesan, Sreebrata Goswami & R. Stanley Williams. Nature volume 597, pages 51–56 (2021) • DOI: https://doi.org/10.1038/s41586-021-03748-0 • Published 01 September 2021 • Issue date 02 September 2021
This breakthrough in neuromorphic (brainlike) computing is being attributed to the pandemic (COVID-19) according to a September 3, 2021 news item on phys.org,
Isaac Newton’s groundbreaking scientific productivity while isolated from the spread of bubonic plague is legendary. University of California San Diego physicists can now claim a stake in the annals of pandemic-driven science.
A team of UC San Diego [University of California San Diego] researchers and colleagues at Purdue University have now simulated the foundation of new types of artificial intelligence computing devices that mimic brain functions, an achievement that resulted from the COVID-19 pandemic lockdown. By combining new supercomputing materials with specialized oxides, the researchers successfully demonstrated the backbone of networks of circuits and devices that mirror the connectivity of neurons and synapses in biologically based neural networks.
As bandwidth demands on today’s computers and other devices reach their technological limit, scientists are working towards a future in which new materials can be orchestrated to mimic the speed and precision of animal-like nervous systems. Neuromorphic computing based on quantum materials, which display quantum-mechanics-based properties, allows scientists to move beyond the limits of traditional semiconductor materials. This advanced versatility opens the door to new-age devices that are far more flexible with lower energy demands than today’s devices. Some of these efforts are being led by Department of Physics Assistant Professor Alex Frañó and other researchers in UC San Diego’s Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C), a Department of Energy-supported Energy Frontier Research Center.
“In the past 50 years we’ve seen incredible technological achievements that resulted in computers that were progressively smaller and faster—but even these devices have limits for data storage and energy consumption,” said Frañó, who served as one of the PNAS paper’s authors, along with former UC San Diego chancellor, UC president and physicist Robert Dynes. “Neuromorphic computing is inspired by the emergent processes of the millions of neurons, axons and dendrites that are connected all over our body in an extremely complex nervous system.”
As experimental physicists, Frañó and Dynes are typically busy in their laboratories using state-of-the-art instruments to explore new materials. But with the onset of the pandemic, Frañó and his colleagues were forced into isolation with concerns about how they would keep their research moving forward. They eventually came to the realization that they could advance their science from the perspective of simulations of quantum materials.
“This is a pandemic paper,” said Frañó. “My co-authors and I decided to study this issue from a more theoretical perspective so we sat down and started having weekly (Zoom-based) meetings. Eventually the idea developed and took off.”
The researchers’ innovation was based on joining two types of quantum substances—superconducting materials based on copper oxide and metal insulator transition materials that are based on nickel oxide. They created basic “loop devices” that could be precisely controlled at the nano-scale with helium and hydrogen, reflecting the way neurons and synapses are connected. Adding more of these devices that link and exchange information with each other, the simulations showed that eventually they would allow the creation of an array of networked devices that display emergent properties like an animal’s brain.
Like the brain, neuromorphic devices are being designed to enhance connections that are more important than others, similar to the way synapses weigh more important messages than others.
“It’s surprising that when you start to put in more loops, you start to see behavior that you did not expect,” said Frañó. “From this paper we can imagine doing this with six, 20 or a hundred of these devices—then it gets exponentially rich from there. Ultimately the goal is to create a very large and complex network of these devices that will have the ability to learn and adapt.”
With eased pandemic restrictions, Frañó and his colleagues are back in the laboratory, testing the theoretical simulations described in the PNAS [Proceedings of the National Academy of Sciences] paper with real-world instruments.
This work comes from Korea (or South Korea, if you prefer). An August 5, 2021 news item on ScienceDaily announces a step forward in the future production of neuromorphic hardware,
KAIST [The Korea Advanced Institute of Science and Technology] researchers fabricated a brain-inspired highly scalable neuromorphic hardware by co-integrating single transistor neurons and synapses. Using standard silicon complementary metal-oxide-semiconductor (CMOS) technology, the neuromorphic hardware is expected to reduce chip cost and simplify fabrication procedures.
The research team led by Yang-Kyu Choi and Sung-Yool Choi produced neurons and synapses based on a single transistor for highly scalable neuromorphic hardware and showed the ability to recognize text and face images. This research was featured in Science Advances on August 4.
Neuromorphic hardware has attracted a great deal of attention because it can perform artificial intelligence functions while consuming ultra-low power of less than 20 watts by mimicking the human brain. To make neuromorphic hardware work, a neuron that generates a spike when integrating a certain signal, and a synapse remembering the connection between two neurons, are necessary, just as in the biological brain. However, since neurons and synapses constructed on digital or analog circuits occupy a large space, there is a limit in terms of hardware efficiency and costs. Since the human brain consists of about 10¹¹ neurons and 10¹⁴ synapses, it is necessary to improve the hardware cost in order to apply it to mobile and IoT devices.
To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor, and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced. In addition, the neuromorphic transistors proved for the first time that they can be implemented with a ‘Janus structure’ that functions as both neuron and synapse, just like coins have heads and tails.
Professor Yang-Kyu Choi said that this work can dramatically reduce the hardware cost by replacing the neurons and synapses that were based on complex digital and analog circuits with a single transistor. “We have demonstrated that neurons and synapses can be implemented using a single transistor,” said Joon-Kyu Han, the first author. “By co-integrating single transistor neurons and synapses on the same wafer using a standard CMOS process, the hardware cost of the neuromorphic hardware has been improved, which will accelerate the commercialization of neuromorphic hardware,” Han added. This research was supported by the National Research Foundation (NRF) and IC Design Education Center (IDEC).
I have one research announcement from China and another from the Netherlands, both of which concern memristors and oxides.
A May 17, 2021 news item on Nanowerk announces work suggesting that memristors need not rely solely on oxides but could instead make greater use of light,
Scientists are getting better at making neuron-like junctions for computers that mimic the human brain’s random information processing, storage and recall. Fei Zhuge of the Chinese Academy of Sciences and colleagues reviewed the latest developments in the design of these ‘memristors’ for the journal Science and Technology of Advanced Materials …
Computers apply artificial intelligence programs to recall previously learned information and make predictions. These programs are extremely energy- and time-intensive: typically, vast volumes of data must be transferred between separate memory and processing units. To solve this issue, researchers have been developing computer hardware that allows for more random and simultaneous information transfer and storage, much like the human brain.
Electronic circuits in these ‘neuromorphic’ computers include memristors that resemble the junctions between neurons called synapses. Energy flows through a material from one electrode to another, much like a neuron firing a signal across the synapse to the next neuron. Scientists are now finding ways to better tune this intermediate material so the information flow is more stable and reliable.
I had no success locating the original news release, which originated the news item, but have found this May 17, 2021 news item on eedesignit.com, which provides the remaining portion of the news release.
“Oxides are the most widely used materials in memristors,” said Zhuge. “But oxide memristors have unsatisfactory stability and reliability. Oxide-based hybrid structures can effectively improve this.”
Memristors are usually made of an oxide-based material sandwiched between two electrodes. Researchers are getting better results when they combine two or more layers of different oxide-based materials between the electrodes. When an electrical current flows through the network, it induces ions to drift within the layers. The ions’ movements ultimately change the memristor’s resistance, which is necessary to send or stop a signal through the junction.
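The ion-drift picture above can be caricatured in a few lines of code. This is a deliberately simplified toy model: the class, the parameter values and the linear state-to-resistance mapping are all illustrative assumptions, not taken from any device discussed in these news releases.

```python
# Toy memristor: voltage pulses drive ion drift, which moves the device
# resistance between two bounds. All parameters are illustrative.

class ToyMemristor:
    def __init__(self, r_min=1e3, r_max=1e5, drift_rate=0.05):
        self.r_min = r_min            # fully "on" resistance (ohms)
        self.r_max = r_max            # fully "off" resistance (ohms)
        self.drift_rate = drift_rate  # state change per unit voltage
        self.x = 0.5                  # internal state: fraction of ions drifted

    def apply_pulse(self, voltage):
        # Positive pulses drift ions one way (lowering resistance),
        # negative pulses drift them back; the state stays in [0, 1].
        self.x = min(1.0, max(0.0, self.x + self.drift_rate * voltage))
        return self.resistance()

    def resistance(self):
        # Resistance interpolates linearly between the two bounds.
        return self.r_max - self.x * (self.r_max - self.r_min)

m = ToyMemristor()
r_before = m.resistance()
m.apply_pulse(+1.0)   # a "set" pulse lowers the resistance
assert m.resistance() < r_before
```

The essential point the sketch captures is that the device's history of applied voltages is stored in its resistance, which is what lets a memristor act as both memory and processing element.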
Memristors can be tuned further by changing the compounds used for electrodes or by adjusting the intermediate oxide-based materials. Zhuge and his team are currently developing optoelectronic neuromorphic computers based on optically-controlled oxide memristors. Compared to electronic memristors, photonic ones are expected to have higher operation speeds and lower energy consumption. They could be used to construct next generation artificial visual systems with high computing efficiency.
Now for a picture that accompanied the news release, which follows,
A research group led by Prof. ZHUGE Fei at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences (CAS) developed an all-optically controlled (AOC) analog memristor, whose memconductance can be reversibly tuned by varying only the wavelength of the controlling light.
As the next generation of artificial intelligence (AI), neuromorphic computing (NC) emulates the neural structure and operation of the human brain at the physical level, and thus can efficiently perform multiple advanced computing tasks such as learning, recognition and cognition.
Memristors are promising candidates for NC thanks to the feasibility of high-density 3D integration and low energy consumption. Among them, the emerging optoelectronic memristors are competitive by virtue of combining the advantages of both photonics and electronics. However, the reversible tuning of memconductance depends highly on the electric excitation, which has severely limited the development and application of optoelectronic NC.
To address this issue, researchers at NIMTE proposed a bilayered oxide AOC memristor, based on the relatively mature semiconductor material InGaZnO and a memconductance tuning mechanism of light-induced electron trapping and detrapping.
Traditional electrical memristors require strong electrical stimuli to tune their memconductance, leading to high power consumption, a large amount of Joule heat, microstructural change triggered by the Joule heat, and even high crosstalk in memristor crossbars.
In contrast, the developed AOC memristor does not involve microstructural changes and can operate upon weak light irradiation with a light power density of only 20 μW cm⁻², which provides a new approach to overcoming the instability of the memristor.
Specifically, the AOC memristor can serve as an excellent synaptic emulator and thus mimic spike-timing-dependent plasticity (STDP) which is an important learning rule in the brain, indicating its potential applications in AOC spiking neural networks for high-efficiency optoelectronic NC.
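The STDP rule mentioned above is compact enough to state directly in code. The sketch below implements the standard pair-based form: a presynaptic spike arriving just before the postsynaptic neuron fires strengthens the synapse, one arriving just after weakens it, and the effect fades as the spikes move apart in time. The constants are illustrative defaults, not values from the NIMTE device.

```python
import math

# Minimal pair-based STDP rule. dt_ms is the postsynaptic spike time
# minus the presynaptic one; constants are illustrative.

def stdp_dw(dt_ms, a_plus=0.10, a_minus=0.12, tau_ms=20.0):
    """Weight change for one pre/post spike pair."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)   # potentiation
    return -a_minus * math.exp(dt_ms / tau_ms)      # depression

# Causal pairing strengthens, anti-causal pairing weakens, and the
# magnitude decays as the spikes move further apart.
assert stdp_dw(10) > 0 > stdp_dw(-10)
assert stdp_dw(5) > stdp_dw(40)
```

In an optically controlled memristor, the claim is that light pulses standing in for pre- and post-synaptic spikes can reproduce this kind of timing-dependent conductance change.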
Moreover, compared to purely optical computing, the optoelectronic computing using our AOC memristor showed higher practical feasibility, on account of the simple structure and fabrication process of the device.
The study may shed light on the in-depth research and practical application of optoelectronic NC, and thus promote the development of the new generation of AI.
This work was supported by the National Natural Science Foundation of China (No. 61674156 and 61874125), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32050204), and the Zhejiang Provincial Natural Science Foundation of China (No. LD19E020001).
Classic computers use binary values (0/1) to perform computations. By contrast, our brain cells can operate over a range of values, making them more energy-efficient than computers. This is why scientists are interested in neuromorphic (brain-like) computing.
Physicists from the University of Groningen (the Netherlands) have used a complex oxide to create elements comparable to the neurons and synapses in the brain using spins, a magnetic property of electrons.
The press release, which follows, was accompanied by this image illustrating the work,
Although computers can do straightforward calculations much faster than humans, our brains outperform silicon machines in tasks like object recognition. Furthermore, our brain uses less energy than computers. Part of this can be explained by the way our brain operates: whereas a computer uses a binary system (with values 0 or 1), brain cells can provide more analogue signals with a range of values.
The operation of our brains can be simulated in computers, but the basic architecture still relies on a binary system. That is why scientists look for ways to expand this, creating hardware that is more brain-like, but will also interface with normal computers. ‘One idea is to create magnetic bits that can have intermediate states’, says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen. She works on spintronics, which uses a magnetic property of electrons called ‘spin’ to transport, manipulate and store information.
In this study, her PhD student Anouk Goossens, first author of the paper, created thin films of a ferromagnetic metal (strontium-ruthenate oxide, SRO) grown on a substrate of strontium titanate oxide. The resulting thin film contained magnetic domains that were perpendicular to the plane of the film. ‘These can be switched more efficiently than in-plane magnetic domains’, explains Goossens. By adapting the growth conditions, it is possible to control the crystal orientation in the SRO. Previously, out-of-plane magnetic domains have been made using other techniques, but these typically require complex layer structures.
The magnetic domains can be switched using a current through a platinum electrode on top of the SRO. Goossens: ‘When the magnetic domains are oriented perfectly perpendicular to the film, this switching is deterministic: the entire domain will switch.’ However, when the magnetic domains are slightly tilted, the response is probabilistic: not all the domains are the same, and intermediate values occur when only part of the crystals in the domain have switched.
By choosing variants of the substrate on which the SRO is grown, the scientists can control its magnetic anisotropy. This allows them to produce two different spintronic devices. ‘This magnetic anisotropy is exactly what we wanted’, says Goossens. ‘Probabilistic switching compares to how neurons function, while the deterministic switching is more like a synapse.’
The scientists expect that in the future, brain-like computer hardware can be created by combining these different domains in a spintronic device that can be connected to standard silicon-based circuits. Furthermore, probabilistic switching would also allow for stochastic computing, a promising technology which represents continuous values by streams of random bits. Banerjee: ‘We have found a way to control intermediate states, not just for memory but also for computing.’
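The stochastic computing mentioned above is easy to demonstrate: a value in [0, 1] is encoded as the probability of seeing a 1 in a random bit stream, and multiplication then reduces to a bitwise AND of two streams. A minimal sketch, where the stream length and example values are arbitrary choices:

```python
import random

# Stochastic-computing sketch: values are encoded as the probability
# of a 1 in a random bit stream; AND of two independent streams then
# approximates multiplication of the encoded values.

def encode(value, n_bits, rng):
    """Encode a probability in [0, 1] as a random bit stream."""
    return [1 if rng.random() < value else 0 for _ in range(n_bits)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s."""
    return sum(stream) / len(stream)

rng = random.Random(0)
a = encode(0.8, 10_000, rng)
b = encode(0.5, 10_000, rng)
product = [x & y for x, y in zip(a, b)]   # AND of streams ≈ multiplication
# decode(product) lands near 0.8 * 0.5 = 0.4
```

The appeal for hardware like the probabilistically switching domains described above is that a single noisy device can generate such bit streams natively, with accuracy traded off against stream length.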
An April 30, 2021 news item on Nanowerk announced research in the field of neuromorphic (brainlike) computing from a joint team of researchers at Northwestern University (located in Evanston, Illinois, US) and the University of Hong Kong,
Researchers have developed a brain-like computing device that is capable of learning by association.
Similar to how famed physiologist Ivan Pavlov conditioned dogs to associate a bell with food, researchers at Northwestern University and the University of Hong Kong successfully conditioned their circuit to associate light with pressure.
The device’s secret lies within its novel organic, electrochemical “synaptic transistors,” which simultaneously process and store information just like the human brain. The researchers demonstrated that the transistor can mimic the short-term and long-term plasticity of synapses in the human brain, building on memories to learn over time.
With its brain-like ability, the novel transistor and circuit could potentially overcome the limitations of traditional computing, including its energy-sapping hardware and limited ability to perform multiple tasks at the same time. The brain-like device also has higher fault tolerance, continuing to operate smoothly even when some components fail.
“Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration,” said Northwestern’s Jonathan Rivnay, a senior author of the study. “This is thanks to the plasticity of the synapse, which is the basic building block of the brain’s computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner. In our work, we demonstrate an organic, plastic transistor that mimics key functions of a biological synapse.”
Rivnay is an assistant professor of biomedical engineering at Northwestern’s McCormick School of Engineering. He co-led the study with Paddy Chan, an associate professor of mechanical engineering at the University of Hong Kong. Xudong Ji, a postdoctoral researcher in Rivnay’s group, is the paper’s first author.
Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to consume large amounts of energy. Inspired by the combined computing and storage process in the human brain, researchers, in recent years, have sought to develop computers that operate more like the human brain, with arrays of devices that function like a network of neurons.
“The way our current computer systems work is that memory and logic are physically separated,” Ji said. “You perform computation and send that information to a memory unit. Then every time you want to retrieve that information, you have to recall it. If we can bring those two separate functions together, we can save space and save on energy costs.”
Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function, but memristors suffer from energy-costly switching and less biocompatibility. These drawbacks led researchers to the synaptic transistor — especially the organic electrochemical synaptic transistor, which operates with low voltages, continuously tunable memory and high compatibility for biological applications. Still, challenges exist.
“Even high-performing organic electrochemical synaptic transistors require the write operation to be decoupled from the read operation,” Rivnay said. “So if you want to retain memory, you have to disconnect it from the write process, which can further complicate integration into circuits or systems.”
How the synaptic transistor works
To overcome these challenges, the Northwestern and University of Hong Kong team optimized a conductive, plastic material within the organic, electrochemical transistor that can trap ions. In the brain, a synapse is a structure through which a neuron can transmit signals to another neuron, using small molecules called neurotransmitters. In the synaptic transistor, ions behave similarly to neurotransmitters, sending signals between terminals to form an artificial synapse. By retaining stored data from trapped ions, the transistor remembers previous activities, developing long-term plasticity.
The researchers demonstrated their device’s synaptic behavior by connecting single synaptic transistors into a neuromorphic circuit to simulate associative learning. They integrated pressure and light sensors into the circuit and trained the circuit to associate the two unrelated physical inputs (pressure and light) with one another.
Perhaps the most famous example of associative learning is Pavlov’s dog, which naturally drooled when it encountered food. After conditioning the dog to associate a bell ring with food, the dog also began drooling when it heard the sound of a bell. For the neuromorphic circuit, the researchers activated a voltage by applying pressure with a finger press. To condition the circuit to associate light with pressure, the researchers first applied pulsed light from an LED lightbulb and then immediately applied pressure. In this scenario, the pressure is the food and the light is the bell. The device’s corresponding sensors detected both inputs.
After one training cycle, the circuit made an initial connection between light and pressure. After five training cycles, the circuit significantly associated light with pressure. Light, alone, was able to trigger a signal, or “conditioned response.”
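The conditioning sequence can be mimicked with a toy two-input unit: the pressure pathway starts strong (unconditioned), the light pathway starts weak, and each pairing strengthens the light weight until light alone crosses the firing threshold. All weights and the threshold below are made-up illustrative numbers, not measurements from the Northwestern circuit.

```python
# Toy associative-learning loop in the spirit of the circuit described
# above: pressure (the "food") always fires the output; pairing light
# with pressure strengthens the light pathway. Numbers are illustrative.

THRESHOLD = 1.0
w_pressure = 1.2   # unconditioned pathway: strong from the start
w_light = 0.2      # conditioned pathway: initially too weak

def fires(pressure, light):
    """Does the output unit fire for this input pair (0/1 each)?"""
    return w_pressure * pressure + w_light * light >= THRESHOLD

assert fires(1, 0) and not fires(0, 1)   # before training

for _ in range(5):            # five light-plus-pressure pairings
    if fires(1, 1):           # co-activation of input and output...
        w_light += 0.2        # ...strengthens the light synapse (Hebbian)

assert fires(0, 1)            # light alone now triggers the response
```

In the actual device the "weight" is the state retained by trapped ions in the synaptic transistor rather than a number in software, but the training logic is the same.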
Because the synaptic circuit is made of soft polymers, like a plastic, it can be readily fabricated on flexible sheets and easily integrated into soft, wearable electronics, smart robotics and implantable devices that directly interface with living tissue and even the brain [emphasis mine].
“While our application is a proof of concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation,” Rivnay said. “Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics.”
The word ‘memristor’ usually pops up when there’s research into artificial synapses but not in this new piece of research. I didn’t see any mention of the memristor in the paper’s references either but I did find James Gimzewski from the University of California at Los Angeles (UCLA) whose research into brainlike computing (neuromorphic computing) is running parallel but separately to the memristor research.
Dr. Thamarasee Jeewandara has written a March 25, 2021 article for phys.org about the latest neuromorphic computing research (Note: Links have been removed)
Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience and materials science in China and the US, presented a bioinspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team used an optoelectronic transistor made of a graphene/molybdenum disulphide (MoS2) heterostructure and an integrated triboelectric nanogenerator to compose the artificial synapse. They controlled the charge transfer/exchange in the heterostructure with triboelectric potential and readily modulated the optoelectronic synapse behaviors, including postsynaptic photocurrents, photosensitivity and photoconductivity. The mechano-photonic artificial synapse is a promising implementation to mimic the complex biological nervous system and promote the development of interactive artificial intelligence. The work is now published in Science Advances.
The human brain can integrate cognition, learning and memory tasks via auditory, visual, olfactory and somatosensory interactions. This process is difficult to mimic using conventional von Neumann architectures that require additional sophisticated functions. Brain-inspired neural networks are made of various synaptic devices that transmit and process information using synaptic weights. Emerging photonic synapses combine optical and electric neuromorphic modulation and computation to offer a favorable option with high bandwidth, fast speed and low cross-talk to significantly reduce power consumption. Biomechanical motions including touch, eye blinking and arm waving are other ubiquitous triggers or interactive signals for operating electronics during artificial synapse plasticization. In this work, Yu et al. presented a mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The device contained an optoelectronic transistor and an integrated triboelectric nanogenerator (TENG) in contact-separation mode. The mechano-optical artificial synapses have huge functional potential as interactive optoelectronic interfaces, synthetic retinas and intelligent robots. [emphasis mine]
As you can see Jeewandara has written quite a technical summary of the work. Here’s an image from the Science Advances paper,
I’m pretty sure that Professor Hongsik Jeong is the one on the right. He seems more relaxed, like he’s accustomed to posing for pictures highlighting his work.
Now on to the latest memristor news, which features the number 8.
For anyone unfamiliar with the term memristor, it’s a device (of sorts) which scientists, involved in neuromorphic computing (computers that operate like human brains), are researching as they attempt to replicate brainlike processes for computers.
An international team of researchers affiliated with UNIST [Ulsan National Institute of Science and Technology; South Korea] has unveiled a novel technology that could improve the learning ability of artificial neural networks (ANNs).
Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST, in collaboration with researchers from Tsinghua University in China, proposed a new learning method that improves the learning ability of ANN chips by exploiting their instability.
Artificial neural network chips are capable of mimicking the structural, functional and biological features of human neural networks, and thus have been considered the technology of the future. In this study, the research team demonstrated the effectiveness of the proposed learning method by building phase change memory (PCM) memristor arrays that operate like ANNs. This learning method is also advantageous in that its learning ability can be improved without additional power consumption, since PCM undergoes a spontaneous resistance increase due to the structural relaxation after amorphization.
ANNs, like human brains, use less energy even when performing computation and memory tasks simultaneously. However, an artificial neural network chip in which a large number of physical devices are integrated has the disadvantage of device errors. Existing artificial neural network learning methods assume a perfect, error-free chip, so the actual learning ability of the artificial neural network is poor.
The research team developed a memristor artificial neural network learning method based on phase-change memory, reasoning that the real human brain does not require near-perfect operation. This learning method incorporates the “resistance drift” (increased electrical resistance) of the phase-change material in the memory semiconductor into learning. During the learning process, since the information update pattern is recorded in the memristor, which serves as a synapse, in the form of increasing electrical resistance, the synapse additionally learns the association between its own change pattern and the data it is learning.
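In the spirit of that method, though emphatically not the team's actual PCM arrays or handwriting task, a toy version can be sketched as a perceptron whose weights relax slightly after every programming step, with training simply carried out through the drift rather than assuming ideal devices. Everything here (the dataset, drift factor and learning rate) is a hypothetical illustration.

```python
import random

# Toy "training through resistance drift": a perceptron whose weights
# relax slightly after each programming step. Dataset, drift factor and
# learning rate are hypothetical, not from the UNIST experiment.

rng = random.Random(1)
DRIFT = 0.995   # each programmed weight relaxes ~0.5% afterwards
LR = 0.1
w = [0.0, 0.0]
bias = 0.0

# Linearly separable 2D data: label 1 when x0 + x1 > 1.
pts = [(rng.random(), rng.random()) for _ in range(200)]
data = [((x0, x1), 1 if x0 + x1 > 1 else 0) for x0, x1 in pts]

def predict(x0, x1):
    return 1 if w[0] * x0 + w[1] * x1 + bias > 0 else 0

for _ in range(20):                       # training epochs
    for (x0, x1), y in data:
        err = y - predict(x0, x1)
        if err != 0:
            # Program the synapses, then let the conductances drift;
            # training proceeds with the drifted weights included.
            w = [DRIFT * (wi + LR * err * xi)
                 for wi, xi in zip(w, (x0, x1))]
            bias += LR * err

accuracy = sum(predict(x0, x1) == y for (x0, x1), y in data) / len(data)
assert accuracy > 0.8   # the boundary is learned despite the drift
```

The point of the sketch is the design choice: rather than demanding error-free synapses, the training loop treats the device's spontaneous relaxation as part of the update rule, which is roughly how the team reports turning PCM drift from a liability into a learning signal.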
The research team showed, through an experiment classifying handwritten digits 0-9, that the learning method improves learning ability by about 3%. In particular, accuracy on the digit 8, whose handwritten forms are difficult to classify, improved significantly. [emphasis mine] The learning ability improved thanks to the synaptic update pattern, which changes differently according to the difficulty of handwriting classification.
The researchers expect that their findings will promote learning algorithms built on the intrinsic properties of memristor devices, opening a new direction for the development of neuromorphic computing chips.