Researchers at Tohoku University and the University of Gothenburg have established a new spintronic technology for brain-inspired computing.
Their achievement was published in the journal Nature Materials (“Memristive control of mutual SHNO synchronization for neuromorphic computing”).
Sophisticated cognitive tasks, such as image and speech recognition, have seen recent breakthroughs thanks to deep learning. Even so, the human brain still executes these tasks without exerting much energy and with greater efficiency than any computer. The development of energy-efficient artificial neurons capable of emulating brain-inspired processes has therefore been a major research goal for decades.
Researchers demonstrated the first integration of a cognitive computing nano-element – the memristor – into another – a spintronic oscillator. Arrays of these memristor-controlled oscillators combine the non-volatile local storage of the memristor function with the microwave frequency computation of the nano-oscillator networks and can closely imitate the non-linear oscillatory neural networks of the human brain.
“So far, artificial neurons and synapses have been developed separately in many fields; this work marks an important milestone: two functional elements have been combined into one,” said professor Shunsuke Fukami, who led the project on the Tohoku University side. Dr. Mohammad Zahedinejad of the University of Gothenburg and first author of the study adds, “Using the memristor-controlled spintronic oscillator arrays, we could tune the synaptic interactions between adjacent neurons and program them into mutually different and partially synchronized states.”
To put into practice their discovery, the researchers examined the operation of a test device comprising one oscillator and one memristor. The constricted region of W/CoFeB stack served as an oscillator, i.e., the neuron, whereas the MgO/AlOx/SiNx stack acted as a memristor, i.e., the synapse.
The memristor's resistance changed hysteretically with the voltage applied to the top Ti/Cu electrode. In the high-resistance state, an applied voltage produced an electric field across the device, whereas in the low-resistance state it drove an electric current. Because the field and the current affected the oscillator differently, the device offered several ways of controlling the oscillation and synchronization properties.
Professor Johan Åkerman of the University of Gothenburg and leader of the study expressed his hopes for the future and the significance of the finding. “We are particularly interested in emerging quantum-inspired computing schemes, such as Ising Machines. The results also highlight the productive collaboration that we have established in neuromorphic spintronics between the University of Gothenburg and Tohoku University, something that is also part of the Sweden-Japan collaborative network MIRAI 2.0.”
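For readers who like a concrete picture, the role the memristor plays here can be sketched with a toy Kuramoto model of two coupled oscillators, where the programmable coupling strength stands in for the memristor-set synaptic weight. This is my own illustration, not code from the paper; the function and every parameter value are invented,

```python
import math

def simulate(theta, coupling, omega, steps=2000, dt=0.01):
    """Euler-integrate two Kuramoto oscillators; `coupling` plays the
    part of the memristor-programmed synaptic weight."""
    for _ in range(steps):
        d0 = omega[0] + coupling * math.sin(theta[1] - theta[0])
        d1 = omega[1] + coupling * math.sin(theta[0] - theta[1])
        theta = [theta[0] + d0 * dt, theta[1] + d1 * dt]
    return theta

omega = [1.0, 1.2]                      # slightly mismatched frequencies

# High-resistance memristor state -> weak coupling: phases drift apart.
weak = simulate([0.0, 0.0], coupling=0.05, omega=omega)
# Low-resistance state -> strong coupling: phases lock with a small offset.
strong = simulate([0.0, 0.0], coupling=1.0, omega=omega)

def phase_gap(th):
    return abs(math.sin(th[1] - th[0]))

print(phase_gap(weak), phase_gap(strong))
```

Reprogramming the memristor's resistance thus switches the pair between unsynchronized and mutually synchronized states, which is the knob the researchers are exploiting.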
This venture into brain-like (neuromorphic) computing comes from France according to an August 17, 2021 news item on Nanowerk (Note: A link has been removed),
Brain-inspired electronics are the subject of intense research. Scientists from CNRS (Centre national de la recherche scientifique; French National Centre for Scientific Research) and the Ecole Normale Supérieure – PSL have theorized how to develop artificial neurons using, as nerve cells, ions to carry the information.
Their work, published in Science (“Modeling of emergent memory and voltage spiking in ionic transport through angstrom-scale slits”), reports that devices made of a single layer of water transporting ions within graphene nanoslits have the same transmission capacity as a neuron.
An August 16, 2021 CNRS press release (also on EurekAlert but published August 6, 2021), which originated the news item, provides insight into the international interest in neuromorphic computing along with a few technical details about this latest research,
With an energy consumption equivalent to two bananas per day, the human brain can perform many complex tasks. Its high energy efficiency depends in particular on its base unit, the neuron, which has a membrane with nanometric pores called ion channels, which open and close according to the stimuli received. The resulting ion flows create an electric current responsible for the emission of action potentials, signals that allow neurons to communicate with each other.
Artificial intelligence can do all of these tasks but only at the cost of energy consumption tens of thousands of times that of the human brain. So the entire research challenge today is to design electronic systems that are as energy efficient as the human brain, for example, by using ions, not electrons, to carry the information. For this, nanofluidics, the study of how fluids behave in channels less than 100 nanometers wide, offers many perspectives.

In a new study, a team from the ENS Laboratoire de Physique (CNRS/ENS-PSL/Sorbonne Université/Université de Paris) shows how to construct a prototype of an artificial neuron formed of extremely thin graphene slits containing a single layer of water molecules [1]. The scientists have shown that, under the effect of an electric field, the ions from this layer of water assemble into elongated clusters and develop a property known as the memristor effect: these clusters retain some of the stimuli that have been received in the past.

To repeat the comparison with the brain, the graphene slits reproduce the ion channels, clusters and ion flows. And, using theoretical and digital tools, scientists have shown how to assemble these clusters to reproduce the physical mechanism of emission of action potentials, and thus the transmission of information.
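Here's a minimal sketch of the basic idea as I understand it: a neuron whose input pathway has memory, so that a history of stimulation changes how strongly new stimuli register. This is my own toy Python model, not the team's theory; all names and numbers are invented,

```python
def memristive_neuron(pulses, dt=0.1, tau_mem=5.0, threshold=1.0, boost=1.0):
    """Toy spiking unit: m stands in for the ionic clusters, which retain
    a trace of past stimuli; the membrane potential v integrates input
    scaled by that memory and emits a spike when it crosses threshold."""
    v, m, spikes = 0.0, 0.0, 0
    for p in pulses:
        m += (-m / tau_mem + p) * dt       # slow memory of past stimuli
        v += (-v + (1.0 + boost * m) * p) * dt
        if v >= threshold:                 # emit an action potential
            spikes += 1
            v = 0.0                        # reset after the spike
    return spikes

# A sparse pulse train leaves no lasting trace and never fires; a dense
# train builds up the memory state, which reinforces later pulses.
sparse = [1.0 if i % 40 == 0 else 0.0 for i in range(400)]
dense = [1.0 if i % 2 == 0 else 0.0 for i in range(400)]
print(memristive_neuron(sparse), memristive_neuron(dense))
```

The point of the sketch is only that memory plus integration is enough to get stimulus-history-dependent spiking, which is the neuron-like behavior the CNRS/ENS team derives from the ionic clusters.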
This theoretical work continues experimentally within the French team, in collaboration with scientists from the University of Manchester (UK). The goal now is to prove experimentally that such systems can implement simple learning algorithms that can serve as the basis for tomorrow’s electronic memories.
[1] Recently invented in Manchester by the group of André Geim (Nobel Prize in Physics 2010).
An April 30, 2021 news item on Nanowerk announced research in the field of neuromorphic (brainlike) computing from a joint team of researchers at Northwestern University (located in Chicago, Illinois, US) and the University of Hong Kong,
Researchers have developed a brain-like computing device that is capable of learning by association.
Similar to how famed physiologist Ivan Pavlov conditioned dogs to associate a bell with food, researchers at Northwestern University and the University of Hong Kong successfully conditioned their circuit to associate light with pressure.
The device’s secret lies within its novel organic, electrochemical “synaptic transistors,” which simultaneously process and store information just like the human brain. The researchers demonstrated that the transistor can mimic the short-term and long-term plasticity of synapses in the human brain, building on memories to learn over time.
With its brain-like ability, the novel transistor and circuit could potentially overcome the limitations of traditional computing, including their energy-sapping hardware and limited ability to perform multiple tasks at the same time. The brain-like device also has higher fault tolerance, continuing to operate smoothly even when some components fail.
“Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration,” said Northwestern’s Jonathan Rivnay, a senior author of the study. “This is thanks to the plasticity of the synapse, which is the basic building block of the brain’s computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner. In our work, we demonstrate an organic, plastic transistor that mimics key functions of a biological synapse.”
Rivnay is an assistant professor of biomedical engineering at Northwestern’s McCormick School of Engineering. He co-led the study with Paddy Chan, an associate professor of mechanical engineering at the University of Hong Kong. Xudong Ji, a postdoctoral researcher in Rivnay’s group, is the paper’s first author.
Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to consume large amounts of energy. Inspired by the combined computing and storage process in the human brain, researchers, in recent years, have sought to develop computers that operate more like the human brain, with arrays of devices that function like a network of neurons.
“The way our current computer systems work is that memory and logic are physically separated,” Ji said. “You perform computation and send that information to a memory unit. Then every time you want to retrieve that information, you have to recall it. If we can bring those two separate functions together, we can save space and save on energy costs.”
Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function, but memristors suffer from energy-costly switching and less biocompatibility. These drawbacks led researchers to the synaptic transistor — especially the organic electrochemical synaptic transistor, which operates with low voltages, continuously tunable memory and high compatibility for biological applications. Still, challenges exist.
“Even high-performing organic electrochemical synaptic transistors require the write operation to be decoupled from the read operation,” Rivnay said. “So if you want to retain memory, you have to disconnect it from the write process, which can further complicate integration into circuits or systems.”
How the synaptic transistor works
To overcome these challenges, the Northwestern and University of Hong Kong team optimized a conductive, plastic material within the organic, electrochemical transistor that can trap ions. In the brain, a synapse is a structure through which a neuron can transmit signals to another neuron, using small molecules called neurotransmitters. In the synaptic transistor, ions behave similarly to neurotransmitters, sending signals between terminals to form an artificial synapse. By retaining stored data from trapped ions, the transistor remembers previous activities, developing long-term plasticity.
The researchers demonstrated their device’s synaptic behavior by connecting single synaptic transistors into a neuromorphic circuit to simulate associative learning. They integrated pressure and light sensors into the circuit and trained the circuit to associate the two unrelated physical inputs (pressure and light) with one another.
Perhaps the most famous example of associative learning is Pavlov’s dog, which naturally drooled when it encountered food. After conditioning the dog to associate a bell ring with food, the dog also began drooling when it heard the sound of a bell. For the neuromorphic circuit, the researchers activated a voltage by applying pressure with a finger press. To condition the circuit to associate light with pressure, the researchers first applied pulsed light from an LED lightbulb and then immediately applied pressure. In this scenario, the pressure is the food and the light is the bell. The device’s corresponding sensors detected both inputs.
After one training cycle, the circuit made an initial connection between light and pressure. After five training cycles, the circuit significantly associated light with pressure. Light, alone, was able to trigger a signal, or “conditioned response.”
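The conditioning experiment maps nicely onto a classic Hebbian learning rule ("what fires together wires together"). Here's my own bare-bones Python sketch of that logic, not the team's circuit; the weights, learning rate and threshold are all made up,

```python
def train_and_test(pairings):
    """Pressure (the 'food') always drives the output; the light->output
    weight grows, Hebbian-style, whenever light and the output are
    co-active during a paired presentation."""
    w_light, w_pressure, lr, threshold = 0.0, 1.0, 0.2, 0.5
    for _ in range(pairings):
        light, pressure = 1.0, 1.0             # light and press together
        out = w_light * light + w_pressure * pressure
        if out >= threshold:                   # output fired: strengthen
            w_light += lr * light              # the co-active light input
    return w_light >= threshold                # does light alone trigger it?

print(train_and_test(0))   # before training: light alone does nothing
print(train_and_test(1))   # one pairing: a weak link, still subthreshold
print(train_and_test(5))   # five pairings: light alone triggers the output
```

Note how the sketch reproduces the qualitative result reported above: one pairing creates an initial (subthreshold) connection, and a handful of pairings is enough for light alone to fire the output.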
Because the synaptic circuit is made of soft polymers, like a plastic, it can be readily fabricated on flexible sheets and easily integrated into soft, wearable electronics, smart robotics and implantable devices that directly interface with living tissue and even the brain [emphasis mine].
“While our application is a proof of concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation,” Rivnay said. “Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics.”
Now on to the latest memristor news, which features the number 8.
For anyone unfamiliar with the term memristor, it’s a device (of sorts) which scientists, involved in neuromorphic computing (computers that operate like human brains), are researching as they attempt to replicate brainlike processes for computers.
An international team of researchers affiliated with UNIST has unveiled a novel technology that could improve the learning ability of artificial neural networks (ANNs).
Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST, in collaboration with researchers from Tsinghua University in China, proposed a new learning method that improves the learning ability of ANN chips by exploiting their instability.
Artificial neural network chips are capable of mimicking the structural, functional and biological features of human neural networks, and thus have been considered the technology of the future. In this study, the research team demonstrated the effectiveness of the proposed learning method by building phase change memory (PCM) memristor arrays that operate like ANNs. This learning method is also advantageous in that its learning ability can be improved without additional power consumption, since PCM undergoes a spontaneous resistance increase due to the structural relaxation after amorphization.
ANNs, like human brains, can perform computation and memory tasks simultaneously while using little energy. However, ANN chips, which integrate large numbers of physical devices, have the disadvantage that they inevitably contain errors. Existing ANN learning methods assume a perfect, error-free chip, so the learning ability of real chips has been poor.
The research team developed a memristor ANN learning method based on phase-change memory, reasoning that the real human brain does not require near-perfect operation. The method incorporates into learning the “resistance drift” (a gradual increase in electrical resistance) of the phase-change material in the memory semiconductor. Because information updates are recorded as increasing electrical resistance in the memristor, which serves as a synapse, the synapse additionally learns the association between its own pattern of change and the data being learned.
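Resistance drift in PCM is usually described by a power law, R(t) = R0 (t/t0)^ν. Below is my own minimal Python sketch of that textbook model, paired with a weight update that reads the drifted conductance instead of assuming an ideal chip. The exponent, resistances and learning rate are invented for illustration, and this is not the team's actual algorithm,

```python
def pcm_resistance(r0, t, nu=0.05, t0=1.0):
    """Textbook power-law model of PCM resistance drift after
    amorphization: R(t) = R0 * (t / t0) ** nu."""
    return r0 * (t / t0) ** nu

def drift_aware_update(r_now, target_g, lr=0.5):
    """Read the *actual* (drifted) conductance before computing the
    weight change, instead of assuming an ideal error-free device."""
    g_now = 1.0 / r_now
    return g_now + lr * (target_g - g_now)

# A synapse programmed to 10 kOhm drifts upward over time...
r_early = pcm_resistance(10_000, t=1.0)
r_late = pcm_resistance(10_000, t=1_000.0)
print(r_early, r_late)

# ...so a training step taken later sees, and corrects for, the drift.
print(drift_aware_update(r_late, target_g=2e-4))
```

Because the drift happens spontaneously, folding it into the update rule costs no extra power, which is the advantage the UNIST team highlights.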
In an experiment classifying handwritten digits 0-9, the research team showed that the developed learning method improves accuracy by about 3%. In particular, accuracy for the digit 8, which is especially difficult to classify in handwriting, improved significantly. [emphasis mine] The learning ability improved thanks to synaptic update patterns that change according to the difficulty of the classification.
The researchers expect their findings to promote learning algorithms that exploit the intrinsic properties of memristor devices, opening a new direction for the development of neuromorphic computing chips.
I’ve been meaning to get to this news item from late 2019 as it features work from a team that I’ve been following for a number of years now. First mentioned here in an October 17, 2011 posting, James Gimzewski has been working with researchers at the University of California at Los Angeles (UCLA) and researchers at Japan’s National Institute for Materials Science (NIMS) on neuromorphic computing.
This particular research had a protracted rollout with the paper being published in October 2019 and the last news item about it being published in mid-December 2019.
UCLA scientists James Gimzewski and Adam Stieg are part of an international research team that has taken a significant stride toward the goal of creating thinking machines.
Led by researchers at Japan’s National Institute for Materials Science, the team created an experimental device that exhibited characteristics analogous to certain behaviors of the brain — learning, memorization, forgetting, wakefulness and sleep. The paper, published in Scientific Reports (“Emergent dynamics of neuromorphic nanowire networks”), describes a network in a state of continuous flux.
“This is a system between order and chaos, on the edge of chaos,” said Gimzewski, a UCLA distinguished professor of chemistry and biochemistry, a member of the California NanoSystems Institute at UCLA and a co-author of the study. “The way that the device constantly evolves and shifts mimics the human brain. It can come up with different types of behavior patterns that don’t repeat themselves.”
The research is one early step along a path that could eventually lead to computers that physically and functionally resemble the brain — machines that may be capable of solving problems that contemporary computers struggle with, and that may require much less power than today’s computers do.
The device the researchers studied is made of a tangle of silver nanowires — with an average diameter of just 360 nanometers. (A nanometer is one-billionth of a meter.) The nanowires were coated in an insulating polymer about 1 nanometer thick. Overall, the device itself measured about 10 square millimeters — so small that it would take 25 of them to cover a dime.
Allowed to randomly self-assemble on a silicon wafer, the nanowires formed highly interconnected structures that are remarkably similar to those that form the neocortex, the part of the brain involved with higher functions such as language, perception and cognition.
One trait that differentiates the nanowire network from conventional electronic circuits is that electrons flowing through them cause the physical configuration of the network to change. In the study, electrical current caused silver atoms to migrate from within the polymer coating and form connections where two nanowires overlap. The system had about 10 million of these junctions, which are analogous to the synapses where brain cells connect and communicate.
The researchers attached two electrodes to the brain-like mesh to profile how the network performed. They observed “emergent behavior,” meaning that the network displayed characteristics as a whole that could not be attributed to the individual parts that make it up. This is another trait that makes the network resemble the brain and sets it apart from conventional computers.
After current flowed through the network, the connections between nanowires persisted for as much as one minute in some cases, which resembled the process of learning and memorization in the brain. Other times, the connections shut down abruptly after the charge ended, mimicking the brain’s process of forgetting.
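A crude way to picture a single junction's behavior is a conductance that grows while current flows and relaxes once it stops. The sketch below is my own toy Python model, not anything from the study; the growth and decay rates are invented,

```python
def junction_trace(stimulus, dt=1.0, grow=0.3, decay=0.05):
    """Toy silver-nanowire junction: conductance g strengthens while
    current flows ('learning') and relaxes back afterward ('forgetting')."""
    g, trace = 0.0, []
    for s in stimulus:
        if s:
            g += grow * (1.0 - g) * dt     # filament grows toward saturation
        else:
            g -= decay * g * dt            # filament slowly dissolves
        trace.append(g)
    return trace

# Ten steps of current, then ten quiet steps.
trace = junction_trace([1] * 10 + [0] * 10)
peak, final = trace[9], trace[-1]
print(peak, final)    # the connection forms, then partially fades
```

Whether the relaxed conductance lands near its peak (persistence, like memorization) or near zero (like forgetting) depends on the decay rate, echoing the two behaviors the researchers observed.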
In other experiments, the research team found that with less power flowing in, the device exhibited behavior that corresponds to what neuroscientists see when they use functional MRI scanning to take images of the brain of a sleeping person. With more power, the nanowire network’s behavior corresponded to that of the wakeful brain.
The paper is the latest in a series of publications examining nanowire networks as a brain-inspired system, an area of research that Gimzewski helped pioneer along with Stieg, a UCLA research scientist and an associate director of CNSI.
“Our approach may be useful for generating new types of hardware that are both energy-efficient and capable of processing complex datasets that challenge the limits of modern computers,” said Stieg, a co-author of the study.
The borderline-chaotic activity of the nanowire network resembles not only signaling within the brain but also other natural systems such as weather patterns. That could mean that, with further development, future versions of the device could help model such complex systems.
In other experiments, Gimzewski and Stieg already have coaxed a silver nanowire device to successfully predict statistical trends in Los Angeles traffic patterns based on previous years’ traffic data.
Because of their similarities to the inner workings of the brain, future devices based on nanowire technology could also demonstrate energy efficiency like the brain’s own processing. The human brain operates on power roughly equivalent to what’s used by a 20-watt incandescent bulb. By contrast, computer servers where work-intensive tasks take place — from training for machine learning to executing internet searches — can use the equivalent of many households’ worth of energy, with the attendant carbon footprint.
“In our studies, we have a broader mission than just reprogramming existing computers,” Gimzewski said. “Our vision is a system that will eventually be able to handle tasks that are closer to the way the human being operates.”
The study’s first author, Adrian Diaz-Alvarez, is from the International Center for Material Nanoarchitectonics at Japan’s National Institute for Materials Science. Co-authors include Tomonobu Nakayama and Rintaro Higuchi, also of NIMS; and Zdenka Kuncic at the University of Sydney in Australia.
An international joint research team led by NIMS succeeded in fabricating a neuromorphic network composed of numerous metallic nanowires. Using this network, the team was able to generate electrical characteristics similar to those associated with higher order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert and returning to calm. The team then clarified the mechanisms that induced these electrical characteristics.
The development of artificial intelligence (AI) techniques has been rapidly advancing in recent years and has begun impacting our lives in various ways. Although AI processes information in a manner similar to the human brain, the mechanisms by which human brains operate are still largely unknown. Fundamental brain components, such as neurons and the junctions between them (synapses), have been studied in detail. However, many questions concerning the brain as a collective whole need to be answered. For example, we still do not fully understand how the brain performs such functions as memorization, learning and forgetting, and how the brain becomes alert and returns to calm. In addition, live brains are difficult to manipulate in experimental research. For these reasons, the brain remains a “mysterious organ.” A different approach to brain research, in which materials and systems capable of performing brain-like functions are created and their mechanisms are investigated, may be effective in identifying new applications of brain-like information processing and advancing brain science.
The joint research team recently built a complex brain-like network by integrating numerous silver (Ag) nanowires coated with a polymer (PVP) insulating layer approximately 1 nanometer in thickness. A junction between two nanowires forms a variable resistive element (i.e., a synaptic element) that behaves like a neuronal synapse. This nanowire network, which contains a large number of intricately interacting synaptic elements, forms a “neuromorphic network”. When a voltage was applied to the neuromorphic network, it appeared to “struggle” to find optimal current pathways (i.e., the most electrically efficient pathways). The research team measured the processes of current pathway formation, retention and deactivation while electric current was flowing through the network and found that these processes always fluctuate as they progress, similar to the human brain’s memorization, learning, and forgetting processes. The observed temporal fluctuations also resemble the processes by which the brain becomes alert or returns to calm. Brain-like functions simulated by the neuromorphic network were found to occur as the huge number of synaptic elements in the network collectively work to optimize current transport, in other words, as a result of self-organized and emerging dynamic processes.
The research team is currently developing a brain-like memory device using the neuromorphic network material. The team intends to design the memory device to operate using fundamentally different principles than those used in current computers. For example, while computers are currently designed to spend as much time and electricity as necessary in pursuit of absolutely optimum solutions, the new memory device is intended to make a quick decision within particular limits even though the solution generated may not be absolutely optimum. The team also hopes that this research will facilitate understanding of the brain’s information processing mechanisms.
This project was carried out by an international joint research team led by Tomonobu Nakayama (Deputy Director, International Center for Materials Nanoarchitectonics (WPI-MANA), NIMS), Adrian Diaz Alvarez (Postdoctoral Researcher, WPI-MANA, NIMS), Zdenka Kuncic (Professor, School of Physics, University of Sydney, Australia) and James K. Gimzewski (Professor, California NanoSystems Institute, University of California Los Angeles, USA).
Here at last is a link to and a citation for the paper,
Emergent dynamics of neuromorphic nanowire networks by Adrian Diaz-Alvarez, Rintaro Higuchi, Paula Sanz-Leon, Ido Marcus, Yoshitaka Shingaya, Adam Z. Stieg, James K. Gimzewski, Zdenka Kuncic & Tomonobu Nakayama. Scientific Reports volume 9, Article number: 14920 (2019) DOI: https://doi.org/10.1038/s41598-019-51330-6 Published: 17 October 2019
The last time I wrote about memcapacitors (June 30, 2014 posting: Memristors, memcapacitors, and meminductors for faster computers), the ideas were largely theoretical; I believe this work is the first research I’ve seen on the topic. From an October 17, 2019 news item on ScienceDaily,
Researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), the University of Tennessee and Texas A&M University demonstrated bio-inspired devices that accelerate routes to neuromorphic, or brain-like, computing.
Results published in Nature Communications report the first example of a lipid-based “memcapacitor,” a charge storage component with memory that processes information much like synapses do in the brain. Their discovery could support the emergence of computing networks modeled on biology for a sensory approach to machine learning.
“Our goal is to develop materials and computing elements that work like biological synapses and neurons—with vast interconnectivity and flexibility—to enable autonomous systems that operate differently than current computing devices and offer new functionality and learning capabilities,” said Joseph Najem, a recent postdoctoral researcher at ORNL’s Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, and current assistant professor of mechanical engineering at Penn State.
The novel approach uses soft materials to mimic biomembranes and simulate the way nerve cells communicate with one another.
The team designed an artificial cell membrane, formed at the interface of two lipid-coated water droplets in oil, to explore the material’s dynamic, electrophysiological properties. At applied voltages, charges build up on both sides of the membrane as stored energy, analogous to the way capacitors work in traditional electric circuits.
But unlike regular capacitors, the memcapacitor can “remember” a previously applied voltage and—literally—shape how information is processed. The synthetic membranes change surface area and thickness depending on electrical activity. These shapeshifting membranes could be tuned as adaptive filters for specific biophysical and biochemical signals.
“The novel functionality opens avenues for nondigital signal processing and machine learning modeled on nature,” said ORNL’s Pat Collier, a CNMS staff research scientist.
A distinct feature of all digital computers is the separation of processing and memory. Information is transferred back and forth from the hard drive and the central processor, creating an inherent bottleneck in the architecture no matter how small or fast the hardware can be.
Neuromorphic computing, modeled on the nervous system, employs architectures that are fundamentally different in that memory and signal processing are co-located in memory elements—memristors, memcapacitors and meminductors.
These “memelements” make up the synaptic hardware of systems that mimic natural information processing, learning and memory.
Systems designed with memelements offer advantages in scalability and low power consumption, but the real goal is to carve out an alternative path to artificial intelligence, said Collier.
Tapping into biology could enable new computing possibilities, especially in the area of “edge computing,” such as wearable and embedded technologies that are not connected to a cloud but instead make on-the-fly decisions based on sensory input and past experience.
Biological sensing has evolved over billions of years into a highly sensitive system with receptors in cell membranes that are able to pick out a single molecule of a specific odor or taste. “This is not something we can match digitally,” Collier said.
Digital computation is built around digital information, the binary language of ones and zeros coursing through electronic circuits. It can emulate the human brain, but its solid-state components do not compute sensory data the way a brain does.
“The brain computes sensory information pushed through synapses in a neural network that is reconfigurable and shaped by learning,” said Collier. “Incorporating biology—using biomembranes that sense bioelectrochemical information—is key to developing the functionality of neuromorphic computing.”
While numerous solid-state versions of memelements have been demonstrated, the team’s biomimetic elements represent new opportunities for potential “spiking” neural networks that can compute natural data in natural ways.
Spiking neural networks are intended to simulate the way neurons spike with electrical potential and, if the signal is strong enough, pass it on to their neighbors through synapses, carving out learning pathways that are pruned over time for efficiency.
A bio-inspired version with analog data processing is a distant aim. Current early-stage research focuses on developing the components of bio-circuitry.
“We started with the basics, a memristor that can weigh information via conductance to determine if a spike is strong enough to be broadcast through a network of synapses connecting neurons,” said Collier. “Our memcapacitor goes further in that it can actually store energy as an electric charge in the membrane, enabling the complex ‘integrate and fire’ activity of neurons needed to achieve dense networks capable of brain-like computation.”
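As I read it, the distinctive ingredient is that the capacitance itself carries memory. Here's my own toy Python illustration, not the team's model; the history variable and all constants are invented. The same probe charge produces a different voltage depending on what the membrane has experienced,

```python
def membrane_voltage(history, probe_charge=1.0, c0=1.0, alpha=0.3, tau=20.0):
    """Toy membrane memcapacitor: a slow history variable h (standing in
    for the membrane's remembered geometry) sets the capacitance, so
    V = Q / C depends on past activity, not just the present charge."""
    h = 0.0
    for p in history:
        h += p - h / tau                   # accumulate drive, slowly relax
    c = c0 * (1.0 + alpha * h)             # history-dependent capacitance
    return probe_charge / c

quiet = membrane_voltage([0.0] * 20)       # membrane with no prior activity
driven = membrane_voltage([1.0] * 20)      # membrane after heavy driving
print(quiet, driven)                       # same charge, different voltages
```

That history-dependent capacitance is what lets a single element both store energy and shape subsequent signal processing, the combination Collier describes.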
The team’s next steps are to explore new biomaterials and study simple networks to achieve more complex brain-like functionalities with memelements.
Here’s a link to and a citation for the paper,
Dynamical nonlinear memory capacitance in biomimetic membranes by Joseph S. Najem, Md Sakib Hasan, R. Stanley Williams, Ryan J. Weiss, Garrett S. Rose, Graham J. Taylor, Stephen A. Sarles & C. Patrick Collier. Nature Communications volume 10, Article number: 3239 (2019) DOI: https://doi.org/10.1038/s41467-019-11223-8 Published July 19, 2019
This paper is open access.
One final comment: you might recognize one of the authors (R. Stanley Williams), who in 2008 helped launch ‘memristor’ research.
It seems to me it’s been quite a while since I’ve stumbled across a memristor story from the University of Michigan, but it was worth waiting for. (Much of the research around memristors has to do with their potential application in neuromorphic (brainlike) computers.) From a December 17, 2018 news item on ScienceDaily,
A new electronic device developed at the University of Michigan can directly model the behaviors of a synapse, which is a connection between two neurons.
For the first time, the way that neurons share or compete for resources can be explored in hardware without the need for complicated circuits.
“Neuroscientists have argued that competition and cooperation behaviors among synapses are very important. Our new memristive devices allow us to implement a faithful model of these behaviors in a solid-state system,” said Wei Lu, U-M professor of electrical and computer engineering and senior author of the study in Nature Materials.
Memristors are electrical resistors with memory–advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. They could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning.
The memristor is a good model for a synapse. It mimics the way that the connections between neurons strengthen or weaken when signals pass through them. But the changes in conductance typically come from changes in the shape of the channels of conductive material within the memristor. These channels–and the memristor’s ability to conduct electricity–could not be precisely controlled in previous devices.
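The "resistance set by voltage history" behaviour described above can be sketched with a toy state-variable model in the spirit of the linear ionic-drift picture. The constants and the drift rule here are illustrative assumptions, not values from the U-M devices.

```python
# Toy memristor model: an internal state w (0..1) drifts with the applied
# voltage history, and the resistance interpolates between a conducting
# state R_ON and an insulating state R_OFF. Constants are illustrative.

R_ON, R_OFF = 100.0, 16000.0   # fully-conducting / fully-insulating (ohms)
MU = 1e-2                      # mobility-like drift constant (arbitrary units)

def apply_voltage(w, v, dt=1.0):
    """Drift the internal state w under voltage v for time dt; clamp to [0, 1]."""
    w = w + MU * v * dt
    return min(max(w, 0.0), 1.0)

def resistance(w):
    """Resistance interpolates between R_ON and R_OFF with state w."""
    return w * R_ON + (1.0 - w) * R_OFF

w = 0.0
for _ in range(50):            # positive pulses gradually raise conductance
    w = apply_voltage(w, 1.0)
print(resistance(w))           # well below R_OFF: the history is remembered
```

Because the state only moves a little per pulse, the conductance change is gradual and many intermediate levels exist – the property the U-M team improved on by sliding lithium ions with an electric field.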
Now, the U-M team has made a memristor in which they have better command of the conducting pathways. They developed a new material out of the semiconductor molybdenum disulfide – a “two-dimensional” material that can be peeled into layers just a few atoms thick. Lu’s team injected lithium ions into the gaps between molybdenum disulfide layers. They found that if there are enough lithium ions present, the molybdenum disulfide transforms its lattice structure, enabling electrons to run through the film easily as if it were a metal. But in areas with too few lithium ions, the molybdenum disulfide restores its original lattice structure and becomes a semiconductor, and electrical signals have a hard time getting through.
The lithium ions are easy to rearrange within the layer by sliding them with an electric field. This changes the size of the regions that conduct electricity little by little and thereby controls the device’s conductance.
“Because we change the ‘bulk’ properties of the film, the conductance change is much more gradual and much more controllable,” Lu said.
In addition to making the devices behave better, the layered structure enabled Lu’s team to link multiple memristors together through shared lithium ions–creating a kind of connection that is also found in brains. A single neuron’s dendrite, or its signal-receiving end, may have several synapses connecting it to the signaling arms of other neurons. Lu compares the availability of lithium ions to that of a protein that enables synapses to grow.
If the growth of one synapse releases these proteins, called plasticity-related proteins, other synapses nearby can also grow–this is cooperation. Neuroscientists have argued that cooperation between synapses helps to rapidly form vivid memories that last for decades and create associative memories, like a scent that reminds you of your grandmother’s house, for example. If the protein is scarce, one synapse will grow at the expense of the other–and this competition pares down our brains’ connections and keeps them from exploding with signals. Lu’s team was able to show these phenomena directly using their memristor devices. In the competition scenario, lithium ions were drained away from one side of the device. The side with the lithium ions increased its conductance, emulating the growth, and the conductance of the device with little lithium was stunted.
In a cooperation scenario, they made a memristor network with four devices that can exchange lithium ions, and then siphoned some lithium ions from one device out to the others. In this case, not only could the lithium donor increase its conductance–the other three devices could too, although their signals weren’t as strong.
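The competition and cooperation scenarios both come down to devices drawing on a shared, finite supply of lithium ions (the analogue of plasticity-related proteins). A toy sketch of that resource picture, with purely illustrative numbers:

```python
# Toy model of synaptic competition vs. cooperation through a shared,
# finite ion pool. Each device's "conductance" grows by the ions it draws.
# Numbers are illustrative, not measurements from the U-M devices.

def grow(devices, pool, idx, demand):
    """Device idx draws up to `demand` ions from the shared pool; return the
    remaining pool."""
    drawn = min(demand, pool)
    devices[idx] += drawn
    return pool - drawn

# Cooperation: the pool is rich, so every device that asks can grow.
devices, pool = [0.0] * 4, 10.0
for i in range(4):
    pool = grow(devices, pool, i, 2.0)
# devices == [2.0, 2.0, 2.0, 2.0] - all four strengthen together

# Competition: the pool is scarce; early growth starves the later devices.
devices, pool = [0.0] * 4, 2.5
for i in range(4):
    pool = grow(devices, pool, i, 2.0)
# devices == [2.0, 0.5, 0.0, 0.0] - one synapse grows at the others' expense
```

When the pool is plentiful all devices gain conductance (cooperation); when it is scarce, the first device's growth stunts the rest (competition), mirroring the two experiments described above.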
Lu’s team is currently building networks of memristors like these to explore their potential for neuromorphic computing, which mimics the circuitry of the brain.
This research comes from Purdue University (US), and the December announcement seemed particularly timely since battery-powered gifts are popular at Christmas. That said, it could be many years before this work is commercialized, so you may want to tuck it away for future reference. Also, readers familiar with memristors might see a resemblance to the memory cells mentioned in the following excerpt. From a December 13, 2018 news item on Nanowerk,
The more objects we make “smart,” from watches to entire buildings, the greater the need for these devices to store and retrieve massive amounts of data quickly without consuming too much power.
Millions of new memory cells could be part of a computer chip and provide that speed and energy savings, thanks to the discovery of a previously unobserved functionality in a material called molybdenum ditelluride.
The two-dimensional material stacks into multiple layers to build a memory cell. Researchers at Purdue University engineered this device in collaboration with the National Institute of Standards and Technology (NIST) and Theiss Research Inc.
Chip-maker companies have long called for better memory technologies to enable a growing network of smart devices. One of these next-generation possibilities is resistive random access memory, or RRAM for short.
In RRAM, an electrical current is typically driven through a memory cell made up of stacked materials, creating a change in resistance that records data as 0s and 1s in memory. The sequence of 0s and 1s among memory cells identifies pieces of information that a computer reads to perform a function and then store into memory again.
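The write-then-read cycle described above can be sketched in a few lines. This is a conceptual model only: the resistance values and read threshold are invented, and a real cross-point array involves sense amplifiers and sneak-path handling not shown here.

```python
# How an RRAM cell records bits: programming switches each cell between a
# high-resistance state (storing 0) and a low-resistance state (storing 1);
# a read compares the measured resistance against a threshold.
# Resistance values and the threshold are illustrative.

R_LOW, R_HIGH = 1e3, 1e6       # low-/high-resistance states (ohms)
READ_THRESHOLD = 1e4

def write_bits(bits):
    """'Program' a row of cells: 1 -> low resistance, 0 -> high resistance."""
    return [R_LOW if b else R_HIGH for b in bits]

def read_bits(cells):
    """Read back: resistance below the threshold counts as a 1."""
    return [1 if r < READ_THRESHOLD else 0 for r in cells]

print(read_bits(write_bits([1, 0, 1, 1, 0])))   # the stored pattern returns
```

The endurance problem discussed next is simply that each `write_bits` call physically stresses the material, and current materials wear out well before the trillions of cycles a working memory needs.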
A material would need to be robust enough for storing and retrieving data at least trillions of times, but materials currently used have been too unreliable. So RRAM hasn’t been available yet for widescale use on computer chips.
Molybdenum ditelluride could potentially last through all those cycles. “We haven’t yet explored system fatigue using this new material, but our hope is that it is both faster and more reliable than other approaches due to the unique switching mechanism we’ve observed,” said Joerg Appenzeller, Purdue University’s Barry M. and Patricia L. Epstein Professor of Electrical and Computer Engineering and the scientific director of nanoelectronics at the Birck Nanotechnology Center.
Molybdenum ditelluride allows a system to switch more quickly between 0 and 1, potentially increasing the rate of storing and retrieving information. This is because when an electric field is applied to the cell, atoms are displaced by a tiny distance, resulting in a state of high resistance, noted as 0, or a state of low resistance, noted as 1, which can occur much faster than switching in conventional RRAM devices.
“Because less power is needed for these resistive states to change, a battery could last longer,” Appenzeller said.
In a computer chip, each memory cell would be located at the intersection of wires, forming a memory array called cross-point RRAM.
Appenzeller’s lab wants to explore building a stacked memory cell that also incorporates the other main components of a computer chip: “logic,” which processes data, and “interconnects,” wires that transfer electrical signals, by utilizing a library of novel electronic materials fabricated at NIST.
“Logic and interconnects drain battery too, so the advantage of an entirely two-dimensional architecture is more functionality within a small space and better communication between memory and logic,” Appenzeller said.
The work received financial support from the Semiconductor Research Corporation through the NEW LIMITS Center (led by Purdue University), NIST, the U.S. Department of Commerce and the Material Genome Initiative.
Caption: Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes, which increases the conductivity. Credit: Forschungszentrum Jülich
Not a popsicle but a representation of a device (memristor) scientists claim mimics a biological nerve cell according to a December 5, 2018 news item on ScienceDaily,
Scientists from Jülich [Germany] together with colleagues from Aachen [Germany] and Turin [Italy] have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired “neuromorphic” processors, able to take over the diverse functions of biological synapses and neurons.
Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers on the other hand rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.
Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.
“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are however suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.
For years memristive cells have been ascribed the best chances of being capable of taking over the function of neurons and synapses in bioinspired computers. They alter their electrical resistance depending on the intensity and direction of the electric current flowing through them. In contrast to conventional transistors, their last resistance value remains intact even when the electric current is switched off. Memristors are thus fundamentally capable of learning.
In order to create these properties, scientists at Forschungszentrum Jülich and RWTH Aachen University used a single zinc oxide nanowire, produced by their colleagues from the polytechnic university in Turin. Measuring approximately one ten-thousandth of a millimeter in size, this type of nanowire is over a thousand times thinner than a human hair. The resulting memristive component not only takes up a tiny amount of space, but also is able to switch much faster than flash memory.
Nanowires offer promising novel physical properties compared to other solids and are used among other things in the development of new types of solar cells, sensors, batteries and computer chips. Their manufacture is comparatively simple. Nanowires result from the evaporation deposition of specified materials onto a suitable substrate, where they practically grow of their own accord.
In order to create a functioning cell, both ends of the nanowire must be attached to suitable metals, in this case platinum and silver. The metals function as electrodes, and in addition, release ions triggered by an appropriate electric current. The metal ions are able to spread over the surface of the wire and build a bridge to alter its conductivity.
Components made from single nanowires are, however, still too isolated to be of practical use in chips. Consequently, the next step being planned by the Jülich and Turin researchers is to produce and study a memristive element, composed of a larger, relatively easy to generate group of several hundred nanowires offering more exciting functionalities.
The Italians have also written about the work in a December 4, 2018 news item for the Politecnico di Torino’s inhouse magazine, PoliFlash. I like the image they’ve used better as it offers a bit more detail and looks less like a popsicle. First, the image,
Courtesy: Politecnico di Torino
Now, the news item, which includes some historical information about the memristor (Note: There is some repetition and links have been removed),
Emulating and understanding the human brain is one of the most important challenges for modern technology: on the one hand, the ability to artificially reproduce the processing of brain signals is one of the cornerstones for the development of artificial intelligence, while on the other the understanding of the cognitive processes at the base of the human mind is still far away.
And the research published in the prestigious journal Nature Communications by Gianluca Milano and Carlo Ricciardi, PhD student and professor, respectively, of the Applied Science and Technology Department of the Politecnico di Torino, represents a step forward in these directions. In fact, the study entitled “Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities” shows how it is possible to artificially emulate the activity of synapses, i.e. the connections between neurons that regulate the learning processes in our brain, in a single “nanowire” with a diameter thousands of times smaller than that of a hair.
It is a crystalline nanowire that takes the “memristor”, the electronic device able to artificially reproduce the functions of biological synapses, to a more performing level. Thanks to the use of nanotechnologies, which allow the manipulation of matter at the atomic level, it was for the first time possible to combine into one single device the synaptic functions that were individually emulated through specific devices. For this reason, the nanowire allows an extreme miniaturisation of the “memristor”, significantly reducing the complexity and energy consumption of the electronic circuits necessary for the implementation of learning algorithms.
Starting from the theorisation of the “memristor” in 1971 by Prof. Leon Chua – now visiting professor at the Politecnico di Torino, who was conferred an honorary degree by the University in 2015 – this new technology will not only allow smaller and more performing devices to be created for the implementation of increasingly “intelligent” computers, but is also a significant step forward for the emulation and understanding of the functioning of the brain.
“The nanowire memristor – said Carlo Ricciardi – represents a model system for the study of physical and electrochemical phenomena that govern biological synapses at the nanoscale. The work is the result of the collaboration between our research team and the RWTH University of Aachen in Germany, supported by INRiM, the National Institute of Metrological Research, and IIT, the Italian Institute of Technology.”
This memristor story comes from South Korea as we progress on the way to neuromorphic computing (brainlike computing). A Sept. 7, 2018 news item on ScienceDaily makes the announcement,
A research team led by Director Myoung-Jae Lee from the Intelligent Devices and Systems Research Group at DGIST (Daegu Gyeongbuk Institute of Science and Technology) has succeeded in developing an artificial synaptic device that mimics the function of the nerve cells (neurons) and synapses that are responsible for memory in human brains.
Synapses are where axons and dendrites meet so that neurons in the human brain can send and receive nerve signals; there are known to be hundreds of trillions of synapses in the human brain.
This chemical synapse information transfer system, which transfers information within the brain, can handle high-level parallel arithmetic with very little energy, so research on artificial synaptic devices, which mimic the biological function of a synapse, is under way worldwide.
Dr. Lee’s research team, through joint research with teams led by Professor Gyeong-Su Park from Seoul National University; Professor Sung Kyu Park from Chung-Ang University; and Professor Hyunsang Hwang from Pohang University of Science and Technology (POSTECH), developed a high-reliability artificial synaptic device with multiple values by structuring tantalum oxide — a trans-metallic material — into two layers of Ta2O5-x and TaO2-x and by controlling its surface.
The artificial synaptic device developed by the research team is an electrical synaptic device that simulates the function of synapses in the brain as the resistance of the tantalum oxide layer gradually increases or decreases depending on the strength of the electric signals. It has succeeded in overcoming durability limitations of current devices by allowing current control only on one layer of Ta2O5-x.
In addition, the research team successfully implemented an experiment realizing synaptic plasticity – the process of creating, storing, and deleting memories through the long-term strengthening and long-term suppression of memory – by adjusting the strength of the synapse connection between neurons.
The non-volatile multiple-value data storage method applied by the research team has the technological advantage of having a small-area artificial synaptic device system, reducing circuit connection complexity, and reducing power consumption to less than one-thousandth of that of data storage methods based on digital signals using 0 and 1, such as volatile CMOS (Complementary Metal Oxide Semiconductor).
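The multiple-value (analogue) storage idea, together with the potentiation/depression behaviour described above, can be sketched as a bounded conductance that identical programming pulses nudge up or down. The step size and bounds here are illustrative assumptions, not measured values from the DGIST devices.

```python
# Sketch of a multi-level analogue synaptic weight: potentiating pulses
# nudge the conductance up and depressing pulses nudge it down, giving many
# stable intermediate levels instead of just 0 and 1. Step size and bounds
# are illustrative, not measured device values.

G_MIN, G_MAX, STEP = 0.0, 1.0, 0.05

def pulse(g, potentiate=True):
    """Apply one programming pulse; conductance stays within [G_MIN, G_MAX]."""
    g = g + STEP if potentiate else g - STEP
    return min(max(g, G_MIN), G_MAX)

g = 0.0
for _ in range(10):                 # long-term potentiation: 10 set pulses
    g = pulse(g, potentiate=True)
for _ in range(4):                  # long-term depression: 4 reset pulses
    g = pulse(g, potentiate=False)
print(round(g, 2))                  # an intermediate analogue level (~0.3)
```

Because one cell holds many levels rather than a single bit, fewer cells and fewer digital switching events are needed per stored weight, which is where the area and power savings over binary CMOS storage come from.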
The high-reliability artificial synaptic device developed by the research team can be used in ultra-low-power devices or circuits for processing massive amounts of big data due to its capability of low-power parallel arithmetic. It is expected to be applied to next-generation intelligent semiconductor device technologies such as development of artificial intelligence (AI) including machine learning and deep learning and brain-mimicking semiconductors.
Dr. Lee said, “This research secured the reliability of existing artificial synaptic devices and improved the areas pointed out as disadvantages. We expect to contribute to the development of AI based on the neuromorphic system that mimics the human brain by creating a circuit that imitates the function of neurons.”
You can find other memristor and neuromorphic computing stories here by using the search terms I’ve highlighted. My latest (more or less) is an April 19, 2018 posting titled, New path to viable memristor/neuristor?
Finally, here’s an image from the Korean researchers that accompanied their work,