Tag Archives: brainlike computing

Energy-efficient artificial synapse

This is the second neuromorphic computing chip story from MIT this summer in what has turned out to be a bumper crop of research announcements in this field. The first MIT synapse story was featured in a June 16, 2020 posting. Now, there’s a second and completely different team announcing results for their artificial brain synapse work in a June 19, 2020 news item on Nanowerk (Note: A link has been removed),

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.

Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT [Massachusetts Institute of Technology] team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.

The findings are described in the journal Nature Communications (“Protonic solid-state electrochemical synapse for physical neural networks”), in a paper by MIT professors Bilge Yildiz, Ju Li, and Jesús del Alamo, and nine others at MIT and Brookhaven National Laboratory. The first author of the paper is Xiahui Yao, a former MIT postdoc now working on energy storage at GRU Energy Lab.

That description of the work is one pretty much every team working on developing memristive (neuromorphic) chips could use.

On other fronts, the team has produced a very attractive illustration accompanying this research (aside: Is it my imagination or has there been a serious investment in the colour pink and other pastels for science illustrations?),

A new system developed at MIT and Brookhaven National Lab could provide a faster, more reliable and much more energy efficient approach to physical neural networks, by using analog ionic-electronic devices to mimic synapses. Courtesy of the researchers

A June 19, 2020 MIT news release, which originated the news item, provides more insight into this specific piece of research (hint: it’s about energy use and repeatability),

Neural networks attempt to simulate the way learning takes place in the brain, which is based on the gradual strengthening or weakening of the connections between neurons, known as synapses. The core component of this physical neural network is the resistive switch, whose electronic conductance can be controlled electrically. This control, or modulation, emulates the strengthening and weakening of synapses in the brain.

In neural networks using conventional silicon microchip technology, the simulation of these synapses is a very energy-intensive process. To improve efficiency and enable more ambitious neural network goals, researchers in recent years have been exploring a number of physical devices that could more directly mimic the way synapses gradually strengthen and weaken during learning and forgetting.

Most candidate analog resistive devices so far for such simulated synapses have either been very inefficient, in terms of energy use, or performed inconsistently from one device to another or one cycle to the next. The new system, the researchers say, overcomes both of these challenges. “We’re addressing not only the energy challenge, but also the repeatability-related challenge that is pervasive in some of the existing concepts out there,” says Yildiz, who is a professor of nuclear science and engineering and of materials science and engineering.

“I think the bottleneck today for building [neural network] applications is energy efficiency. It just takes too much energy to train these systems, particularly for applications on the edge, like autonomous cars,” says del Alamo, who is the Donner Professor in the Department of Electrical Engineering and Computer Science. Many such demanding applications are simply not feasible with today’s technology, he adds.

The resistive switch in this work is an electrochemical device, which is made of tungsten trioxide (WO3) and works in a way similar to the charging and discharging of batteries. Ions, in this case protons, can migrate into or out of the crystalline lattice of the material,  explains Yildiz, depending on the polarity and strength of an applied voltage. These changes remain in place until altered by a reverse applied voltage — just as the strengthening or weakening of synapses does.

“The mechanism is similar to the doping of semiconductors,” says Li, who is also a professor of nuclear science and engineering and of materials science and engineering. In that process, the conductivity of silicon can be changed by many orders of magnitude by introducing foreign ions into the silicon lattice. “Traditionally those ions were implanted at the factory,” he says, but with the new device, the ions are pumped in and out of the lattice in a dynamic, ongoing process. The researchers can control how much of the “dopant” ions go in or out by controlling the voltage, and “we’ve demonstrated a very good repeatability and energy efficiency,” he says.

Yildiz adds that this process is “very similar to how the synapses of the biological brain work. There, we’re not working with protons, but with other ions such as calcium, potassium, magnesium, etc., and by moving those ions you actually change the resistance of the synapses, and that is an element of learning.” The process taking place in the tungsten trioxide in their device is similar to the resistance modulation taking place in biological synapses, she says.

“What we have demonstrated here,” Yildiz says, “even though it’s not an optimized device, gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain.” Trying to accomplish the same task with conventional CMOS type semiconductors would take a million times more energy, she says.

The materials used in the demonstration of the new device were chosen for their compatibility with present semiconductor manufacturing systems, according to Li. But they include a polymer material that limits the device’s tolerance for heat, so the team is still searching for other variations of the device’s proton-conducting membrane and better ways of encapsulating its hydrogen source for long-term operations.

“There’s a lot of fundamental research to be done at the materials level for this device,” Yildiz says. Ongoing research will include “work on how to integrate these devices with existing CMOS transistors” adds del Alamo. “All that takes time,” he says, “and it presents tremendous opportunities for innovation, great opportunities for our students to launch their careers.”
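For anyone who finds it easier to think in code, here is a minimal sketch of the kind of voltage-programmed analog synapse the release describes: a write pulse of one polarity nudges the conductance up, the opposite polarity nudges it down, and the value persists between pulses. The bounds, step size, and function names are my own illustrative placeholders, not anything from the paper.

```python
# Toy model of a voltage-programmed analog synapse (illustrative only).
# A write pulse of one polarity nudges the stored conductance up, the
# opposite polarity nudges it down, and the value persists between pulses.

G_MIN, G_MAX = 0.1, 1.0   # arbitrary conductance bounds (a.u.)
STEP = 0.05               # conductance change per write pulse (a.u.)

def apply_pulse(g, polarity):
    """Potentiate (+1) or depress (-1) the stored conductance."""
    return min(G_MAX, max(G_MIN, g + polarity * STEP))

g = 0.5                    # initial retained state
for _ in range(5):         # five potentiating pulses strengthen the "synapse"
    g = apply_pulse(g, +1)
for _ in range(3):         # three depressing pulses weaken it again
    g = apply_pulse(g, -1)
print(f"stored conductance: {g:.2f}")  # persists with no voltage applied
```

In a physical neural network, an array of conductances like this one would hold the synaptic weights directly, which is the point of the analog approach.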

Coincidentally or not, a University of Massachusetts at Amherst team announced memristor voltage use comparable to human brain voltage use (see my June 15, 2020 posting), plus there’s a team at Stanford University touting their low-energy biohybrid synapse in a XXX posting. (June 2020 has been a particularly busy month here for ‘artificial brain’ or ‘memristor’ stories.)

Getting back to this latest MIT research, here’s a link to and a citation for the paper,

Protonic solid-state electrochemical synapse for physical neural networks by Xiahui Yao, Konstantin Klyukin, Wenjie Lu, Murat Onen, Seungchan Ryu, Dongha Kim, Nicolas Emond, Iradwikanari Waluyo, Adrian Hunt, Jesús A. del Alamo, Ju Li & Bilge Yildiz. Nature Communications volume 11, Article number: 3134 (2020) DOI: https://doi.org/10.1038/s41467-020-16866-6 Published: 19 June 2020

This paper is open access.

A biohybrid artificial synapse that can communicate with living cells

As I noted in my June 16, 2020 posting, we may have more than one kind of artificial brain in our future. This latest work features a biohybrid. From a June 15, 2020 news item on ScienceDaily,

In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].

Now, in a paper published June 15 [2020] in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.

A June 15, 2020 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into this recent work,

How neurons learn

The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.

To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.

“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”

A first step

This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.
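Here is a rough sketch, in code, of the signal chain Kubota describes: a burst of neurotransmitter reacts at the first electrode, the resulting ions drift across the trench and shift the second electrode’s conductance, and a fraction of that shift is retained as “learning.” The retention fraction and all of the numbers are illustrative assumptions on my part, not values from the paper.

```python
# Illustrative sketch of the signal chain described above: neurotransmitter
# release -> electrochemical reaction at the first electrode -> ion flux
# across the trench -> partially retained change in the second electrode's
# conductance. The retention fraction and amounts are made-up placeholders.

RETENTION = 0.3   # fraction of each conductance shift that persists

def release_event(g_retained, amount):
    """One burst of neurotransmitter transiently shifts the device state;
    a fraction of the shift is retained after the burst ends."""
    transient_shift = 0.01 * amount          # ion-mediated modulation
    return g_retained + RETENTION * transient_shift

g = 0.0
for burst in (5, 5, 5):   # repeated activity strengthens the connection
    g = release_event(g, burst)
print(f"retained conductance change: {g:.3f}")
```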

Here’s a link to and a citation for the paper,

A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0703-y Published: 15 June 2020

This paper is behind a paywall.

A tangle of silver nanowires for brain-like action

I’ve been meaning to get to this news item from late 2019 as it features work from a team that I’ve been following for a number of years now. First mentioned here in an October 17, 2011 posting, James Gimzewski has been working with researchers at the University of California at Los Angeles (UCLA) and researchers at Japan’s National Institute for Materials Science (NIMS) on neuromorphic computing.

This particular research had a protracted rollout with the paper being published in October 2019 and the last news item about it being published in mid-December 2019.

A December 17, 2019 news item on Nanowerk was the first to alert me to this new work (Note: A link has been removed),

UCLA scientists James Gimzewski and Adam Stieg are part of an international research team that has taken a significant stride toward the goal of creating thinking machines.

Led by researchers at Japan’s National Institute for Materials Science, the team created an experimental device that exhibited characteristics analogous to certain behaviors of the brain — learning, memorization, forgetting, wakefulness and sleep. The paper, published in Scientific Reports (“Emergent dynamics of neuromorphic nanowire networks”), describes a network in a state of continuous flux.

A December 16, 2019 UCLA news release, which originated the news item, offers more detail (Note: A link has been removed),

“This is a system between order and chaos, on the edge of chaos,” said Gimzewski, a UCLA distinguished professor of chemistry and biochemistry, a member of the California NanoSystems Institute at UCLA and a co-author of the study. “The way that the device constantly evolves and shifts mimics the human brain. It can come up with different types of behavior patterns that don’t repeat themselves.”

The research is one early step along a path that could eventually lead to computers that physically and functionally resemble the brain — machines that may be capable of solving problems that contemporary computers struggle with, and that may require much less power than today’s computers do.

The device the researchers studied is made of a tangle of silver nanowires — with an average diameter of just 360 nanometers. (A nanometer is one-billionth of a meter.) The nanowires were coated in an insulating polymer about 1 nanometer thick. Overall, the device itself measured about 10 square millimeters — so small that it would take 25 of them to cover a dime.

Allowed to randomly self-assemble on a silicon wafer, the nanowires formed highly interconnected structures that are remarkably similar to those that form the neocortex, the part of the brain involved with higher functions such as language, perception and cognition.

One trait that differentiates the nanowire network from conventional electronic circuits is that electrons flowing through them cause the physical configuration of the network to change. In the study, electrical current caused silver atoms to migrate from within the polymer coating and form connections where two nanowires overlap. The system had about 10 million of these junctions, which are analogous to the synapses where brain cells connect and communicate.

The researchers attached two electrodes to the brain-like mesh to profile how the network performed. They observed “emergent behavior,” meaning that the network displayed characteristics as a whole that could not be attributed to the individual parts that make it up. This is another trait that makes the network resemble the brain and sets it apart from conventional computers.

After current flowed through the network, the connections between nanowires persisted for as much as one minute in some cases, which resembled the process of learning and memorization in the brain. Other times, the connections shut down abruptly after the charge ended, mimicking the brain’s process of forgetting.

In other experiments, the research team found that with less power flowing in, the device exhibited behavior that corresponds to what neuroscientists see when they use functional MRI scanning to take images of the brain of a sleeping person. With more power, the nanowire network’s behavior corresponded to that of the wakeful brain.

The paper is the latest in a series of publications examining nanowire networks as a brain-inspired system, an area of research that Gimzewski helped pioneer along with Stieg, a UCLA research scientist and an associate director of CNSI.

“Our approach may be useful for generating new types of hardware that are both energy-efficient and capable of processing complex datasets that challenge the limits of modern computers,” said Stieg, a co-author of the study.

The borderline-chaotic activity of the nanowire network resembles not only signaling within the brain but also other natural systems such as weather patterns. That could mean that, with further development, future versions of the device could help model such complex systems.

In other experiments, Gimzewski and Stieg already have coaxed a silver nanowire device to successfully predict statistical trends in Los Angeles traffic patterns based on previous years’ traffic data.

Because of their similarities to the inner workings of the brain, future devices based on nanowire technology could also demonstrate energy efficiency like the brain’s own processing. The human brain operates on power roughly equivalent to what’s used by a 20-watt incandescent bulb. By contrast, computer servers where work-intensive tasks take place — from training for machine learning to executing internet searches — can use the equivalent of many households’ worth of energy, with the attendant carbon footprint.

“In our studies, we have a broader mission than just reprogramming existing computers,” Gimzewski said. “Our vision is a system that will eventually be able to handle tasks that are closer to the way the human being operates.”

The study’s first author, Adrian Diaz-Alvarez, is from the International Center for Material Nanoarchitectonics at Japan’s National Institute for Materials Science. Co-authors include Tomonobu Nakayama and Rintaro Higuchi, also of NIMS; and Zdenka Kuncic at the University of Sydney in Australia.

Caption: (a) Micrograph of the neuromorphic network fabricated by this research team. The network contains numerous junctions between nanowires, which operate as synaptic elements. When voltage is applied to the network (between the green probes), current pathways (orange) are formed in the network. (b) A human brain and one of its neuronal networks. The brain is known to have a complex network structure and to operate by means of electrical signal propagation across the network. Credit: NIMS

A November 11, 2019 National Institute for Materials Science (Japan) press release (also on EurekAlert but dated December 25, 2019) first announced the news,

An international joint research team led by NIMS succeeded in fabricating a neuromorphic network composed of numerous metallic nanowires. Using this network, the team was able to generate electrical characteristics similar to those associated with higher order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert and returning to calm. The team then clarified the mechanisms that induced these electrical characteristics.

The development of artificial intelligence (AI) techniques has been rapidly advancing in recent years and has begun impacting our lives in various ways. Although AI processes information in a manner similar to the human brain, the mechanisms by which human brains operate are still largely unknown. Fundamental brain components, such as neurons and the junctions between them (synapses), have been studied in detail. However, many questions concerning the brain as a collective whole need to be answered. For example, we still do not fully understand how the brain performs such functions as memorization, learning and forgetting, and how the brain becomes alert and returns to calm. In addition, live brains are difficult to manipulate in experimental research. For these reasons, the brain remains a “mysterious organ.” A different approach to brain research, in which materials and systems capable of performing brain-like functions are created and their mechanisms are investigated, may be effective in identifying new applications of brain-like information processing and advancing brain science.

The joint research team recently built a complex brain-like network by integrating numerous silver (Ag) nanowires coated with a polymer (PVP) insulating layer approximately 1 nanometer in thickness. A junction between two nanowires forms a variable resistive element (i.e., a synaptic element) that behaves like a neuronal synapse. This nanowire network, which contains a large number of intricately interacting synaptic elements, forms a “neuromorphic network”. When a voltage was applied to the neuromorphic network, it appeared to “struggle” to find optimal current pathways (i.e., the most electrically efficient pathways). The research team measured the processes of current pathway formation, retention and deactivation while electric current was flowing through the network and found that these processes always fluctuate as they progress, similar to the human brain’s memorization, learning, and forgetting processes. The observed temporal fluctuations also resemble the processes by which the brain becomes alert or returns to calm. Brain-like functions simulated by the neuromorphic network were found to occur as the huge number of synaptic elements in the network collectively work to optimize current transport, in other words, as a result of self-organized and emergent dynamic processes.

The research team is currently developing a brain-like memory device using the neuromorphic network material. The team intends to design the memory device to operate using fundamentally different principles than those used in current computers. For example, while computers are currently designed to spend as much time and electricity as necessary in pursuit of absolutely optimum solutions, the new memory device is intended to make a quick decision within particular limits even though the solution generated may not be absolutely optimum. The team also hopes that this research will facilitate understanding of the brain’s information processing mechanisms.

This project was carried out by an international joint research team led by Tomonobu Nakayama (Deputy Director, International Center for Materials Nanoarchitectonics (WPI-MANA), NIMS), Adrian Diaz Alvarez (Postdoctoral Researcher, WPI-MANA, NIMS), Zdenka Kuncic (Professor, School of Physics, University of Sydney, Australia) and James K. Gimzewski (Professor, California NanoSystems Institute, University of California Los Angeles, USA).
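Before the citation, here is a toy model of the behaviour both releases describe: a single nanowire junction whose conductance strengthens while current is driven through it and then relaxes back toward its baseline once the drive stops, so that recently used connections look “memorized” and long-idle ones look “forgotten.” The growth and decay constants are arbitrary illustrative values, not measurements from the study.

```python
# Toy picture of a single nanowire-junction "synapse": its conductance grows
# while current is driven through it and relaxes back toward baseline once
# the drive stops, loosely mimicking the learning/forgetting behaviour
# described above. Growth and decay constants are arbitrary illustrations.

import math

def junction_conductance(g0, drive_time, wait_time,
                         growth=0.2, tau_decay=30.0):
    """Conductance after driving current for drive_time seconds and then
    waiting wait_time seconds with no drive (times in seconds)."""
    g_driven = g0 + growth * drive_time                              # strengthening
    return g0 + (g_driven - g0) * math.exp(-wait_time / tau_decay)   # relaxation

print(junction_conductance(0.1, drive_time=5, wait_time=0))   # "memorized"
print(junction_conductance(0.1, drive_time=5, wait_time=60))  # mostly "forgotten"
```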

Here at last is a link to and a citation for the paper,

Emergent dynamics of neuromorphic nanowire networks by Adrian Diaz-Alvarez, Rintaro Higuchi, Paula Sanz-Leon, Ido Marcus, Yoshitaka Shingaya, Adam Z. Stieg, James K. Gimzewski, Zdenka Kuncic & Tomonobu Nakayama. Scientific Reports volume 9, Article number: 14920 (2019) DOI: https://doi.org/10.1038/s41598-019-51330-6 Published: 17 October 2019

This paper is open access.

Brain-inspired electronics with organic memristors for wearable computing

I went down a rabbit hole while trying to figure out the difference between ‘organic’ memristors and standard memristors. I have put the results of my investigation at the end of this post. First, there’s the news.

An April 21, 2020 news item on ScienceDaily explains why researchers are so focused on memristors and brainlike computing,

The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics and bring forth the fourth Industrial Revolution. The pressing question for many researchers is how to handle this technological revolution.

“It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets,” said Thirumalai Venkatesan, one of the authors of a paper published in Applied Physics Reviews, from AIP Publishing.

“Today’s computing is way too energy-intensive to handle big data. We need to rethink our approaches to computation on all levels: materials, devices and architecture that can enable ultralow energy computing.”

An April 21, 2020 American Institute of Physics (AIP) news release (also on EurekAlert), which originated the news item, describes the authors’ approach to the problems with organic memristors,

Brain-inspired electronics with organic memristors could offer a functionally promising and cost-effective platform, according to Venkatesan. Memristive devices are electronic devices with an inherent memory that are capable of both storing data and performing computation. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.

Until now, oxides have been the leading candidate as the optimum material for memristors. Different material systems have been proposed but none have been successful so far.

“Over the last 20 years, there have been several attempts to come up with organic memristors, but none of those have shown any promise,” said Sreetosh Goswami, lead author on the paper. “The primary reason behind this failure is their lack of stability, reproducibility and ambiguity in mechanistic understanding. At a device level, we are now able to solve most of these problems.”

This new generation of organic memristors is based on metal azo complex devices, which are the brainchild of Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science in Kolkata and another author on the paper.

“In thin films, the molecules are so robust and stable that these devices can eventually be the right choice for many wearable and implantable technologies or a body net, because these could be bendable and stretchable,” said Sreebrata Goswami. A body net is a series of wireless sensors that stick to the skin and track health.

The next challenge will be to produce these organic memristors at scale, said Venkatesan.

“Now we are making individual devices in the laboratory. We need to make circuits for large-scale functional implementation of these devices.”

Caption: The device structure at a molecular level. The gold nanoparticles on the bottom electrode enhance the field, enabling ultra-low-energy operation of the molecular device. Credit: Sreetosh Goswami, Sreebrata Goswami and Thirumalai Venky Venkatesan

Here’s a link to and a citation for the paper,

An organic approach to low energy memory and brain inspired electronics by Sreetosh Goswami, Sreebrata Goswami, and T. Venkatesan. Applied Physics Reviews 7, 021303 (2020) DOI: https://doi.org/10.1063/1.5124155

This paper is open access.

Basics about memristors and organic memristors

This undated article on Nanowerk provides a relatively complete and technical description of memristors in general (Note: A link has been removed),

A memristor (named as a portmanteau of memory and resistor) is a non-volatile electronic memory device that was first theorized by Leon Ong Chua in 1971 as the fourth fundamental two-terminal circuit element following the resistor, the capacitor, and the inductor (IEEE Transactions on Circuit Theory, “Memristor-The missing circuit element”).

Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function). Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if the device loses power.

However, it was only almost 40 years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behavior. …

The article on Nanowerk includes an embedded video presentation on memristors given by Stanley Williams (also known as R. Stanley Williams).
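For readers who want a more concrete feel for “a resistance that can be programmed and then stored,” here is a short numerical sketch of the linear ionic-drift model commonly used to describe that 2008 HP Labs device (Strukov et al.): the resistance depends on a state variable, the width of the doped region, and the current flowing through the device moves that state variable. The parameter values below are round illustrative numbers, not the published ones.

```python
# Short numerical sketch of the linear ionic-drift memristor model often used
# to describe the 2008 HP Labs device. The state variable w is the width of
# the doped region; the resistance depends on w, and the current through the
# device moves w. Parameter values are round illustrative numbers.

D     = 10e-9        # film thickness (m)
R_ON  = 100.0        # resistance when fully doped (ohm)
R_OFF = 16e3         # resistance when undoped (ohm)
MU_V  = 1e-14        # dopant mobility (m^2 s^-1 V^-1)
DT    = 1e-5         # integration time step (s)

def memristance(w):
    return R_ON * (w / D) + R_OFF * (1 - w / D)

def apply_voltage(w, volts, seconds):
    """Integrate dw/dt = mu_v * (R_on / D) * i(t) under a constant voltage."""
    for _ in range(int(seconds / DT)):
        i = volts / memristance(w)          # current through the device
        w += MU_V * (R_ON / D) * i * DT     # dopant boundary drifts
        w = min(D, max(0.0, w))             # state stays inside the film
    return w

w = 0.1 * D                                       # start mostly "OFF"
print(f"before write: {memristance(w):.0f} ohm")
w = apply_voltage(w, volts=1.0, seconds=0.5)      # programming pulse
print(f"after write:  {memristance(w):.0f} ohm")  # lower resistance, and it stays
```

The key point of the model is the last line: once the pulse ends, nothing pushes the state variable back, so the programmed resistance is retained without power.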

Mention of an ‘organic’ memristor can be found in an October 31, 2017 article by Ryan Whitwam,

The memristor is composed of the transition metal ruthenium complexed with “azo-aromatic ligands.” [emphasis mine] The theoretical work enabling this material was performed at Yale, and the organic molecules were synthesized at the Indian Association for the Cultivation of Sciences. …

I highlighted ‘ligands’ because that appears to be the difference. However, there is more than one type of ligand on Wikipedia.

First, there’s the Ligand (biochemistry) entry (Note: Links have been removed),

In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. …

Then, there’s the Ligand entry,

In coordination chemistry, a ligand[help 1] is an ion or molecule (functional group) that binds to a central metal atom to form a coordination complex …

Finally, there’s the Ligand (disambiguation) entry (Note: Links have been removed),

  • Ligand, an atom, ion, or functional group that donates one or more of its electrons through a coordinate covalent bond to one or more central atoms or ions
  • Ligand (biochemistry), a substance that binds to a protein
  • a ‘guest’ in host–guest chemistry

I did take a look at the paper and did not see any references to proteins or other biomolecules that I could recognize as such. I’m not sure why the researchers are describing their device as an ‘organic’ memristor but this may reflect a shortcoming in the definitions I have found or shortcomings in my reading of the paper rather than an error on their parts.

Hopefully, more research will be forthcoming and it will be possible to better understand the terminology.

New design directions to increase variety, efficiency, selectivity and reliability for memristive devices

A May 11, 2020 news item on ScienceDaily provides a description of the current ‘memristor scene’ along with an announcement about a piece of recent research,

Scientists around the world are intensively working on memristive devices, which are capable of extremely low-power operation and behave similarly to neurons in the brain. Researchers from the Jülich Aachen Research Alliance (JARA) and the German technology group Heraeus have now discovered how to systematically control the functional behaviour of these elements. The smallest differences in material composition turn out to be crucial: differences so small that until now experts had failed to notice them. The researchers’ design directions could help to increase variety, efficiency, selectivity and reliability for memristive technology-based applications, for example for energy-efficient, non-volatile storage devices or neuro-inspired computers.

Memristors are considered a highly promising alternative to conventional nanoelectronic elements in computer chips. Because of their advantageous functionalities, their development is being eagerly pursued by many companies and research institutions around the world. The Japanese corporation NEC already installed the first prototypes in space satellites back in 2017. Many other leading companies such as Hewlett Packard, Intel, IBM, and Samsung are working to bring innovative types of computer and storage devices based on memristive elements to market.

Fundamentally, memristors are simply “resistors with memory,” in which high resistance can be switched to low resistance and back again. This means in principle that the devices are adaptive, similar to a synapse in a biological nervous system. “Memristive elements are considered ideal candidates for neuro-inspired computers modelled on the brain, which are attracting a great deal of interest in connection with deep learning and artificial intelligence,” says Dr. Ilia Valov of the Peter Grünberg Institute (PGI-7) at Forschungszentrum Jülich.

In the latest issue of the open access journal Science Advances, he and his team describe how the switching and neuromorphic behaviour of memristive elements can be selectively controlled. According to their findings, the crucial factor is the purity of the switching oxide layer. “Depending on whether you use a material that is 99.999999 % pure, and whether you introduce one foreign atom into ten million atoms of pure material or into one hundred atoms, the properties of the memristive elements vary substantially” says Valov.

A May 11, 2020 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, delves into the theme of increasing control over memristive systems,

This effect had so far been overlooked by experts. It can be used very specifically for designing memristive systems, in a similar way to doping semiconductors in information technology. “The introduction of foreign atoms allows us to control the solubility and transport properties of the thin oxide layers,” explains Dr. Christian Neumann of the technology group Heraeus. He has been contributing his materials expertise to the project ever since the initial idea was conceived in 2015.

“In recent years there has been remarkable progress in the development and use of memristive devices, however that progress has often been achieved on a purely empirical basis,” according to Valov. Using the insights that his team has gained, manufacturers could now methodically develop memristive elements selecting the functions they need. The higher the doping concentration, the slower the resistance of the elements changes as the number of incoming voltage pulses increases and decreases, and the more stable the resistance remains. “This means that we have found a way for designing types of artificial synapses with differing excitability,” explains Valov.

Design specification for artificial synapses

The brain’s ability to learn and retain information can largely be attributed to the fact that the connections between neurons are strengthened when they are frequently used. Memristive devices, of which there are different types such as electrochemical metallization cells (ECMs) or valence change memory cells (VCMs), behave similarly. When these components are used, the conductivity increases as the number of incoming voltage pulses increases. The changes can also be reversed by applying voltage pulses of the opposite polarity.

The JARA researchers conducted their systematic experiments on ECMs, which consist of a copper electrode, a platinum electrode, and a layer of silicon dioxide between them. Thanks to the cooperation with Heraeus researchers, the JARA scientists had access to different types of silicon dioxide: one with a purity of 99.999999 % – also called 8N silicon dioxide – and others containing 100 to 10,000 ppm (parts per million) of foreign atoms. The precisely doped glass used in their experiments was specially developed and manufactured by quartz glass specialist Heraeus Conamic, which also holds the patent for the procedure. Copper and protons acted as mobile doping agents, while aluminium and gallium were used as non-volatile doping.

Synapses, the connections between neurons, have the ability to transmit signals with varying degrees of strength when they are excited by a quick succession of electrical impulses. One effect of this repeated activity is to increase the concentration of calcium ions, with the result that more neurotransmitters are emitted. Depending on the activity, other effects cause long-term structural changes, which impact the strength of the transmission for several hours, or potentially even for the rest of the person’s life. Memristive elements allow the strength of the electrical transmission to be changed in a similar way to synaptic connections, by applying a voltage. In electrochemical metallization cells (ECMs), a metallic filament develops between the two metal electrodes, thus increasing conductivity. Applying voltage pulses with reversed polarity causes the filament to shrink again until the cell reaches its initial high resistance state. Copyright: Forschungszentrum Jülich / Tobias Schlößer

Record switching time confirms theory

Based on their series of experiments, the researchers were able to show that the ECMs’ switching times change as the number of doping atoms changes. If the switching layer is made of 8N silicon dioxide, the memristive component switches in only 1.4 nanoseconds. To date, the fastest value ever measured for ECMs had been around 10 nanoseconds. By doping the oxide layer of the components with up to 10,000 ppm of foreign atoms, the switching time was prolonged into the range of milliseconds. “We can also theoretically explain our results. This is helping us to understand the physico-chemical processes on the nanoscale and apply this knowledge in practice,” says Valov. Based on generally applicable theoretical considerations, supported by experimental results, some also documented in the literature, he is convinced that the doping/impurity effect occurs and can be employed in all types of memristive elements.

Top: In memristive elements (ECMs) with an undoped, high-purity switching layer of silicon oxide (SiO2), copper ions can move very fast. A filament of copper atoms forms correspondingly fast on the platinum electrode. This increases the total device conductivity and, correspondingly, the capacity. Due to the high mobility of the ions, however, this filament is unstable at low forming voltages. Center: Gallium ions (Ga3+), which are introduced into the cell (non-volatile doping), bind copper ions (Cu2+) in the switching layer. The movement of the ions slows down, leading to slower switching, but the filament, once formed, remains stable for longer. Bottom: Doping with aluminium ions (Al3+) slows down the process even more, since aluminium ions bind copper ions even more strongly than gallium ions. Filament growth is even slower, while at the same time the stability of the filament is further increased. Depending on the chemical properties of the introduced doping elements, memristive cells – the artificial synapses – can be created with tailor-made switching and neuromorphic properties. Copyright: Forschungszentrum Jülich / Tobias Schloesser
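Here is a rough way to picture the “excitability” knob described above in code: each voltage pulse changes a cell’s conductance by a step, and heavier doping makes that step smaller and more gradual. The functional form and the numbers are my own illustration, not the model used in the paper.

```python
# Rough sketch of a pulse-programmed cell with a doping-dependent
# "excitability": heavier doping makes each pulse change the conductance
# more slowly and more gradually. The functional form and the numbers
# are illustrative assumptions, not the model used in the paper.

def program(pulses, doping_ppm, g=0.1, g_min=0.1, g_max=1.0):
    """Apply a train of voltage pulses to one cell; a negative pulse count
    stands for pulses of the opposite polarity."""
    rate = 0.05 / (1.0 + doping_ppm / 1000.0)   # heavier doping -> smaller steps
    return min(g_max, max(g_min, g + rate * pulses))

print(program(10, doping_ppm=0))       # high-purity oxide: fast, large change
print(program(10, doping_ppm=10_000))  # heavily doped oxide: slow, gradual change
```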

Here’s a link to and a citation for the paper,

Design of defect-chemical properties and device performance in memristive systems by M. Lübben, F. Cüppers, J. Mohr, M. von Witzleben, U. Breuer, R. Waser, C. Neumann, and I. Valov. Science Advances 08 May 2020: Vol. 6, no. 19, eaaz9079 DOI: 10.1126/sciadv.aaz9079

This paper is open access.

For anyone curious about the German technology group, Heraeus, there’s a fascinating history in its Wikipedia entry. The technology company was formally founded in 1851 but it can be traced back to the 17th century and the founding family’s apothecary.

Second order memristor

I think this is my first encounter with a second-order memristor. An August 28, 2019 news item on Nanowerk announces the research (Note: A link has been removed),

Researchers from the Moscow Institute of Physics and Technology [MIPT] have created a device that acts like a synapse in the living brain, storing information and gradually forgetting it when not accessed for a long time. Known as a second-order memristor, the new device is based on hafnium oxide and offers prospects for designing analog neurocomputers imitating the way a biological brain learns.

An August 28, 2019 MIPT press release (also on EurekAlert), which originated the news item, provides an explanation for neuromorphic computing (analog neurocomputers; brainlike computing), the difference between a first-order and second-order memristor, and an in depth view of the research,

Neurocomputers, which enable artificial intelligence, emulate the way the brain works. It stores data in the form of synapses, a network of connections between the nerve cells, or neurons. Most neurocomputers have a conventional digital architecture and use mathematical models to invoke virtual neurons and synapses.

Alternatively, an actual on-chip electronic component could stand for each neuron and synapse in the network. This so-called analog approach has the potential to drastically speed up computations and reduce energy costs.

The core component of a hypothetical analog neurocomputer is the memristor. The word is a portmanteau of “memory” and “resistor,” which pretty much sums up what it is: a memory cell acting as a resistor. Loosely speaking, a high resistance encodes a zero, and a low resistance encodes a one. This is analogous to how a synapse conducts a signal between two neurons (one), while the absence of a synapse results in no signal, a zero.

But there is a catch: In an actual brain, the active synapses tend to strengthen over time, while the opposite is true for inactive ones. This phenomenon known as synaptic plasticity is one of the foundations of natural learning and memory. It explains the biology of cramming for an exam and why our seldom accessed memories fade.

Proposed in 2015, the second-order memristor is an attempt to reproduce natural memory, complete with synaptic plasticity. The first mechanism for implementing this involves forming nanosized conductive bridges across the memristor. While initially decreasing resistance, they naturally decay with time, emulating forgetfulness.

“The problem with this solution is that the device tends to change its behavior over time and breaks down after prolonged operation,” said the study’s lead author Anastasia Chouprik from MIPT’s Neurocomputing Systems Lab. “The mechanism we used to implement synaptic plasticity is more robust. In fact, after switching the state of the system 100 billion times, it was still operating normally, so my colleagues stopped the endurance test.”

Instead of nanobridges, the MIPT team relied on hafnium oxide to imitate natural memory. This material is ferroelectric: Its internal bound charge distribution — electric polarization — changes in response to an external electric field. If the field is then removed, the material retains its acquired polarization, the way a ferromagnet remains magnetized.

The physicists implemented their second-order memristor as a ferroelectric tunnel junction — two electrodes interlaid with a thin hafnium oxide film (fig. 1, right). The device can be switched between its low and high resistance states by means of electric pulses, which change the ferroelectric film’s polarization and thus its resistance.

“The main challenge that we faced was figuring out the right ferroelectric layer thickness,” Chouprik added. “Four nanometers proved to be ideal. Make it just one nanometer thinner, and the ferroelectric properties are gone, while a thicker film is too wide a barrier for the electrons to tunnel through. And it is only the tunneling current that we can modulate by switching polarization.”

What gives hafnium oxide an edge over other ferroelectric materials, such as barium titanate, is that it is already used by current silicon technology. For example, Intel has been manufacturing microchips based on a hafnium compound since 2007. This makes introducing hafnium-based devices like the memristor reported in this story far easier and cheaper than those using a brand-new material.

In a feat of ingenuity, the researchers implemented “forgetfulness” by leveraging the defects at the interface between silicon and hafnium oxide. Those very imperfections used to be seen as a detriment to hafnium-based microprocessors, and engineers had to find a way around them by incorporating other elements into the compound. Instead, the MIPT team exploited the defects, which make memristor conductivity die down with time, just like natural memories.

Vitalii Mikheev, the first author of the paper, shared the team’s future plans: “We are going to look into the interplay between the various mechanisms switching the resistance in our memristor. It turns out that the ferroelectric effect may not be the only one involved. To further improve the devices, we will need to distinguish between the mechanisms and learn to combine them.”

According to the physicists, they will move on with the fundamental research on the properties of hafnium oxide to make the nonvolatile random access memory cells more reliable. The team is also investigating the possibility of transferring their devices onto a flexible substrate, for use in flexible electronics.

Last year, the researchers offered a detailed description of how applying an electric field to hafnium oxide films affects their polarization. It is this very process that enables reducing ferroelectric memristor resistance, which emulates synapse strengthening in a biological brain. The team also works on neuromorphic computing systems with a digital architecture.
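A quick sketch of what “second order” buys you, in code: alongside the programmed conductance there is an internal state that decays on its own, so a synapse that stops being accessed gradually fades back toward its baseline. The time constant and step size are arbitrary illustrative values, not the hafnium oxide device’s.

```python
# Toy "second-order" synapse: alongside a baseline conductance there is an
# internal state that each write pulse boosts and that then decays on its
# own, so an unused connection fades back toward baseline. Time constant
# and step size are arbitrary illustrative values.

import math

class SecondOrderSynapse:
    def __init__(self, g_base=0.1, tau=10.0):
        self.g_base = g_base   # non-decaying baseline conductance
        self.boost = 0.0       # volatile internal state
        self.tau = tau         # relaxation time constant (s)

    def pulse(self):
        self.boost += 0.05     # each access strengthens the connection

    def read(self, elapsed):
        """Conductance after `elapsed` seconds with no further pulses."""
        return self.g_base + self.boost * math.exp(-elapsed / self.tau)

s = SecondOrderSynapse()
for _ in range(4):
    s.pulse()
print(f"just after use: {s.read(elapsed=1):.3f}")
print(f"a minute later: {s.read(elapsed=60):.3f}")
```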

MIPT has provided this image illustrating the research,

Caption: The left image shows a synapse from a biological brain, the inspiration behind its artificial analogue (right). The latter is a memristor device implemented as a ferroelectric tunnel junction — that is, a thin hafnium oxide film (pink) interlaid between a titanium nitride electrode (blue cable) and a silicon substrate (marine blue), which doubles up as the second electrode. Electric pulses switch the memristor between its high and low resistance states by changing hafnium oxide polarization, and therefore its conductivity. Credit: Elena Khavina/MIPT Press Office

Here’s a link to and a citation for the paper,

Ferroelectric Second-Order Memristor by Vitalii Mikheev, Anastasia Chouprik, Yury Lebedinskii, Sergei Zarubin, Yury Matveyev, Ekaterina Kondratyuk, Maxim G. Kozodaev, Andrey M. Markeev, Andrei Zenkevich, Dmitrii Negrov. ACS Appl. Mater. Interfaces 2019, 11, 35, 32108–32114 DOI: https://doi.org/10.1021/acsami.9b08189 Publication Date: August 12, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Bad battery, good synapse from Stanford University

A May 4, 2019 news item on ScienceDaily announces the latest advance made by Stanford University and Sandia National Laboratories in the field of neuromorphic (brainlike) computing,

The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

In a paper published online by the journal Science on April 25 [2019], the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

An April 25, 2019 Stanford University news release (also on EurekAlert but published May 3, 2019) by Taylor Kubota, which originated the news item, expands on the theme,

A bad battery, a good synapse

The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than a more traditional computer system where the data is processed first and then later moved to storage.

Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.

In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to its speed – without seeing any degradation in its behavior.

“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”
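The “programming several artificial synapses simultaneously” point is easy to picture with a small array in code: instead of writing one device per operation, a whole row is updated in a single parallel step. The NumPy array below simply stands in for the 3-by-3 device array; it is an illustration, not the team’s actual programming protocol.

```python
# Small illustration of the difference between programming devices one at a
# time and updating a whole row of an array in one parallel step. The NumPy
# array below simply stands in for the 3-by-3 device array.

import numpy as np

weights = np.full((3, 3), 0.5)           # conductance states of a 3x3 array

def program_one(w, row, col, delta):
    w[row, col] += delta                  # one device per write operation

def program_row(w, row, deltas):
    w[row, :] += deltas                   # a whole row updated in one step

program_one(weights, 0, 0, 0.1)
program_row(weights, 1, np.array([0.1, -0.2, 0.05]))
print(weights)
```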

Here’s a link to and a citation for the paper,

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing by Elliot J. Fuller, Scott T. Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D. James, Matthew J. Marinella, J. Joshua Yang, Alberto Salleo, A. Alec Talin. Science 25 Apr 2019: eaaw5581 DOI: 10.1126/science.aaw5581

This paper is behind a paywall.

For anyone interested in more about brainlike/brain-like/neuromorphic computing/neuromorphic engineering/memristors, use any or all of those terms in this blog’s search engine.

Brainlike computing with spintronic devices

Adding to the body of ‘memristor’ research I have here, there’s an April 17, 2019 news item on Nanowerk announcing the development of ‘memristor’ hardware by Japanese researchers (Note: A link has been removed),

A research group from Tohoku University has developed spintronics devices which are promising for future energy-efficient and adaptive computing systems, as they behave like neurons and synapses in the human brain (Advanced Materials, “Artificial Neuron and Synapse Realized in an Antiferromagnet/Ferromagnet Heterostructure Using Dynamics of Spin–Orbit Torque Switching”).

Just because this ‘synapse’ is pretty,

Courtesy: Tohoku University

An April 16, 2019 Tohoku University press release, which originated the news item, expands on the theme,

Today’s information society is built on digital computers that have evolved drastically for half a century and are capable of executing complicated tasks reliably. The human brain, by contrast, operates under very limited power and is capable of executing complex tasks efficiently using an architecture that is vastly different from that of digital computers.

So the development of computing schemes or hardware inspired by the processing of information in the brain is of broad interest to scientists in fields ranging from physics, chemistry, material science and mathematics, to electronics and computer science.

In computing, there are various ways to implement the processing of information by a brain. A spiking neural network is a kind of implementation that closely mimics the brain’s architecture and temporal information processing. Successful implementation of a spiking neural network requires dedicated hardware with artificial neurons and synapses that are designed to exhibit the dynamics of biological neurons and synapses.

Here, the artificial neuron and synapse would ideally be made of the same material system and operated under the same working principle. However, this has been a challenging issue due to the fundamentally different nature of the neuron and synapse in biological neural networks.

The research group – which includes Professor Hideo Ohno (currently the university president), Associate Professor Shunsuke Fukami, Dr. Aleksandr Kurenkov and Professor Yoshihiko Horio – created an artificial neuron and synapse by using spintronics technology. Spintronics is an academic field that aims to simultaneously use an electron’s electric (charge) and magnetic (spin) properties.

The research group had previously developed a functional material system consisting of antiferromagnetic and ferromagnetic materials. This time, they prepared artificial neuronal and synaptic devices microfabricated from the material system, which demonstrated the fundamental behaviors of biological neurons and synapses – leaky integrate-and-fire and spike-timing-dependent plasticity, respectively – based on the same concept of spintronics.

The spiking neural network is known to be advantageous over today’s artificial intelligence for the processing and prediction of temporal information. Expansion of the developed technology to unit-circuit, block and system levels is expected to lead to computers that can process time-varying information such as voice and video with a small amount of power, or edge devices that have the ability to adapt to users and the environment through usage.
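For anyone unfamiliar with the two behaviours named above, here are minimal software versions of a leaky integrate-and-fire neuron and a pair-based spike-timing-dependent plasticity rule. The parameters are generic textbook-style values and have nothing to do with the spintronic devices themselves.

```python
# Minimal software versions of the two behaviours named above: a leaky
# integrate-and-fire neuron and a pair-based spike-timing-dependent
# plasticity (STDP) rule. Parameters are generic textbook-style values.

import math

def lif_step(v, i_in, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new membrane potential, whether the neuron spiked)."""
    v += (dt / tau) * (-v + i_in)     # leak toward 0, drive toward the input
    if v >= v_thresh:
        return v_reset, True          # fire and reset
    return v, False

def stdp(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP: pre-before-post strengthens, post-before-pre weakens."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, w))

v, spikes = 0.0, 0
for _ in range(100):                  # 100 ms of constant drive
    v, fired = lif_step(v, i_in=1.5)
    spikes += fired
print(spikes, stdp(0.5, t_pre=0.010, t_post=0.015))
```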

Here’s a link to and a citation for the paper,

Artificial Neuron and Synapse Realized in an Antiferromagnet/Ferromagnet Heterostructure Using Dynamics of Spin–Orbit Torque Switching by Aleksandr Kurenkov, Samik DuttaGupta, Chaoliang Zhang, Shunsuke Fukami, Yoshihiko Horio, Hideo Ohno. Advanced Materials https://doi.org/10.1002/adma.201900636 First published: 16 April 2019

This paper is behind a paywall.

Mimicking the brain with an evolvable organic electrochemical transistor

Simone Fabiano and Jennifer Gerasimov have developed a learning transistor that mimics the way synapses function. Credit: Thor Balkhed

At a guess, this was originally a photograph which has been passed through some sort of programme to give it a painting-like quality.

Moving on to the research, I don’t see any reference to memristors (another of the ‘devices’ that mimic the human brain), so perhaps this is an entirely different way to mimic human brains? A February 5, 2019 news item on ScienceDaily announces the work from Linkoping University (Sweden),

A new transistor based on organic materials has been developed by scientists at Linköping University. It has the ability to learn, and is equipped with both short-term and long-term memory. The work is a major step on the way to creating technology that mimics the human brain.

A February 5, 2019 Linkoping University press release (also on EurekAlert), which originated the news item, describes this ‘nonmemristor’ research into brainlike computing in more detail,

Until now, brains have been unique in being able to create connections where there were none before. In a scientific article in Advanced Science, researchers from Linköping University describe a transistor that can create a new connection between an input and an output. They have incorporated the transistor into an electronic circuit that learns how to link a certain stimulus with an output signal, in the same way that a dog learns that the sound of a food bowl being prepared means that dinner is on the way.

A normal transistor acts as a valve that amplifies or dampens the output signal, depending on the characteristics of the input signal. In the organic electrochemical transistor that the researchers have developed, the channel in the transistor consists of an electropolymerised conducting polymer. The channel can be formed, grown or shrunk, or completely eliminated during operation. It can also be trained to react to a certain stimulus, a certain input signal, such that the transistor channel becomes more conductive and the output signal larger.

“It is the first time that real time formation of new electronic components is shown in neuromorphic devices”, says Simone Fabiano, principal investigator in organic nanoelectronics at the Laboratory of Organic Electronics, Campus Norrköping.

The channel is grown by increasing the degree of polymerisation of the material in the transistor channel, thereby increasing the number of polymer chains that conduct the signal. Alternatively, the material may be overoxidised (by applying a high voltage) and the channel becomes inactive. Temporary changes of the conductivity can also be achieved by doping or dedoping the material.

“We have shown that we can induce both short-term and permanent changes to how the transistor processes information, which is vital if one wants to mimic the ways that brain cells communicate with each other”, says Jennifer Gerasimov, postdoc in organic nanoelectronics and one of the authors of the article.

By changing the input signal, the strength of the transistor response can be modulated across a wide range, and connections can be created where none previously existed. This gives the transistor a behaviour that is comparable with that of the synapse, or the communication interface between two brain cells.
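
For a rough picture of that short-term/long-term distinction, the toy model below (Python; the class name, constants and update rules are entirely my own illustration, not the authors’ model) treats the channel conductance as the sum of a persistent “polymerised” component and a temporary “doping” component, both nudged by gate pulses.

```python
from dataclasses import dataclass

@dataclass
class EvolvableSynapse:
    """Toy model of an electropolymerised transistor channel (illustrative only)."""
    long_term: float = 0.1   # conductance from the grown polymer channel (persistent)
    short_term: float = 0.0  # conductance from temporary doping (decays between pulses)

    def gate_pulse(self, amplitude, polymerise=False):
        # A training pulse grows the channel permanently (polymerisation);
        # an ordinary pulse only dopes the existing channel temporarily.
        if polymerise:
            self.long_term += 0.05 * amplitude
        self.short_term += 0.1 * amplitude

    def relax(self, steps=1):
        # The doping contribution fades between pulses; the grown channel does not.
        for _ in range(steps):
            self.short_term *= 0.5

    @property
    def weight(self):
        return self.long_term + self.short_term

syn = EvolvableSynapse()
syn.gate_pulse(1.0)                            # short-term facilitation only
print("after one pulse:", round(syn.weight, 3))
syn.relax(steps=5)
print("after relaxing:", round(syn.weight, 3))  # back near the long-term value
syn.gate_pulse(1.0, polymerise=True)            # training pulse grows the channel
syn.relax(steps=5)
print("after training:", round(syn.weight, 3))  # persistently higher
```

The real device does something analogous in the material itself: temporary doping changes fade, while electropolymerisation leaves the channel lastingly more conductive.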

It is also a major step towards machine learning using organic electronics. Software-based artificial neural networks are currently used in machine learning to achieve what is known as “deep learning”. Such software requires signals to be transmitted among a huge number of nodes to simulate a single synapse, which takes considerable computing power and thus consumes considerable energy.

“We have developed hardware that does the same thing, using a single electronic component”, says Jennifer Gerasimov.

“Our organic electrochemical transistor can therefore carry out the work of thousands of normal transistors with an energy consumption that approaches the energy consumed when a human brain transmits signals between two cells”, confirms Simone Fabiano.

The transistor channel has not been constructed using the most common polymer used in organic electronics, PEDOT, but instead using a polymer of a newly-developed monomer, ETE-S, produced by Roger Gabrielsson, who also works at the Laboratory of Organic Electronics and is one of the authors of the article. ETE-S has several unique properties that make it perfectly suited for this application – it forms sufficiently long polymer chains, is water-soluble while the polymer form is not, and it produces polymers with an intermediate level of doping. The polymer PETE-S is produced in its doped form with an intrinsic negative charge to balance the positive charge carriers (it is p-doped).

Here’s a link to and a citation for the paper,

An Evolvable Organic Electrochemical Transistor for Neuromorphic Applications by Jennifer Y. Gerasimov, Roger Gabrielsson, Robert Forchheimer, Eleni Stavrinidou, Daniel T. Simon, Magnus Berggren, Simone Fabiano. Advanced Science DOI: https://doi.org/10.1002/advs.201801339 First published: 04 February 2019

This paper is open access.

There’s one other image associated with this work that I want to include here,

Synaptic transistor. Sketch of the organic electrochemical transistor, formed by electropolymerization of ETE‐S in the transistor channel. The electrolyte solution is confined by a PDMS well (not shown). In this work, we define the input at the gate as the presynaptic signal and the response at the drain as the postsynaptic terminal. During operation, the drain voltage is kept constant while the gate is pulsed. Synaptic weight is defined as the amplitude of the current response to a standard gate voltage characterization pulse of −0.1 V. Different memory functionalities are accessible by applying gate voltage. Courtesy: Linkoping University Researchers

Memristors with better mimicry of synapses

It seems to me it’s been quite a while since I’ve stumbled across a memristor story from the University of Michigan, but this one was worth waiting for. (Much of the research around memristors has to do with their potential application in neuromorphic (brainlike) computers.) From a December 17, 2018 news item on ScienceDaily,

A new electronic device developed at the University of Michigan can directly model the behaviors of a synapse, which is a connection between two neurons.

For the first time, the way that neurons share or compete for resources can be explored in hardware without the need for complicated circuits.

“Neuroscientists have argued that competition and cooperation behaviors among synapses are very important. Our new memristive devices allow us to implement a faithful model of these behaviors in a solid-state system,” said Wei Lu, U-M professor of electrical and computer engineering and senior author of the study in Nature Materials.

A December 17, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, provides an explanation of memristors and their ‘similarity’ to synapses while providing more details about this latest research,

Memristors are electrical resistors with memory–advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. They could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning.

The memristor is a good model for a synapse. It mimics the way that the connections between neurons strengthen or weaken when signals pass through them. But the changes in conductance typically come from changes in the shape of the channels of conductive material within the memristor. These channels–and the memristor’s ability to conduct electricity–could not be precisely controlled in previous devices.
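
To make the “resistor with memory” idea concrete, here is a minimal sketch (Python, with made-up parameters) of a generic memristive synapse: its conductance drifts up or down with the sign of the applied voltage, so the current it passes depends on the history of what was applied before. This is a cartoon of memristive behaviour in general, not a model of the Michigan device.

```python
class Memristor:
    """Generic memristive element: conductance depends on the voltage history."""

    def __init__(self, g_min=1e-6, g_max=1e-3, g=1e-5, rate=50.0):
        self.g_min, self.g_max = g_min, g_max
        self.g = g          # current conductance (siemens)
        self.rate = rate    # how strongly a pulse moves the conductance

    def apply_pulse(self, voltage, duration=1e-3):
        # Positive pulses potentiate (raise conductance), negative pulses depress it;
        # the state is clipped to the physical range of the device.
        dg = self.rate * voltage * duration * (self.g_max - self.g_min)
        self.g = min(self.g_max, max(self.g_min, self.g + dg))
        return voltage * self.g   # current that flowed during the pulse

m = Memristor()
for _ in range(5):
    m.apply_pulse(+1.0)               # repeated "write" pulses strengthen the synapse
print("conductance after potentiation:", m.g)
print("read current:", m.apply_pulse(+0.05))  # a small read pulse barely disturbs the state
```

The Michigan work is, in effect, about making the step from one conductance state to the next (the `dg` above) gradual and reproducible, which filamentary memristors struggle with.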

Now, the U-M team has made a memristor in which they have better command of the conducting pathways. They developed a new material out of the semiconductor molybdenum disulfide–a “two-dimensional” material that can be peeled into layers just a few atoms thick. Lu’s team injected lithium ions into the gaps between molybdenum disulfide layers.

They found that if there are enough lithium ions present, the molybdenum disulfide transforms its lattice structure, enabling electrons to run through the film easily as if it were a metal. But in areas with too few lithium ions, the molybdenum disulfide restores its original lattice structure and becomes a semiconductor, and electrical signals have a hard time getting through.

The lithium ions are easy to rearrange within the layer by sliding them with an electric field. This changes the size of the regions that conduct electricity little by little and thereby controls the device’s conductance.

“Because we change the ‘bulk’ properties of the film, the conductance change is much more gradual and much more controllable,” Lu said.

In addition to making the devices behave better, the layered structure enabled Lu’s team to link multiple memristors together through shared lithium ions–creating a kind of connection that is also found in brains. A single neuron’s dendrite, or its signal-receiving end, may have several synapses connecting it to the signaling arms of other neurons. Lu compares the availability of lithium ions to that of a protein that enables synapses to grow.

If the growth of one synapse releases these proteins, called plasticity-related proteins, other synapses nearby can also grow–this is cooperation. Neuroscientists have argued that cooperation between synapses helps to rapidly form vivid memories that last for decades and create associative memories, like a scent that reminds you of your grandmother’s house, for example. If the protein is scarce, one synapse will grow at the expense of the other–and this competition pares down our brains’ connections and keeps them from exploding with signals.

Lu’s team was able to show these phenomena directly using their memristor devices. In the competition scenario, lithium ions were drained away from one side of the device. The side with the lithium ions increased its conductance, emulating the growth, and the conductance of the device with little lithium was stunted.

In a cooperation scenario, they made a memristor network with four devices that can exchange lithium ions, and then siphoned some lithium ions from one device out to the others. In this case, not only could the lithium donor increase its conductance–the other three devices could too, although their signals weren’t as strong.
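
A crude way to picture how a shared ion supply can yield both behaviours is sketched below (Python; entirely illustrative, not the authors’ model): when the shared pool is generous, every device gets the growth it asks for (cooperation), and when it is scarce, the devices split what is available and each grows only at the others’ expense (competition).

```python
def grow_synapses(requests, shared_ions):
    """Toy picture of ion-mediated cooperation and competition between devices.

    `requests` maps device names to how many ions each wants in order to grow;
    `shared_ions` is the pool available to the whole network. The conductance
    gain here is simply the number of ions a device receives (illustrative units).
    """
    total_requested = sum(requests.values())
    gains = {}
    for name, wanted in requests.items():
        if total_requested <= shared_ions:
            gains[name] = wanted  # cooperation: the pool covers everyone, all grow
        else:
            # competition: the scarce pool is split in proportion to demand,
            # so one device's growth comes at the expense of the others
            gains[name] = shared_ions * wanted / total_requested
    return gains

print("cooperation:", grow_synapses({"A": 2.0, "B": 1.0, "C": 1.0, "D": 1.0}, shared_ions=10.0))
print("competition:", grow_synapses({"A": 2.0, "B": 1.0, "C": 1.0, "D": 1.0}, shared_ions=2.0))
```

In the actual devices the “pool” is the supply of mobile lithium ions shared across the connected memristors, playing the role the plasticity-related proteins play between biological synapses.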

Lu’s team is currently building networks of memristors like these to explore their potential for neuromorphic computing, which mimics the circuitry of the brain.

Here’s a link to and a citation for the paper,

Ionic modulation and ionic coupling effects in MoS2 devices for neuromorphic computing by Xiaojian Zhu, Da Li, Xiaogan Liang, & Wei D. Lu. Nature Materials (2018) DOI: https://doi.org/10.1038/s41563-018-0248-5 Published 17 December 2018

This paper is behind a paywall.

The researchers have made images illustrating their work available,

A schematic of the molybdenum disulfide layers with lithium ions between them. On the right, the simplified inset shows how the molybdenum disulfide changes its atom arrangements in the presence and absence of the lithium atoms, between a metal (1T’ phase) and semiconductor (2H phase), respectively. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

A diagram of a synapse receiving a signal from one of the connecting neurons. This signal activates the generation of plasticity-related proteins (PRPs), which help a synapse to grow. They can migrate to other synapses, which enables multiple synapses to grow at once. The new device is the first to mimic this process directly, without the need for software or complicated circuits. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

An electron microscope image showing the rectangular gold (Au) electrodes representing signalling neurons and the rounded electrode representing the receiving neuron. The material of molybdenum disulfide layered with lithium connects the electrodes, enabling the simulation of cooperative growth among synapses. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

That’s all folks.