
How do memristors retain information without a power source? A mystery solved

A September 10, 2024 news item on ScienceDaily provides a technical explanation of how memristors can retain information without a power source,

Phase separation, when molecules part like oil and water, works alongside oxygen diffusion to help memristors — electrical components that store information using electrical resistance — retain information even after the power is shut off, according to a University of Michigan-led study recently published in Matter.

A September 11, 2024 University of Michigan press release (also on EurekAlert but published September 10, 2024), which originated the news item, delves further into the research,

Up to this point, researchers have not fully understood how memristors retain information without a power source, a property known as nonvolatile memory, because models and experiments do not match up.

“While experiments have shown devices can retain information for over 10 years, the models used in the community show that information can only be retained for a few hours,” said Jingxian Li, U-M doctoral graduate of materials science and engineering and first author of the study.

To better understand the underlying phenomenon driving nonvolatile memristor memory, the researchers focused on a device known as resistive random access memory, or RRAM, an alternative to the volatile RAM used in classical computing that is particularly promising for energy-efficient artificial intelligence applications.

The specific RRAM studied, a filament-type valence change memory (VCM), sandwiches an insulating tantalum oxide layer between two platinum electrodes. When a certain voltage is applied to the platinum electrodes, a conductive filament of tantalum ions forms a bridge through the insulator between the electrodes, which allows electricity to flow and puts the cell in a low resistance state representing a “1” in binary code. If a different voltage is applied, returning oxygen atoms react with the tantalum ions, “rusting” the conductive bridge and dissolving the filament, which returns the cell to a high resistance state representing a “0”.
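For readers who think in code, the set/reset cycle described above can be captured in a minimal sketch. This is an illustrative toy model only: the threshold voltages and resistance values below are invented round numbers, not parameters from the paper.

```python
# Toy model of a filament-type VCM (RRAM) cell. All numbers are
# hypothetical placeholders chosen for illustration.

class VCMCell:
    V_SET = 1.5      # assumed voltage that forms the tantalum-ion filament
    V_RESET = -1.5   # assumed voltage that "rusts" (dissolves) the filament
    R_LOW = 1e3      # low resistance state, ohms -> binary "1"
    R_HIGH = 1e6     # high resistance state, ohms -> binary "0"

    def __init__(self):
        self.resistance = self.R_HIGH  # start with no filament

    def apply_voltage(self, v):
        if v >= self.V_SET:
            self.resistance = self.R_LOW   # filament bridges the oxide
        elif v <= self.V_RESET:
            self.resistance = self.R_HIGH  # oxygen re-oxidizes the bridge

    def read(self):
        # A small read voltage senses the state without disturbing it
        return 1 if self.resistance == self.R_LOW else 0

cell = VCMCell()
cell.apply_voltage(1.8)    # SET pulse
print(cell.read())         # -> 1
cell.apply_voltage(-1.8)   # RESET pulse
print(cell.read())         # -> 0
```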

It was once thought that RRAM retains information over time because oxygen is too slow to diffuse back. However, a series of experiments revealed that previous models had neglected the role of phase separation.

“In these devices, oxygen ions prefer to be away from the filament and will never diffuse back, even after an indefinite period of time. This process is analogous to how a mixture of water and oil will not mix, no matter how much time we wait, because they have lower energy in a de-mixed state,” said Yiyang Li, U-M assistant professor of materials science and engineering and senior author of the study.

To test retention time, the researchers sped up their experiments by increasing the temperature. One hour at 250°C is equivalent to about 100 years at 85°C—the typical operating temperature of a computer chip.
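That equivalence is a standard Arrhenius accelerated-aging calculation. As a rough sketch, the snippet below reproduces the arithmetic; note that the ~1.34 eV activation energy is back-calculated here from the quoted 1 hour ≈ 100 years figure, not a value reported in the press release.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(e_a, t_use_c, t_stress_c):
    """Arrhenius factor by which aging speeds up at the stress temperature."""
    t_use = t_use_c + 273.15        # convert degrees C to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((e_a / K_B) * (1.0 / t_use - 1.0 / t_stress))

# An activation energy of ~1.34 eV reproduces the article's equivalence;
# it is inferred from the quoted numbers, not taken from the paper.
af = acceleration_factor(e_a=1.34, t_use_c=85, t_stress_c=250)
print(f"1 hour at 250 C is about {af / 8760:.0f} years at 85 C")  # ~100 years
```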

Using the extremely high resolution of atomic force microscopy, the researchers imaged filaments, which measure only about five nanometers (20 atoms) wide, forming within the one-micron-wide RRAM device.

“We were surprised that we could find the filament in the device. It’s like finding a needle in a haystack,” Li said. 

The research team found that different sized filaments yielded different retention behavior. Filaments smaller than about 5 nanometers dissolved over time, whereas filaments larger than 5 nanometers strengthened over time. The size-based difference cannot be explained by diffusion alone.
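That size threshold is what a classical nucleation-style argument predicts: a surface-energy cost dominates for small filaments while a bulk de-mixing gain dominates for large ones, giving a critical size below which filaments shrink and above which they grow. The sketch below plots a generic free-energy curve of that form; the coefficients are arbitrary illustrative values, not fitted parameters from the study.

```python
import numpy as np

# Generic nucleation-style free energy for a filament of radius r:
# a surface term (~r, energy cost) competes with a bulk de-mixing
# term (~r**2, energy gain). The coefficients are arbitrary, chosen
# only so the maximum lands near r = 2.5 nm (a ~5 nm diameter,
# matching the threshold reported above).

def free_energy(r_nm, surface_coeff=1.0, bulk_coeff=0.2):
    return surface_coeff * r_nm - bulk_coeff * r_nm**2

radii = np.linspace(0.1, 5.0, 50)
r_crit = radii[np.argmax(free_energy(radii))]
print(f"critical radius is about {r_crit:.1f} nm "
      "(smaller filaments dissolve, larger ones strengthen)")
```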

Together, experimental results and models incorporating thermodynamic principles showed that the formation and stability of conductive filaments depend on phase separation.

The research team leveraged phase separation to extend memory retention from one day to well over 10 years in a rad-hard memory chip—a memory device built to withstand radiation exposure for use in space exploration. 

Other potential applications include in-memory computing for more energy-efficient AI and memory devices for electronic skin—a stretchable electronic interface designed to mimic the sensory capabilities of human skin. Also known as e-skin, this material could be used to provide sensory feedback to prosthetic limbs, create new wearable fitness trackers or help robots develop tactile sensing for delicate tasks.

“We hope that our findings can inspire new ways to use phase separation to create information storage devices,” Li said.

Researchers at Ford Research, Dearborn; Oak Ridge National Laboratory; University at Albany; NY CREATES; Sandia National Laboratories; and Arizona State University, Tempe contributed to this study.

Here’s a link to and a citation for the paper,

Thermodynamic origin of nonvolatility in resistive memory by Jingxian Li, Anirudh Appachar, Sabrina L. Peczonczyk, Elisa T. Harrison, Anton V. Ievlev, Ryan Hood, Dongjae Shin, Sangmin Yoo, Brianna Roest, Kai Sun, Karsten Beckmann, Olya Popova, Tony Chiang, William S. Wahby, Robin B. Jacobs-Gedrim, Matthew J. Marinella, Petro Maksymovych, John T. Heron, Nathaniel Cady, Wei D. Lu, Suhas Kumar, A. Alec Talin, Wenhao Sun, Yiyang Li. Matter DOI: https://doi.org/10.1016/j.matt.2024.07.018 Published online: August 26, 2024

This paper is behind a paywall.

Bad battery, good synapse from Stanford University

A May 4, 2019 news item on ScienceDaily announces the latest advance made by Stanford University and Sandia National Laboratories in the field of neuromorphic (brainlike) computing,

The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

In a paper published online by the journal Science on April 25 [2019], the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

An April 25, 2019 Stanford University news release (also on EurekAlert but published May 3, 2019) by Taylor Kubota, which originated the news item, expands on the theme,

A bad battery, a good synapse

The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between its two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than in a more traditional computer system, where data is processed first and then moved to storage.

Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.
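The appeal of such arrays is that a crossbar of programmable conductances performs a matrix-vector multiplication in place: the stored weights are the conductances, input voltages drive the rows, and each column current sums the products via Ohm’s and Kirchhoff’s laws. Here is a minimal, idealized NumPy sketch of that idea; the conductance values are placeholders, and a real array would also contend with noise, wire resistance and device-to-device variation.

```python
import numpy as np

# Idealized in-memory multiply on a 3-by-3 crossbar: each synapse's
# conductance G[i, j] is a stored weight; the current on column j is
# sum_i G[i, j] * V[i] (Ohm's law plus Kirchhoff's current law).
# All values are illustrative placeholders.

G = np.array([[0.10, 0.05, 0.20],
              [0.02, 0.15, 0.08],
              [0.12, 0.01, 0.30]])   # programmed conductances (siemens)

v_in = np.array([0.5, 1.0, 0.2])     # input voltages on the rows

i_out = G.T @ v_in                   # column currents: the whole
print(i_out)                         # matrix-vector product in one step
```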

In previous tests of an earlier version of this device, the researchers found that its processing and memory action required about one-tenth as much energy as a state-of-the-art computing system needs to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to their speed – without seeing any degradation in their behavior.

“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns, but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”

Here’s a link to and a citation for the paper,

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing by Elliot J. Fuller, Scott T. Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D. James, Matthew J. Marinella, J. Joshua Yang, Alberto Salleo, A. Alec Talin. Science 25 Apr 2019: eaaw5581 DOI: 10.1126/science.aaw5581

This paper is behind a paywall.

For anyone interested in more about brainlike/brain-like/neuromorphic computing/neuromorphic engineering/memristors, use any or all of those terms in this blog’s search engine.