Tag Archives: resistive random access memory (RRAM)

How do memristors retain information without a power source? A mystery solved

A September 10, 2024 news item on ScienceDaily provides a technical explanation of how memristors can retain information without a power source,

Phase separation, when molecules part like oil and water, works alongside oxygen diffusion to help memristors — electrical components that store information using electrical resistance — retain information even after the power is shut off, according to a University of Michigan led study recently published in Matter.

A September 11, 2024 University of Michigan press release (also on EurekAlert but published September 10, 2024), which originated the news item, delves further into the research,

Up to this point, explanations have not fully grasped how memristors retain information without a power source, known as nonvolatile memory, because models and experiments do not match up.

“While experiments have shown devices can retain information for over 10 years, the models used in the community show that information can only be retained for a few hours,” said Jingxian Li, U-M doctoral graduate of materials science and engineering and first author of the study.

To better understand the underlying phenomenon driving nonvolatile memristor memory, the researchers focused on a device known as resistive random access memory or RRAM, an alternative to the volatile RAM used in classical computing that is particularly promising for energy-efficient artificial intelligence applications.

The specific RRAM studied, a filament-type valence change memory (VCM), sandwiches an insulating tantalum oxide layer between two platinum electrodes. When a certain voltage is applied to the platinum electrodes, a conductive filament of tantalum ions forms a bridge through the insulator between the electrodes, which allows electricity to flow and puts the cell in a low resistance state representing a “1” in binary code. If a different voltage is applied, the filament is dissolved as returning oxygen atoms react with the tantalum ions, “rusting” the conductive bridge and returning the cell to a high resistance state, representing a binary code of “0”.
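For readers who want the set/reset logic spelled out, here is a minimal toy model in Python. It is my own sketch rather than anything from the paper: the threshold voltages and resistance values are placeholder assumptions, but the switching behaviour follows the description above.

```python
class VCMCell:
    """Toy filament-type valence change memory (VCM) cell.
    Voltages and resistances are illustrative placeholders only."""

    V_SET = 1.0     # voltage (V) assumed to form the conductive filament
    V_RESET = -1.0  # voltage (V) assumed to "rust" the filament away
    R_LOW = 1e3     # filament present: low resistance state, binary "1"
    R_HIGH = 1e6    # filament dissolved: high resistance state, binary "0"

    def __init__(self):
        self.resistance = self.R_HIGH  # start in the "0" state

    def apply_voltage(self, volts):
        if volts >= self.V_SET:
            self.resistance = self.R_LOW   # filament bridges the insulator
        elif volts <= self.V_RESET:
            self.resistance = self.R_HIGH  # oxygen returns, filament dissolves

    def read(self):
        return 1 if self.resistance == self.R_LOW else 0

cell = VCMCell()
cell.apply_voltage(1.2)   # set: filament forms
print(cell.read())        # -> 1
cell.apply_voltage(-1.2)  # reset: filament "rusts" away
print(cell.read())        # -> 0
```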

It was once thought that RRAM retains information over time because oxygen is too slow to diffuse back. However, a series of experiments revealed that previous models have neglected the role of phase separation. 

“In these devices, oxygen ions prefer to be away from the filament and will never diffuse back, even after an indefinite period of time. This process is analogous to how a mixture of water and oil will not mix, no matter how much time we wait, because they have lower energy in a de-mixed state,” said Yiyang Li, U-M assistant professor of materials science and engineering and senior author of the study.

To test retention time, the researchers sped up experiments by increasing the temperature. One hour at 250°C is equivalent to about 100 years at 85°C—the typical temperature of a computer chip.
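That time-temperature equivalence is characteristic of Arrhenius-style accelerated testing, the standard approach in retention studies. As a back-of-envelope check (my own sketch; the activation energy below is chosen purely to reproduce the stated equivalence, not taken from the paper):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(e_a_ev, t_stress_c, t_use_c):
    """Arrhenius acceleration factor between a stress temperature
    and a use temperature (both given in Celsius)."""
    t_stress = t_stress_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp((e_a_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# An assumed activation energy of ~1.34 eV makes 1 hour at 250°C
# correspond to roughly 100 years at 85°C.
af = acceleration_factor(1.34, 250.0, 85.0)
print(f"1 hour at 250°C is equivalent to about {af / 8760:.0f} years at 85°C")  # 8,760 hours/year
```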

Using the extremely high-resolution imaging of atomic force microscopy, the researchers imaged filaments, which measure only about five nanometers or 20 atoms wide, forming within the one micron wide RRAM device.  

“We were surprised that we could find the filament in the device. It’s like finding a needle in a haystack,” Li said. 

The research team found that different sized filaments yielded different retention behavior. Filaments smaller than about 5 nanometers dissolved over time, whereas filaments larger than 5 nanometers strengthened over time. The size-based difference cannot be explained by diffusion alone.
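That size threshold is what nucleation-style thermodynamics predicts for a phase-separated region: below a critical radius the interface energy penalty dominates and the region shrinks, while above it the bulk term wins and it grows. The sketch below is my own illustration with made-up parameters tuned so the critical radius lands at 5 (in arbitrary “nm”); it is not the free-energy model from the paper.

```python
import numpy as np

GAMMA = 2.5    # interface energy per unit area (arbitrary units)
DG_BULK = 1.0  # bulk free-energy gain per unit volume (arbitrary units)
# Chosen so the critical radius r* = 2 * GAMMA / DG_BULK comes out to 5.

def delta_g(r):
    """Free-energy change of a spherical filament of radius r:
    a negative bulk term competing with a positive interface term."""
    return -(4.0 / 3.0) * np.pi * r**3 * DG_BULK + 4.0 * np.pi * r**2 * GAMMA

print(f"critical radius r* = {2.0 * GAMMA / DG_BULK:.0f}")
for r in (3.0, 8.0):
    # Numerical slope of delta_g at r: positive means shrinking lowers energy.
    slope = (delta_g(r + 1e-6) - delta_g(r - 1e-6)) / 2e-6
    fate = "shrinks (dissolves)" if slope > 0 else "grows (strengthens)"
    print(f"r = {r:.0f}: filament {fate}")
```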

Together, experimental results and models incorporating thermodynamic principles showed the formation and stability of conductive filaments depend on phase separation. 

The research team leveraged phase separation to extend memory retention from one day to well over 10 years in a rad-hard memory chip—a memory device built to withstand radiation exposure for use in space exploration. 

Other applications include in-memory computing for more energy efficient AI applications or memory devices for electronic skin—a stretchable electronic interface designed to mimic the sensory capabilities of human skin. Also known as e-skin, this material could be used to provide sensory feedback to prosthetic limbs, create new wearable fitness trackers or help robots develop tactile sensing for delicate tasks.

“We hope that our findings can inspire new ways to use phase separation to create information storage devices,” Li said.

Researchers at Ford Research, Dearborn; Oak Ridge National Laboratory; University at Albany; NY CREATES; Sandia National Laboratories; and Arizona State University, Tempe contributed to this study.

Here’s a link to and a citation for the paper,

Thermodynamic origin of nonvolatility in resistive memory by Jingxian Li, Anirudh Appachar, Sabrina L. Peczonczyk, Elisa T. Harrison, Anton V. Ievlev, Ryan Hood, Dongjae Shin, Sangmin Yoo, Brianna Roest, Kai Sun, Karsten Beckmann, Olya Popova, Tony Chiang, William S. Wahby, Robin B. Jacobs-Godrim, Matthew J. Marinella, Petro Maksymovych, John T. Heron, Nathaniel Cady, Wei D. Lu, Suhas Kumar, A. Alec Talin, Wenhao Sun, Yiyang Li. Matter DOI: https://doi.org/10.1016/j.matt.2024.07.018 Published online: August 26, 2024

This paper is behind a paywall.

Two-dimensional material stacks into multiple layers to build a memory cell for longer lasting batteries

This research comes from Purdue University (US). The December announcement seemed particularly timely since battery-powered gifts are popular at Christmas, but since it could be many years before this work is commercialized, you may want to tuck it away for future reference. Also, readers familiar with memristors might see a resemblance to the memory cells mentioned in the following excerpt. From a December 13, 2018 news item on Nanowerk,

The more objects we make “smart,” from watches to entire buildings, the greater the need for these devices to store and retrieve massive amounts of data quickly without consuming too much power.

Millions of new memory cells could be part of a computer chip and provide that speed and energy savings, thanks to the discovery of a previously unobserved functionality in a material called molybdenum ditelluride.

The two-dimensional material stacks into multiple layers to build a memory cell. Researchers at Purdue University engineered this device in collaboration with the National Institute of Standards and Technology (NIST) and Theiss Research Inc.

A December 13, 2018 Purdue University news release by Kayla Wiles, which originated the news item, describes the work in more detail,

Chip-maker companies have long called for better memory technologies to enable a growing network of smart devices. One of these next-generation possibilities is resistive random access memory, or RRAM for short.

In RRAM, an electrical current is typically driven through a memory cell made up of stacked materials, creating a change in resistance that records data as 0s and 1s in memory. The sequence of 0s and 1s among memory cells identifies pieces of information that a computer reads to perform a function and then store into memory again.

A material would need to be robust enough to store and retrieve data at least trillions of times, but the materials used to date have been too unreliable, so RRAM hasn’t yet been available for wide-scale use on computer chips.

Molybdenum ditelluride could potentially last through all those cycles.

“We haven’t yet explored system fatigue using this new material, but our hope is that it is both faster and more reliable than other approaches due to the unique switching mechanism we’ve observed,” said Joerg Appenzeller, Purdue University’s Barry M. and Patricia L. Epstein Professor of Electrical and Computer Engineering and the scientific director of nanoelectronics at the Birck Nanotechnology Center.

Molybdenum ditelluride allows a system to switch more quickly between 0 and 1, potentially increasing the rate of storing and retrieving information. This is because when an electric field is applied to the cell, atoms are displaced by a tiny distance, resulting in a state of high resistance, noted as 0, or a state of low resistance, noted as 1, which can occur much faster than switching in conventional RRAM devices.

“Because less power is needed for these resistive states to change, a battery could last longer,” Appenzeller said.

In a computer chip, each memory cell would be located at the intersection of wires, forming a memory array called cross-point RRAM.
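In other words, each cell sits where a wordline (row) crosses a bitline (column) and is addressed by selecting that pair of wires. Here is a toy sketch of the addressing scheme, with illustrative resistance values of my own rather than Purdue’s device parameters:

```python
class CrosspointArray:
    """Toy cross-point RRAM array: one resistive cell sits at each
    wordline/bitline intersection."""

    R_LOW, R_HIGH = 1e3, 1e6  # illustrative "1" and "0" resistances (ohms)

    def __init__(self, rows, cols):
        self.r = [[self.R_HIGH] * cols for _ in range(rows)]  # all cells "0"

    def write(self, row, col, bit):
        # Selecting one wordline and one bitline addresses a single cell.
        self.r[row][col] = self.R_LOW if bit else self.R_HIGH

    def read(self, row, col):
        return 1 if self.r[row][col] == self.R_LOW else 0

array = CrosspointArray(4, 4)
array.write(2, 3, 1)
print(array.read(2, 3), array.read(0, 0))  # -> 1 0
```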

Appenzeller’s lab wants to utilize a library of novel electronic materials fabricated at NIST to explore building a stacked memory cell that also incorporates the other main components of a computer chip: “logic,” which processes data, and “interconnects,” the wires that transfer electrical signals.

“Logic and interconnects drain battery too, so the advantage of an entirely two-dimensional architecture is more functionality within a small space and better communication between memory and logic,” Appenzeller said.

Two U.S. patent applications have been filed for this technology through the Purdue Office of Technology Commercialization.

The work received financial support from the Semiconductor Research Corporation through the NEW LIMITS Center (led by Purdue University), NIST, the U.S. Department of Commerce and the Material Genome Initiative.

Here’s a link to and a citation for the paper,

Electric-field induced structural transition in vertical MoTe2- and Mo1–xWxTe2-based resistive memories by Feng Zhang, Huairuo Zhang, Sergiy Krylyuk, Cory A. Milligan, Yuqi Zhu, Dmitry Y. Zemlyanov, Leonid A. Bendersky, Benjamin P. Burton, Albert V. Davydov, & Joerg Appenzeller. Nature Materials volume 18, pages 55–61 (2019) Published: 10 December 2018 DOI: https://doi.org/10.1038/s41563-018-0234-y

This paper is behind a paywall.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued this news release (news item), as it didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. This is written more in the style of a magazine article, so the details take a while to emerge. From a July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200°C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

Changing synaptic connectivity with a memristor

The French have announced some research into memristive devices that mimic both short-term and long-term neural plasticity, according to a Dec. 6, 2016 news item on Nanowerk,

Leti researchers have demonstrated that memristive devices are excellent candidates to emulate synaptic plasticity, the capability of synapses to enhance or diminish their connectivity between neurons, which is widely believed to be the cellular basis for learning and memory.

The breakthrough was presented today [Dec. 6, 2016] at IEDM [International Electron Devices Meeting] 2016 in San Francisco in the paper, “Experimental Demonstration of Short and Long Term Synaptic Plasticity Using OxRAM Multi k-bit Arrays for Reliable Detection in Highly Noisy Input Data”.

Neural systems such as the human brain exhibit various types and time periods of plasticity, e.g. synaptic modifications can last anywhere from seconds to days or months. However, prior research on emulating synaptic plasticity with memristive devices has relied primarily on simplified rules for plasticity and learning.

The project team, which includes researchers from Leti’s sister institute at CEA Tech, List, along with INSERM and Clinatec, proposed an architecture that implements both short- and long-term plasticity (STP and LTP) using RRAM devices.

A Dec. 6, 2016 Laboratoire d’électronique des technologies de l’information (LETI) press release, which originated the news item, elaborates,

“While implementing a learning rule for permanent modifications – LTP, based on spike-timing-dependent plasticity – we also incorporated the possibility of short-term modifications with STP, based on the Tsodyks/Markram model,” said Elisa Vianello, Leti non-volatile memories and cognitive computing specialist/research engineer. “We showed the benefits of utilizing both kinds of plasticity with visual pattern extraction and decoding of neural signals. LTP allows our artificial neural networks to learn patterns, and STP makes the learning process very robust against environmental noise.”
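To unpack the two rules named in that quote: spike-timing-dependent plasticity (STDP) makes a lasting weight change whose sign and size depend on the relative timing of pre- and post-synaptic spikes, while the Tsodyks/Markram model describes transient efficacy changes, facilitation and depression, that relax back over time. Here is a textbook-style sketch of both rules with standard parameter choices of my own; it is not Leti’s OxRAM implementation.

```python
import math

# --- Long-term plasticity: pair-based STDP ---
A_PLUS, A_MINUS = 0.01, 0.012  # potentiation / depression amplitudes (assumed)
TAU_PLUS = TAU_MINUS = 20.0    # STDP time constants in ms (assumed)

def stdp_dw(dt_ms):
    """Weight change for a spike pair with dt = t_post - t_pre."""
    if dt_ms > 0:  # pre before post: potentiate (LTP)
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)  # post before pre: depress

# --- Short-term plasticity: Tsodyks/Markram model ---
U, TAU_FACIL, TAU_REC = 0.2, 50.0, 200.0  # assumed illustrative parameters

def tm_efficacies(spike_times_ms):
    """Relative efficacy u*x transmitted by each presynaptic spike."""
    u, x, last_t = 0.0, 1.0, None
    out = []
    for t in spike_times_ms:
        if last_t is not None:
            dt = t - last_t
            u *= math.exp(-dt / TAU_FACIL)                 # facilitation decays
            x = 1.0 - (1.0 - x) * math.exp(-dt / TAU_REC)  # resources recover
        u += U * (1.0 - u)  # each spike boosts utilization (facilitation)
        out.append(u * x)   # efficacy actually transmitted by this spike
        x -= u * x          # each spike depletes resources (depression)
        last_t = t
    return out

print(stdp_dw(+10.0), stdp_dw(-10.0))           # LTP vs. LTD weight changes
print(tm_efficacies([0.0, 20.0, 40.0, 400.0]))  # transient rise, then recovery
```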

Resistive random-access memory (RRAM) devices coupled with a spike-coding scheme are key to implementing unsupervised learning with minimal hardware footprint and low power consumption. Embedding neuromorphic learning into low-power devices could enable the design of autonomous systems, such as a brain-machine interface that makes decisions based on real-time, on-line processing of in-vivo recorded biological signals. Biological data are intrinsically highly noisy, and the proposed combined LTP and STP learning rule is a powerful technique to improve the detection/recognition rate. This approach may enable the design of autonomous implantable devices for rehabilitation purposes.

Leti, which has worked on RRAM to develop hardware neuromorphic architectures since 2010, is the coordinator of the H2020 [Horizon 2020] European project NeuRAM3. That project is working on fabricating a chip with architecture that supports state-of-the-art machine-learning algorithms and spike-based learning mechanisms.

That’s it, folks.

Artificial synapse rivals biological synapse in energy consumption

How can we make computers more like biological brains, which do so much work while using so little power? It’s a question scientists from many countries are trying to answer, and it seems South Korean scientists are proposing an answer. From a June 20, 2016 news item on Nanowerk,

Creation of an artificial intelligence system that fully emulates the functions of a human brain has long been a dream of scientists. A brain has many functions superior to those of supercomputers, even though it is light, occupies a small volume, and consumes extremely little energy. Constructing an artificial neural network that emulates the brain would require a huge number (about 10¹⁴) of synapses.

Most recently, great efforts have been made to realize synaptic functions in single electronic devices such as resistive random access memory (RRAM), phase change memory (PCM), conductive bridges, and synaptic transistors. Artificial synapses based on highly aligned nanostructures are still desired for the construction of a highly-integrated artificial neural network.

Prof. Tae-Woo Lee, research professor Wentao Xu, and Dr. Sung-Yong Min with the Dept. of Materials Science and Engineering at POSTECH [Pohang University of Science & Technology, South Korea] have succeeded in fabricating an organic nanofiber (ONF) electronic device that emulates not only the important working principles and energy consumption of biological synapses but also the morphology. …

A June 20, 2016 Pohang University of Science & Technology (POSTECH) news release on EurekAlert, which originated the news item, describes the work in more detail,

The morphology of ONFs is very similar to that of nerve fibers, which form crisscrossing grids to enable the high memory density of a human brain. In particular, based on the e-Nanowire printing technique, highly-aligned ONFs can be mass-produced with precise control over alignment and dimension. This morphology potentially enables the future construction of high-density memory for a neuromorphic system.

Important working principles of a biological synapse have been emulated, such as paired-pulse facilitation (PPF), short-term plasticity (STP), long-term plasticity (LTP), spike-timing dependent plasticity (STDP), and spike-rate dependent plasticity (SRDP). Most notably, the energy consumption of the device can be reduced to a femtojoule level per synaptic event, a value orders of magnitude lower than in previous reports and one that rivals that of a biological synapse. In addition, the organic artificial synapse devices not only provide a new research direction for neuromorphic electronics but may even open a new era of organic electronics.
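To put “a femtojoule level per synaptic event” in perspective, here is a quick back-of-envelope calculation of my own, using the brain-scale synapse count quoted earlier and an assumed average event rate:

```python
SYNAPSES = 1e14           # approximate synapse count cited in the excerpt above
RATE_HZ = 1.0             # assumed average events per synapse per second
ENERGY_PER_EVENT = 1e-15  # joules: "femtojoule level" per synaptic event

power_watts = SYNAPSES * RATE_HZ * ENERGY_PER_EVENT
print(f"{power_watts:.1f} W")  # ~0.1 W for brain-scale activity at 1 fJ/event
```

Under those assumptions, a brain-scale network would dissipate only a fraction of a watt in its synapses, which is why the femtojoule figure matters for the applications below.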

This technology could lead to a leap in brain-inspired electronics in terms of both memory density and energy consumption. The artificial synapse developed by Prof. Lee’s research team will provide important potential applications to neuromorphic computing systems and artificial intelligence systems for autonomous cars (or self-driving cars), analysis of big data, cognitive systems, robot control, medical diagnosis, stock trading analysis, remote sensing, and other smart human-interactive systems and machines in the future.

Here’s a link to and a citation for the paper,

Organic core-sheath nanowire artificial synapses with femtojoule energy consumption by Wentao Xu, Sung-Yong Min, Hyunsang Hwang, and Tae-Woo Lee. Science Advances  17 Jun 2016: Vol. 2, no. 6, e1501326 DOI: 10.1126/sciadv.1501326

This paper is open access.