Tag Archives: A. Alec Talin

How do memristors retain information without a power source? A mystery solved

A September 10, 2024 news item on ScienceDaily provides a technical explanation of how memristors can retain information without a power source,

Phase separation, when molecules part like oil and water, works alongside oxygen diffusion to help memristors — electrical components that store information using electrical resistance — retain information even after the power is shut off, according to a University of Michigan led study recently published in Matter.

A September 11, 2024 University of Michigan press release (also on EurekAlert but published September 10, 2024), which originated the news item, delves further into the research,

Up to this point, explanations have not fully grasped how memristors retain information without a power source, known as nonvolatile memory, because models and experiments do not match up.

“While experiments have shown devices can retain information for over 10 years, the models used in the community show that information can only be retained for a few hours,” said Jingxian Li, U-M doctoral graduate of materials science and engineering and first author of the study.

To better understand the underlying phenomenon driving nonvolatile memristor memory, the researchers focused on a device known as resistive random access memory, or RRAM, an alternative to the volatile RAM used in classical computing that is particularly promising for energy-efficient artificial intelligence applications.

The specific RRAM studied, a filament-type valence change memory (VCM), sandwiches an insulating tantalum oxide layer between two platinum electrodes. When a certain voltage is applied to the platinum electrodes, a conductive filament of tantalum ions forms a bridge through the insulator between the electrodes, which allows electricity to flow and puts the cell in a low-resistance state representing a “1” in binary code. If a different voltage is applied, the filament dissolves as returning oxygen atoms react with the tantalum ions, “rusting” the conductive bridge and returning the cell to a high-resistance state, representing a binary “0”.
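
To make that switching picture concrete, here is a minimal, purely illustrative Python sketch that treats a filament-type cell as a two-state device. The threshold voltages and resistance values are hypothetical placeholders, not parameters from the study.

```python
# Toy filament-type RRAM (VCM) cell: a pulse above the set threshold "forms"
# the filament (low resistance, binary 1); a pulse below the reset threshold
# "dissolves" it (high resistance, binary 0). All values are hypothetical.
class ToyRRAMCell:
    V_SET = 1.0      # V, assumed set threshold
    V_RESET = -1.0   # V, assumed reset threshold
    R_LOW = 1e4      # ohm, low-resistance state ("1")
    R_HIGH = 1e7     # ohm, high-resistance state ("0")

    def __init__(self):
        self.resistance = self.R_HIGH  # start in the "0" state

    def apply_pulse(self, voltage):
        """Switch only when the pulse crosses a threshold; otherwise keep the state."""
        if voltage >= self.V_SET:
            self.resistance = self.R_LOW
        elif voltage <= self.V_RESET:
            self.resistance = self.R_HIGH

    def read_bit(self, read_voltage=0.1):
        """Read the stored bit from the current at a small, non-disturbing voltage."""
        current = read_voltage / self.resistance
        return 1 if current > 1e-6 else 0

cell = ToyRRAMCell()
cell.apply_pulse(1.2)    # set: filament bridges the oxide
print(cell.read_bit())   # 1
cell.apply_pulse(-1.2)   # reset: oxygen "rusts" the bridge
print(cell.read_bit())   # 0
```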

It was once thought that RRAM retains information over time because oxygen is too slow to diffuse back. However, a series of experiments revealed that previous models have neglected the role of phase separation. 

“In these devices, oxygen ions prefer to be away from the filament and will never diffuse back, even after an indefinite period of time. This process is analogous to how a mixture of water and oil will not mix, no matter how much time we wait, because they have lower energy in a de-mixed state,” said Yiyang Li, U-M assistant professor of materials science and engineering and senior author of the study.

To test retention time, the researchers sped up experiments by increasing the temperature. One hour at 250°C is equivalent to about 100 years at 85°C—the typical temperature of a computer chip.
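
The one-hour-at-250°C to 100-years-at-85°C conversion is a standard Arrhenius acceleration. A rough check, assuming a single thermally activated process with an activation energy of about 1.35 eV (an assumed value chosen for illustration; the paper's own analysis may use different numbers):

```python
import math

k_B = 8.617e-5            # Boltzmann constant, eV/K
E_a = 1.35                # eV, assumed activation energy (illustrative only)
T_stress = 250 + 273.15   # K, accelerated-test temperature
T_use = 85 + 273.15       # K, typical chip operating temperature

# Arrhenius acceleration factor: how much faster the same thermally activated
# process runs at the stress temperature than at the use temperature.
AF = math.exp((E_a / k_B) * (1 / T_use - 1 / T_stress))

years_per_stress_hour = AF / (24 * 365)
print(f"acceleration factor: {AF:.2e}")
print(f"1 hour at 250 C is roughly {years_per_stress_hour:.0f} years at 85 C")
```

With that assumed barrier, one stress hour maps onto roughly a century of normal operation, consistent with the equivalence quoted above.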

Using the extremely high-resolution imaging of atomic force microscopy, the researchers imaged filaments, which measure only about five nanometers, or 20 atoms, wide, forming within the one-micron-wide RRAM device.

“We were surprised that we could find the filament in the device. It’s like finding a needle in a haystack,” Li said. 

The research team found that different sized filaments yielded different retention behavior. Filaments smaller than about 5 nanometers dissolved over time, whereas filaments larger than 5 nanometers strengthened over time. The size-based difference cannot be explained by diffusion alone.
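
One way to rationalize a critical size (a textbook bulk-versus-surface free-energy argument, not necessarily the exact model used in the paper) is that below a critical radius the interfacial energy penalty dominates and the filament shrinks, while above it growth lowers the total free energy. A sketch with hypothetical parameters:

```python
import math

# Classical free-energy balance for a small second-phase region, approximated
# here as a sphere of radius r. gamma and delta_g are hypothetical values for
# illustration only; they are not fitted to the devices in the study.
gamma = 0.5     # J/m^2, assumed filament/oxide interfacial energy
delta_g = 4e8   # J/m^3, assumed bulk free-energy gain of the filament phase

def free_energy_change(r):
    """Free-energy change (J) of forming a filament region of radius r (m)."""
    bulk = -(4.0 / 3.0) * math.pi * r**3 * delta_g   # favors growth
    surface = 4.0 * math.pi * r**2 * gamma           # favors shrinkage
    return bulk + surface

# Critical radius: regions smaller than r_c lower their energy by dissolving,
# larger ones lower it by growing (strengthening over time).
r_c = 2.0 * gamma / delta_g
print(f"critical radius with these assumed parameters: {r_c * 1e9:.1f} nm")
```

With these made-up numbers the critical radius comes out at 2.5 nm, a diameter of about 5 nm; the point is only that a size threshold falls naturally out of phase-separation thermodynamics, which diffusion-only models lack.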

Together, experimental results and models incorporating thermodynamic principles showed the formation and stability of conductive filaments depend on phase separation. 

The research team leveraged phase separation to extend memory retention from one day to well over 10 years in a rad-hard memory chip—a memory device built to withstand radiation exposure for use in space exploration. 

Other applications include in-memory computing for more energy efficient AI applications or memory devices for electronic skin—a stretchable electronic interface designed to mimic the sensory capabilities of human skin. Also known as e-skin, this material could be used to provide sensory feedback to prosthetic limbs, create new wearable fitness trackers or help robots develop tactile sensing for delicate tasks.

“We hope that our findings can inspire new ways to use phase separation to create information storage devices,” Li said.

Researchers at Ford Research, Dearborn; Oak Ridge National Laboratory; University at Albany; NY CREATES; Sandia National Laboratories; and Arizona State University, Tempe contributed to this study.

Here’s a link to and a citation for the paper,

Thermodynamic origin of nonvolatility in resistive memory by Jingxian Li, Anirudh Appachar, Sabrina L. Peczonczyk, Elisa T. Harrison, Anton V. Ievlev, Ryan Hood, Dongjae Shin, Sangmin Yoo, Brianna Roest, Kai Sun, Karsten Beckmann, Olya Popova, Tony Chiang, William S. Wahby, Robin B. Jacobs-Godrim, Matthew J. Marinella, Petro Maksymovych, John T. Heron, Nathaniel Cady, Wei D. Lu, Suhas Kumar, A. Alec Talin, Wenhao Sun, Yiyang Li. Matter DOI: https://doi.org/10.1016/j.matt.2024.07.018 Published online: August 26, 2024

This paper is behind a paywall.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, but that doesn’t become clear until you read the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
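
As a rough mental model of that programming scheme, the sketch below treats the synapse as a three-terminal device whose channel conductance is nudged up or down by gate pulses and then held with no power applied. The step size and bounds are hypothetical, not measured device parameters; only the count of roughly 500 distinct states comes from the paper's abstract (quoted further down).

```python
# Toy three-terminal electrochemical synapse: gate pulses add or remove charge,
# nudging the channel conductance up or down in small steps, and the device
# then retains that conductance without power. Parameter values are assumed.
class ToySynapse:
    def __init__(self, g_min=0.0, g_max=1.0, n_states=500):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / n_states   # one programming pulse per state
        self.g = g_min                           # conductance, arbitrary units

    def pulse(self, n=1):
        """Apply n charging (+) or discharging (-) pulses via the gate terminal."""
        self.g = min(self.g_max, max(self.g_min, self.g + n * self.step))

    def read(self):
        """Reading the channel conductance does not disturb the stored state."""
        return self.g

syn = ToySynapse()
syn.pulse(+100)   # potentiate: strengthen the artificial synapse
syn.pulse(-30)    # depress: weaken it
print(f"stored weight: {syn.read():.3f} (retained with the power off)")
```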

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.
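
The Sandia simulation itself isn't described in enough detail here to reproduce, but the general idea (training a network and then constraining its weights to a finite set of device conductance levels) can be sketched with scikit-learn's small handwritten-digit set. Everything below, including the choice of classifier and the quantization to 500 levels, is an illustration rather than the authors' methodology.

```python
# Illustrative sketch only: train a simple classifier on handwritten digits,
# then quantize its weights to 500 evenly spaced levels as a crude stand-in
# for mapping weights onto an array of ~500-state artificial synapses.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("full-precision accuracy:", model.score(X_test, y_test))

# Snap every weight to the nearest of 500 levels spanning the weight range.
W = model.coef_
levels = np.linspace(W.min(), W.max(), 500)
model.coef_ = levels[np.abs(W[..., None] - levels).argmin(axis=-1)]
print("500-level accuracy:     ", model.score(X_test, y_test))
```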

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
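
That factor of 10,000 is consistent with the figures in the paper's abstract (quoted below): switching the organic device costs under roughly 10 pJ, while a biological synaptic event runs on the order of 1 to 100 fJ. A quick check:

```python
# Sanity check of the energy gap quoted above, using the abstract's figures:
# <10 pJ per device switching event versus ~1-100 fJ per biological synaptic event.
device_energy = 10e-12   # J, ~10 pJ upper bound for one switching event
bio_energy = 1e-15       # J, ~1 fJ, the low end of the biological range

print(f"device / biology energy ratio: {device_energy / bio_energy:,.0f}x")  # 10,000x
```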

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1,2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4,5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates, enabling the integration of neuromorphic functionality in stretchable electronic systems [6,7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Casimir and its reins: engineering nanostructures to control quantum effects

Thank you to whoever wrote the headline for the Oct. 22, 2013 US National Institute of Standards and Technology (NIST) news release, also on EurekAlert, titled: The Reins of Casimir: Engineered Nanostructures Could Offer Way to Control Quantum Effect … Once a Mystery Is Solved, for getting the word ‘reins’ correct.

I can no longer hold back my concern over the fact that there are three words that sound the same but have different meanings, and one of them is often mistakenly used in place of another.

reins

reigns

rains

The first one, reins, refers to the narrow leather straps used to control animals (usually horses), as per this picture. It’s also used as a verb to indicate a situation where control must be exerted, e.g., the spending must be reined in.

Reining Sliding Stop Mannheim Maimarkt 2007 Date 01.05.2007 Credit: AllX [downloaded from http://en.wikipedia.org/wiki/File:Reining_slidingstop.jpg]

The second word, ‘reigns’, usually references people like these,

“Queen Elizabeth II greets employees on her walk from NASA’s Goddard Space Flight Center mission control to a reception in the center’s main auditorium in Greenbelt, Maryland where she was presented with a framed Hubble image by Congressman Steny Hoyer and Senator Barbara Mikulski. Queen Elizabeth II and her husband, Prince Philip, Duke of Edinburgh, visited the NASA Goddard Space Flight Center as one of the last stops on their six-day United States visit.” Credit: NASA/Bill Ingalls [downloaded from http://en.wikipedia.org/wiki/File:Elizabeth_II_greets_NASA_GSFC_employees,_May_8,_2007_edit.jpg]

 And,

Thailand’s King Bhumibol Adulyadej waves to well-wishers during a concert at Siriraj hospital in Bangkok on September 29, 2010. Credit: Government of Thailand [downloaded from http://en.wikipedia.org/wiki/File:King_Bhumibol_Adulyadej_2010-9-29.jpg]

Kings, queens, etc., reign over or rule their subjects, or they have reigns, i.e., the periods during which they hold the position of queen or king. There are also uses such as the one found in the song title ‘Love Reign O’er Me’ (Pete Townshend).

I’ve lost count of the times I’ve seen ‘reigns’ used in place of ‘reins’, the worst part being that I’ve caught myself making the mistake. So, a heartfelt thank you to the NIST news release writer for getting it right. As for the other word, ‘rains’, neither I nor anyone else seems to make that mistake (so far as I’ve seen).

Now on to the news,

You might think that a pair of parallel plates hanging motionless in a vacuum just a fraction of a micrometer away from each other would be like strangers passing in the night—so close but destined never to meet. Thanks to quantum mechanics, you would be wrong.

Scientists working to engineer nanoscale machines know this only too well as they have to grapple with quantum forces and all the weirdness that comes with them. These quantum forces, most notably the Casimir effect, can play havoc if you need to keep closely spaced surfaces from coming together.

Controlling these effects may also be necessary for making small mechanical parts that never stick to each other, for building certain types of quantum computers, and for studying gravity at the microscale.

In trying to solve the problem of keeping closely spaced surfaces from coming together, the scientists uncovered another problem,

One of the insights of quantum mechanics is that no space, not even outer space, is ever truly empty. It’s full of energy in the form of quantum fluctuations, including fluctuating electromagnetic fields that seemingly come from nowhere and disappear just as fast.

Some of this energy, however, just isn’t able to “fit” in the submicrometer space between a pair of electromechanical contacts. More energy on the outside than on the inside results in a kind of “pressure” called the Casimir force, which can be powerful enough to push the contacts together and stick.
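
For the idealized case of two perfectly conducting, parallel plates, the textbook expression for this pressure is P = π²ħc / (240 d⁴), where d is the gap. The short sketch below evaluates it at a few gaps just to give a sense of scale; the actual NIST experiment, with a gold-coated sphere facing a nanostructured surface, is far from this idealized geometry.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive Casimir pressure (Pa) between ideal parallel plates a gap d (m) apart."""
    return math.pi**2 * hbar * c / (240.0 * d**4)

for d_nm in (100, 200, 500):
    d = d_nm * 1e-9
    print(f"gap {d_nm:3d} nm -> pressure {casimir_pressure(d):.3f} Pa")
```

The steep 1/d⁴ dependence is why the effect matters for nanoscale machines with sub-micrometer gaps and is negligible at everyday scales.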

Prevailing theory does a good job describing the Casimir force between featureless, flat surfaces and even between most smoothly curved surfaces. However, according to NIST researcher and co-author of the paper, Vladimir Aksyuk, existing theory fails to predict the interactions they observed in their experiment.

“In our experiment, we measured the Casimir attraction between a gold-coated sphere and flat gold surfaces patterned with rows of periodic, flat-topped ridges, each less than 100 nanometers across, separated by somewhat wider gaps with deep sheer-walled sides,” says Aksyuk. “We wanted to see how a nanostructured metallic surface would affect the Casimir interaction, which had never been attempted with a metal surface before. Naturally, we expected that there would be reduced attraction between our grooved surface and the sphere, regardless of the distance between them, because the top of the grooved surface presents less total surface area and less material. However, we knew the Casimir force’s dependence on the surface shape is not that simple.”

Indeed, what they found was more complicated.

According to Aksyuk, when they increased the separation between the surface of the sphere and the grooved surface, the researchers found that the Casimir attraction decreased much more quickly than expected. When they moved the sphere farther away, the force fell by a factor of two below the theoretically predicted value. When they moved the sphere surface close to the ridge tops, the attraction per unit of ridge top surface area increased.

“Theory can account for the stronger attraction, but not for the too-rapid weakening of the force with increased separation,” says Aksyuk. “So this is new territory, and the physics community is going to need to come up with a new model to describe it.”

For the curious, here’s a link to and a citation for the research paper,

Strong Casimir force reduction through metallic surface nanostructuring by Francesco Intravaia, Stephan Koev, Il Woong Jung, A. Alec Talin, Paul S. Davids, Ricardo S. Decca, Vladimir A. Aksyuk, Diego A. R. Dalvit, & Daniel López. Nature Communications 4, Article number: 2515 doi:10.1038/ncomms3515 Published 27 September 2013.

This article is open access.