Tag Archives: Alberto Salleo

Guide for memristive hardware design

An August 15, 2022 news item on ScienceDaily announces a guide for memristive hardware design,

They are many times faster than flash memory and require significantly less energy: memristive memory cells could revolutionize the energy efficiency of neuromorphic [brainlike] computers. In these computers, which are modeled on the way the human brain works, memristive cells function like artificial synapses. Numerous groups around the world are working on the use of corresponding neuromorphic circuits — but often with a lack of understanding of how they work and with faulty models. Jülich researchers have now summarized the physical principles and models in a comprehensive review article in the renowned journal Advances in Physics.

An August 15, 2022 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, describes two papers designed to help researchers better understand and design memristive hardware,

Certain tasks – such as recognizing patterns and language – are performed highly efficiently by a human brain, requiring only about one ten-thousandth of the energy of a conventional, so-called “von Neumann” computer. One of the reasons lies in the structural differences: In a von Neumann architecture, there is a clear separation between memory and processor, which requires constant moving of large amounts of data. This is time and energy consuming – the so-called von Neumann bottleneck. In the brain, the computational operation takes place directly in the data memory and the biological synapses perform the tasks of memory and processor at the same time.

In Jülich, scientists have been working for more than 15 years on special data storage devices and components that have properties similar to the synapses in the human brain. So-called memristive memory devices, also known as memristors, are considered extremely fast and energy-saving, and can be miniaturized down to the nanometer range. The functioning of memristive cells is based on a special effect: their electrical resistance is not constant but can be changed, theoretically continuously, and reset again by applying an external voltage. The change in resistance is controlled by the movement of oxygen ions: if these move out of the semiconducting metal oxide layer, the material becomes more conductive and the electrical resistance drops. This change in resistance can be used to store information.
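To make that mechanism concrete, here's a minimal simulation sketch. It's my own illustration with invented parameters, not one of the Jülich compact models: a single state variable stands in for the oxygen-vacancy distribution, an applied voltage drives it up or down, and the cell's resistance follows.

```python
# Minimal sketch of a voltage-controlled memristive cell, in the spirit
# of simple ion-drift models. Illustration only -- not the Jülich JART
# compact model; all parameter values are invented for readability.

import numpy as np

R_ON, R_OFF = 1e3, 1e5   # fully conductive / fully insulating resistance (ohms)
MU = 1.0                 # lumped ion-mobility coefficient (1/(V*s)), illustrative

def simulate(voltages, dt=1e-3, x0=0.1):
    """Integrate the internal state x (0 = insulating, 1 = conductive)."""
    x = x0
    resistance = []
    for v in voltages:
        # Oxygen ions drift under the applied field: a positive voltage
        # grows the conductive region, a negative one shrinks it.
        x = np.clip(x + MU * v * dt, 0.0, 1.0)
        # The cell behaves like conductive and insulating regions in parallel.
        resistance.append(1.0 / (x / R_ON + (1 - x) / R_OFF))
    return np.array(resistance)

t = np.arange(0.0, 2.0, 1e-3)
v = np.where(t < 1.0, 5.0, -5.0)           # 1 s SET pulse, then 1 s RESET pulse
r = simulate(v)
print(f"after SET:   {r[999]:.0f} ohms")   # low-resistance state
print(f"after RESET: {r[-1]:.0f} ohms")    # back to high resistance
```

The storage effect in this picture is simply that x only moves while a voltage is applied; remove the voltage and the resistance stays where it was left.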

The processes that can occur in cells are very complex and vary depending on the material system. Three researchers from the Jülich Peter Grünberg Institute – Prof. Regina Dittmann, Dr. Stephan Menzel, and Prof. Rainer Waser – have therefore compiled their research results in a detailed review article, “Nanoionic memristive phenomena in metal oxides: the valence change mechanism”. They explain in detail the various physical and chemical effects in memristors and shed light on the influence of these effects on the switching properties of memristive cells and their reliability.

“If you look at current research activities in the field of neuromorphic memristor circuits, they are often based on empirical approaches to material optimization,” said Rainer Waser, director at the Peter Grünberg Institute. “Our goal with our review article is to give researchers something to work with in order to enable insight-driven material optimization.” The team of authors worked on the approximately 200-page article for ten years and naturally had to keep incorporating advances in knowledge.

“The analog functioning of memristive cells required for their use as artificial synapses is not the normal case. Usually, there are sudden jumps in resistance, generated by the mutual amplification of ionic motion and Joule heat,” explains Regina Dittmann of the Peter Grünberg Institute. “In our review article, we provide researchers with the necessary understanding of how to change the dynamics of the cells to enable an analog operating mode.”
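That runaway feedback is easy to reproduce in a toy model. The sketch below is my own, with invented parameters: ion hopping is thermally activated, Joule heating raises the local temperature, and faster ion motion lowers the resistance, which in turn changes the dissipated power. Without anything to limit the current, the two effects amplify each other and the resistance collapses abruptly; a series resistor weakens the feedback and yields the gradual, analog transition a synapse needs.

```python
# Toy model (my own, invented parameters) of the feedback described above:
# thermally activated ion motion and Joule heating amplify one another,
# producing abrupt switching unless the current is limited.

import numpy as np

R_ON, R_OFF = 1e3, 1e5   # limiting resistances (ohms)
EA_OVER_K = 2000.0       # activation energy over Boltzmann constant (K)
T_AMB = 300.0            # ambient temperature (K)
K_TH = 5e4               # thermal resistance (K/W), illustrative

def switch(v_applied, r_series, steps=4000, dt=1e-4):
    x = 0.01                                               # nearly insulating start
    trace = []
    for _ in range(steps):
        r_cell = 1.0 / (x / R_ON + (1 - x) / R_OFF)
        v_cell = v_applied * r_cell / (r_cell + r_series)  # voltage divider
        temp = T_AMB + K_TH * v_cell**2 / r_cell           # Joule self-heating
        rate = np.exp(-EA_OVER_K / temp) * v_cell          # Arrhenius ion motion
        x = np.clip(x + rate * dt * 1e3, 0.0, 1.0)
        trace.append(r_cell)
    return np.array(trace)

abrupt = switch(3.0, r_series=0.0)   # self-amplifying: resistance collapses early
analog = switch(3.0, r_series=2e4)   # current-limited: slow, gradual transition
print(f"no series R:   {abrupt[1000]:.0f} -> {abrupt[-1]:.0f} ohms")
print(f"with series R: {analog[1000]:.0f} -> {analog[-1]:.0f} ohms")
```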

“You see time and again that groups simulate their memristor circuits with models that don’t take into account high dynamics of the cells at all. These circuits will never work,” said Stephan Menzel, who leads modeling activities at the Peter Grünberg Institute and has developed powerful compact models that are now in the public domain (www.emrl.de/jart.html). “In our review article, we provide the basics that are extremely helpful for a correct use of our compact models.”

Roadmap neuromorphic computing

The “Roadmap of Neuromorphic Computing and Engineering”, which was published in May 2022, shows how neuromorphic computing can help to reduce the enormous energy consumption of IT globally. In it, researchers from the Peter Grünberg Institute (PGI-7), together with leading experts in the field, have compiled the various technological possibilities, computational approaches, learning algorithms and fields of application. 

According to the study, applications in the field of artificial intelligence, such as pattern recognition or speech recognition, are likely to benefit in a special way from the use of neuromorphic hardware. This is because they are based – much more so than classical numerical computing operations – on the shifting of large amounts of data. Memristive cells make it possible to process these gigantic data sets directly in memory without transporting them back and forth between processor and memory. This could improve the energy efficiency of artificial neural networks by orders of magnitude.

Memristive cells can also be interconnected to form high-density matrices that enable neural networks to learn locally. This so-called edge computing thus shifts computations from the data center to the factory floor, the vehicle, or the home of people in need of care. Thus, monitoring and controlling processes or initiating rescue measures can be done without sending data via a cloud. “This achieves two things at the same time: you save energy, and at the same time, personal data and data relevant to security remain on site,” says Prof. Dittmann, who played a key role in creating the roadmap as editor.
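The “matrices” mentioned here are usually crossbar arrays, and it's worth spelling out why they help. In the sketch below (my own illustration, with arbitrary numbers), input voltages applied to the rows and cell conductances acting as weights yield, via Ohm's and Kirchhoff's laws, column currents equal to a matrix-vector product: the core operation of a neural network layer, computed in the memory itself.

```python
# Sketch of in-memory computing with a memristive crossbar: voltages on
# the rows, conductances as the weights, and the summed column currents
# form a matrix-vector product. Values are arbitrary illustrations.

import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # cell conductances (siemens) = weights
v_in = np.array([0.2, 0.0, 0.5, 0.1])      # input voltages on the four rows (volts)

# Ohm's law gives each cell's current; Kirchhoff's current law sums the
# currents down every column, so each column readout is a dot product.
i_out = v_in @ G                           # column currents (amperes)
print(i_out)
```

No data leaves the array during the multiply, which is exactly the von Neumann bottleneck these devices are meant to remove.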

Here’s a link to and a citation for the ‘roadmap’,

2022 roadmap on neuromorphic computing and engineering by Dennis V Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano, Carlo Ricciardi, Shi-Jun Liang, Feng Miao, Mario Lanza, Tyler J Quill, Scott T Keene, Alberto Salleo, Julie Grollier, Danijela Marković, Alice Mizrahi, Peng Yao, J Joshua Yang, Giacomo Indiveri, John Paul Strachan, Suman Datta, Elisa Vianello, Alexandre Valentian, Johannes Feldmann, Xuan Li, Wolfram H P Pernice, Harish Bhaskaran, Steve Furber, Emre Neftci, Franz Scherr, Wolfgang Maass, Srikanth Ramaswamy, Jonathan Tapson, Priyadarshini Panda, Youngeun Kim, Gouhei Tanaka, Simon Thorpe, Chiara Bartolozzi, Thomas A Cleland, Christoph Posch, ShihChii Liu, Gabriella Panuccio, Mufti Mahmud, Arnab Neelim Mazumder, Morteza Hosseini, Tinoosh Mohsenin, Elisa Donati, Silvia Tolu, Roberto Galeazzi, Martin Ejsing Christensen, Sune Holm, Daniele Ielmini and N Pryds. Neuromorphic Computing and Engineering, Volume 2, Number 2 DOI: 10.1088/2634-4386/ac4a83 Published: 20 May 2022 • © 2022 The Author(s)

This paper is open access.

Here’s the most recent paper,

Nanoionic memristive phenomena in metal oxides: the valence change mechanism by Regina Dittmann, Stephan Menzel & Rainer Waser. Advances in Physics, Volume 70 (2021), Issue 2, Pages 155-349 DOI: https://doi.org/10.1080/00018732.2022.2084006 Published online: 06 Aug 2022

This paper is behind a paywall.

2D materials for a computer’s artificial brain synapses

A January 28, 2022 news item on Nanowerk describes some of the latest work on hardware that could enable neuromorphic (brainlike) computing. Note: A link has been removed,

Researchers from KTH Royal Institute of Technology [Sweden] and Stanford University [US] have fabricated a material for computer components that could enable commercially viable computers that mimic the human brain (Advanced Functional Materials, “High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene”).

A January 31, 2022 KTH Royal Institute of Technology press release (also on EurekAlert but published January 28, 2022), which originated the news item, delves further into the research,

Electrochemical random access (ECRAM) memory components made with 2D titanium carbide showed outstanding potential for complementing classical transistor technology, and contributing toward commercialization of powerful computers that are modeled after the brain’s neural network. Such neuromorphic computers can be thousands of times more energy efficient than today’s computers.

These advances in computing are possible because of some fundamental differences from the classic computing architecture in use today, and the ECRAM, a component that acts as a sort of synaptic cell in an artificial neural network, says KTH Associate Professor Max Hamedi.

“Instead of transistors that are either on or off, and the need for information to be carried back and forth between the processor and memory—these new computers rely on components that can have multiple states, and perform in-memory computation,” Hamedi says.

The scientists at KTH and Stanford have focused on testing better materials for building an ECRAM, a component in which switching occurs by inserting ions into an oxidation channel, in a sense similar to our brain, which also works with ions. What has been needed to make these chips commercially viable are materials that overcome the slow kinetics of metal oxides and the poor temperature stability of plastics.
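As a rough mental model of such an ECRAM cell, consider the sketch below. It's my own illustration, not the device model from the paper, and the conductance window and charge-to-conductance factor are invented: a gate-current pulse injects or extracts ions, and the channel conductance shifts in proportion to the injected charge, giving the many small, symmetric analog steps a synaptic weight needs.

```python
# Rough sketch of the ECRAM operating principle described above (my own
# illustration, not the paper's device model; parameter values invented).

G_MIN, G_MAX = 1e-6, 1e-4   # conductance window (siemens)
ALPHA = 1e4                 # conductance change per unit injected charge (S/C)

class EcramCell:
    def __init__(self, g=2e-5):
        self.g = g   # channel conductance = the stored synaptic weight

    def pulse(self, i_gate, duration):
        """Apply a gate-current pulse; injected ion charge ~ i_gate * duration."""
        charge = i_gate * duration
        self.g = min(max(self.g + ALPHA * charge, G_MIN), G_MAX)
        return self.g

cell = EcramCell()
for _ in range(5):                            # potentiate in equal analog steps
    cell.pulse(i_gate=1e-6, duration=1e-3)
print(f"after potentiation: {cell.g:.2e} S")  # 2e-05 + 5 * 1e-05 = 7e-05
cell.pulse(i_gate=-5e-6, duration=1e-3)       # negative pulse extracts ions
print(f"after depression:   {cell.g:.2e} S")  # back to 2e-05
```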

The key material in the ECRAM units that the researchers fabricated is referred to as MXene—a two-dimensional (2D) compound, barely a few atoms thick, consisting of titanium carbide (Ti3C2Tx). The MXene combines the high speed of organic chemistry with the integration compatibility of inorganic materials in a single device operating at the nexus of electrochemistry and electronics, Hamedi says.

Co-author Professor Alberto Salleo at Stanford University, says that MXene ECRAMs combine the speed, linearity, write noise, switching energy, and endurance metrics essential for parallel acceleration of artificial neural networks.

“MXenes are an exciting materials family for this particular application as they combine the temperature stability needed for integration with conventional electronics with the availability of a vast composition space to optimize performance,” Salleo says.

While there are many other barriers to overcome before consumers can buy their own neuromorphic computers, Hamedi says the 2D ECRAMs represent a breakthrough at least in the area of neuromorphic materials, potentially leading to artificial intelligence that can adapt to confusing input and nuance, the way the brain does, with thousands of times smaller energy consumption. This can also enable portable devices capable of much heavier computing tasks without having to rely on the cloud.

Here’s a link to and a citation for the paper,

High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene by Armantas Melianas, Min-A Kang, Armin VahidMohammadi, Tyler James Quill, Weiqian Tian, Yury Gogotsi, Alberto Salleo, Mahiar Max Hamedi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202109970 First published: 21 November 2021

This paper is open access.

Organic neuromorphic electronics

A December 13, 2021 news item on ScienceDaily describes some research from Germany’s Max Planck Institute for Polymer Research,

The human brain works differently from a computer – while the brain works with biological cells and electrical impulses, a computer uses silicon-based transistors. Scientists have equipped a toy robot with a smart and adaptive electrical circuit made of soft organic materials, similar to biological matter. With this bio-inspired approach, they were able to teach the robot to navigate independently through a maze using visual signs for guidance.

A December 13, 2021 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, fills in a few details,

The processor is the brain of a computer – an often-quoted phrase. But processors work fundamentally differently than the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived – for example, via the eyes, ears or sense of touch – is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.

Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven [Eindhoven University of Technology; Netherlands], Stanford [University; California, US], Brescia [University; Italy], Oxford [UK] and KAUST [King Abdullah University of Science and Technology, Saudi Arabia].

“We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.

To achieve the navigation of the robot inside the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path through the maze towards the exit is indicated visually at each intersection. Initially, the robot often misinterprets the visual signs, makes the wrong turning decisions at the intersections and loses its way. When the robot takes these wrong decisions and follows dead-end paths, it is discouraged from repeating them by corrective stimuli. The corrective stimuli, for example when the robot hits a wall, are applied directly to the organic circuit via electrical signals induced by a touch sensor attached to the robot. With each subsequent execution of the experiment, the robot gradually learns to make the right turning decisions at the intersections, i.e. to avoid receiving corrective stimuli, and after a few trials it finds the way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
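Here is how that learning loop might look in software, purely as an analogy of my own; in the actual experiment the learning lives in the organic circuit's conductances, not in code. One adjustable weight per (intersection, turn) pathway; a wrong turn triggers a corrective stimulus that depresses the pathway that caused it.

```python
# Software analogy (my own sketch, not the authors' hardware) of the
# maze-learning loop: each wrong turn triggers a corrective stimulus that
# depresses the pathway which produced it, so across repeated runs the
# correct turn at every intersection wins out.

import random

random.seed(1)
N_INTERSECTIONS = 6
CORRECT = [random.choice("LR") for _ in range(N_INTERSECTIONS)]

# One adaptable "synaptic weight" per (intersection, turn) pathway.
weights = {(i, t): 1.0 for i in range(N_INTERSECTIONS) for t in "LR"}

def run_maze():
    wrong_turns = 0
    for i in range(N_INTERSECTIONS):
        # The stronger pathway decides the turn (ties broken at random).
        turn = max("LR", key=lambda t: (weights[(i, t)], random.random()))
        if turn != CORRECT[i]:
            wrong_turns += 1
            weights[(i, turn)] -= 0.5   # corrective stimulus: depress pathway
    return wrong_turns

for trial in range(1, 6):
    print(f"trial {trial}: {run_maze()} wrong turns")
# The count drops to zero once every wrong pathway has been punished.
```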

“We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge,” Gkoupidenis says.

Here’s a link to and a citation for the paper,

Organic neuromorphic electronics for sensorimotor integration and learning in robotics by Imke Krauhausen, Dimitrios A. Koutsouras, Armantas Melianas, Scott T. Keene, Katharina Lieberth, Hadrien Ledanseur, Rajendar Sheelamanthula, Alexander Giovannitti, Fabrizio Torricelli, Iain Mcculloch, Paul W. M. Blom, Alberto Salleo, Yoeri van de Burgt and Paschalis Gkoupidenis. Science Advances • 10 Dec 2021 • Vol 7, Issue 50 • DOI: 10.1126/sciadv.abl5068

This paper is open access.

A biohybrid artificial synapse that can communicate with living cells

As I noted in my June 16, 2020 posting, we may have more than one kind of artificial brain in our future. This latest work features a biohybrid. From a June 15, 2020 news item on ScienceDaily,

In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].

Now, in a paper published June 15 [2020] in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.

A June 15, 2020 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into this recent work,

How neurons learn

The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.
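A crude way to picture that chain of events in code, entirely my own sketch with invented numbers rather than the device physics from the paper: each burst of neurotransmitter becomes an ion flux that shifts the second electrode's conductive state, and a fraction of every shift is retained as the "learned" memory.

```python
# Crude sketch (my own, invented numbers) of the biohybrid signal chain:
# neurotransmitter release -> electrochemical reaction at electrode 1 ->
# ion flux across the electrolyte trench -> conductance change at
# electrode 2, part of which is retained as the stored "memory".

CONVERSION = 0.1    # neurotransmitter amount -> ion flux, illustrative
RETENTION = 0.3     # fraction of each conductance shift that is preserved

class BiohybridSynapse:
    def __init__(self, g=1.0):
        self.g = g                      # conductive state (arbitrary units)

    def neurotransmitter_burst(self, amount):
        ion_flux = CONVERSION * amount
        transient = self.g + ion_flux   # short-term modulation
        self.g += RETENTION * ion_flux  # long-term, retained change
        return transient

synapse = BiohybridSynapse()
for burst in (1.0, 1.0, 2.0):           # three release events from the cells
    synapse.neurotransmitter_burst(burst)
print(f"retained state after 3 bursts: {synapse.g:.2f}")   # 1.12
```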

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.

To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.

“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”

A first step

This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.

Here’s a link to and a citation for the paper,

A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0703-y Published: 15 June 2020

This paper is behind a paywall.

Bad battery, good synapse from Stanford University

A May 4, 2019 news item on ScienceDaily announces the latest advance made by Stanford University and Sandia National Laboratories in the field of neuromorphic (brainlike) computing,

The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

In a paper published online by the journal Science on April 25 [2019], the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

An April 25, 2019 Stanford University news release (also on EurekAlert but published May 3, 2019) by Taylor Kubota, which originated the news item, expands on the theme,

A bad battery, a good synapse

The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than a more traditional computer system where the data is processed first and then later moved to storage.

Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.

In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to their speed – without seeing any degradation in their behavior.

“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”

Here’s a link to and a citation for the paper,

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing by Elliot J. Fuller, Scott T. Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D. James, Matthew J. Marinella, J. Joshua Yang, Alberto Salleo, A. Alec Talin. Science 25 Apr 2019: eaaw5581 DOI: 10.1126/science.aaw5581

This paper is behind a paywall.

For anyone interested in more about brainlike/brain-like/neuromorphic computing/neuromorphic engineering/memristors, use any or all of those terms in this blog’s search engine.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert). Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, within 1 percent of uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
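A write-and-verify loop is one common way to picture this kind of programming. The sketch below is my own illustration, not the authors' method (they report predicting the required voltage directly to within about 1 percent): small charge and discharge pulses nudge the state until a read shows it within 1 percent of the target, after which, being non-volatile, it simply stays there.

```python
# My own illustrative write-and-verify loop, not the authors' method
# (they predict the required programming voltage directly): apply small
# charge/discharge pulses until the non-volatile state reads back within
# 1 percent of the target.

def program(state, target, step=0.02, tol=0.01, max_pulses=200):
    """Nudge a non-volatile state toward `target` with small pulses."""
    for pulse in range(max_pulses):
        error = target - state
        if abs(error) <= tol * target:
            return state, pulse                 # within 1% -- no refresh needed
        state += max(-step, min(step, error))   # pulse; don't overshoot target
    raise RuntimeError("did not converge")

state, pulses = program(state=0.30, target=0.75)
print(f"reached {state:.3f} in {pulses} pulses")
```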

Testing a network of artificial synapses

Only one artificial synapse has been produced but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the digits with an accuracy between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another, they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
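Some quick arithmetic on what those 500 states buy, with an illustrative quantization helper of my own: one such device holds log2(500) ≈ 9 bits versus 1 bit for an on/off transistor, and a trained neural-network weight can be snapped to the nearest of the 500 levels with a worst-case error of about 0.1 percent of full scale.

```python
# Back-of-envelope arithmetic on the 500 programmable states, plus an
# illustrative helper (my own) that snaps a trained weight to the nearest
# available device level.

import math

N_STATES = 500
print(f"bits per device: {math.log2(N_STATES):.2f}")   # ~8.97, vs 1 for on/off

def quantize(weight, lo=-1.0, hi=1.0, n=N_STATES):
    """Snap a weight in [lo, hi] to the nearest of n evenly spaced levels."""
    idx = round((weight - lo) / (hi - lo) * (n - 1))
    return lo + idx * (hi - lo) / (n - 1)

w = 0.123456
print(f"{w} -> {quantize(w):.6f}")
# Worst-case quantization error is (hi - lo) / (2 * (n - 1)) ~ 0.1% of range.
```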

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1, 2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4, 5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems [6, 7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).