Tag Archives: memristors

Device with brainlike plasticity

A September 1, 2021 news item on ScienceDaily announces a new type of memristor from Texas A&M University (Texas A&M or TAMU) and the National University of Singapore (NUS),

In a discovery published in the journal Nature, an international team of researchers has described a novel molecular device with exceptional computing prowess.

Reminiscent of the plasticity of connections in the human brain, the device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like nerve cells can store memories, the same device can also retain information for future retrieval and processing.

Two of the universities involved in the research have issued news/press releases. I’m going to start with the September 1, 2021 Texas A&M University news release (also on EurekAlert), which originated the news item on ScienceDaily,

“The brain has the remarkable ability to change its wiring around by making and breaking connections between nerve cells. Achieving something comparable in a physical system has been extremely challenging,” said Dr. R. Stanley Williams [emphasis mine], professor in the Department of Electrical and Computer Engineering at Texas A&M University. “We have now created a molecular device with dramatic reconfigurability, which is achieved not by changing physical connections like in the brain, but by reprogramming its logic.”

Dr. T. Venkatesan, director of the Center for Quantum Research and Technology (CQRT) at the University of Oklahoma, Scientific Affiliate at National Institute of Standards and Technology, Gaithersburg, and adjunct professor of electrical and computer engineering at the National University of Singapore, added that their molecular device might in the future help design next-generation processing chips with enhanced computational power and speed, but consuming significantly reduced energy.

Whether it is the familiar laptop or a sophisticated supercomputer, digital technologies face a common nemesis, the von Neumann bottleneck. This delay in computational processing is a consequence of current computer architectures, wherein the memory, containing data and programs, is physically separated from the processor. As a result, computers spend a significant amount of time shuttling information between the two systems, causing the bottleneck. Also, despite extremely fast processor speeds, these units can be idling for extended amounts of time during periods of information exchange.

As an alternative to conventional electronic parts used for designing memory units and processors, devices called memristors offer a way to circumvent the von Neumann bottleneck. Memristors, such as those made of niobium dioxide and vanadium dioxide, transition from being an insulator to a conductor at a set temperature. This property gives these types of memristors the ability to perform computations and store data.

However, despite their many advantages, these metal oxide memristors are made of rare-earth elements and can operate only in restrictive temperature regimes. Hence, there has been an ongoing search for promising organic molecules that can perform a comparable memristive function, said Williams.

Dr. Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science, designed the material used in this work. The compound has a central metal atom (iron) bound to three phenyl azo pyridine organic molecules called ligands.

“This behaves like an electron sponge that can absorb as many as six electrons reversibly, resulting in seven different redox states,” said Sreebrata. “The interconnectivity between these states is the key behind the reconfigurability shown in this work.”

Dr. Sreetosh Goswami, a researcher at the National University of Singapore, devised this project by creating a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a layer of gold on top and gold-infused nanodisc and indium tin oxide at the bottom.

On applying a negative voltage on the device, Sreetosh witnessed a current-voltage profile that was nothing like anyone had seen before. Unlike metal-oxide memristors that can switch from metal to insulator at only one fixed voltage, the organic molecular devices could switch back and forth from insulator to conductor at several discrete sequential voltages.

“So, if you think of the device as an on-off switch, as we were sweeping the voltage more negative, the device first switched from on to off, then off to on, then on to off and then back to on. I’ll say that we were just blown out of our seat,” said Venkatesan. “We had to convince ourselves that what we were seeing was real.”

Sreetosh and Sreebrata investigated the molecular mechanisms underlying the curious switching behavior using an imaging technique called Raman spectroscopy. In particular, they looked for spectral signatures in the vibrational motion of the organic molecule that could explain the multiple transitions. Their investigation revealed that sweeping the voltage negative triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events that caused the molecule to transition between off state and on states.

Next, to describe the extremely complex current-voltage profile of the molecular device mathematically, Williams deviated from the conventional approach of basic physics-based equations. Instead, he described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, a commonplace line of code in several computer programs, particularly digital games.

“Video games have a structure where you have a character that does something, and then something occurs as a result. And so, if you write that out in a computer algorithm, they are if-then-else statements,” said Williams. “Here, the molecule is switching from on to off as a consequence of applied voltage, and that’s when I had the eureka moment to use decision trees to describe these devices, and it worked very well.” 
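For readers who would like to see the decision-tree idea in code, here is a minimal sketch (mine, not the researchers’) of how a chain of if-then-else tests on the applied voltage could stand in for the multiple on/off transitions described above; the voltage thresholds are invented for illustration,

```python
def molecular_state(voltage):
    """Toy decision tree for a multi-threshold molecular switch.

    The thresholds below are made up for illustration; the real device
    in Goswami et al. switches at several discrete negative voltages.
    """
    if voltage > -0.5:          # small bias: device stays conducting
        return "on"
    elif voltage > -1.0:        # first reduction event turns it off
        return "off"
    elif voltage > -1.5:        # second reduction event turns it back on
        return "on"
    elif voltage > -2.0:        # third event: off again
        return "off"
    else:                       # beyond the last threshold: on again
        return "on"

# Sweeping the voltage more negative reproduces the on/off/on/off/on pattern
for v in [-0.2, -0.7, -1.2, -1.7, -2.2]:
    print(f"{v:+.1f} V -> {molecular_state(v)}")
```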

But the researchers went a step further to exploit these molecular devices to run programs for different real-world computational tasks. Sreetosh showed experimentally that their devices could perform fairly complex computations in a single time step and then be reprogrammed to perform another task in the next instant.

“It was quite extraordinary; our device was doing something like what the brain does, but in a very different way,” said Sreetosh. “When you’re learning something new or when you’re deciding, the brain can actually reconfigure and change physical wiring around. Similarly, we can logically reprogram or reconfigure our devices by giving them a different voltage pulse than they’ve seen before.”

Venkatesan noted that it would take thousands of transistors to perform the same computational functions as one of their molecular devices with its different decision trees. Hence, he said their technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited.

Other contributors to the research include Dr. Abhijeet Patra and Dr. Ariando from the National University of Singapore; Dr. Rajib Pramanick and Dr. Santi Prasad Rath from the Indian Association for the Cultivation of Science; Dr. Martin Foltin from Hewlett Packard Enterprise, Colorado; and Dr. Damien Thompson from the University of Limerick, Ireland.

Venkatesan said that this research is indicative of the future discoveries from this collaborative team, which will include the center of nanoscience and engineering at the Indian Institute of Science and the Microsystems and Nanotechnology Division at NIST.

I’ve highlighted R. Stanley Williams because he and his team at HP [Hewlett Packard] Labs helped to kick off current memristor research in 2008 with the publication of two papers as per my April 5, 2010 posting,

In 2008, two memristor papers were published in Nature and Nature Nanotechnology, respectively. In the first (Nature, May 2008 [article still behind a paywall]), a team at HP Labs claimed they had proved the existence of memristors (a fourth member of electrical engineering’s ‘Holy Trinity of the capacitor, resistor, and inductor’). In the second paper (Nature Nanotechnology, July 2008 [article still behind a paywall]), the team reported that they had achieved engineering control.
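As a refresher on why the memristor gets called a fourth circuit element, here is the standard textbook relation (Chua, 1971) between charge and flux linkage; it is general circuit theory, not anything specific to the HP Labs papers,

```latex
% Alongside R = dv/di, C = dq/dv and L = d\varphi/di, the memristor
% supplies the missing pairing of charge q and flux linkage \varphi:
M(q) = \frac{\mathrm{d}\varphi}{\mathrm{d}q}
\qquad\Longrightarrow\qquad
v(t) = M\!\left(q(t)\right)\, i(t)
```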

The novel memory device is based on a molecular system that can transition between on and off states at several discrete sequential voltages. Courtesy: National University of Singapore

There is more technical detail in the September 2, 2021 NUS press release (also on EurekAlert),

Many electronic devices today are dependent on semiconductor logic circuits based on switches hard-wired to perform predefined logic functions. Physicists from the National University of Singapore (NUS), together with an international team of researchers, have developed a novel molecular memristor, or an electronic memory device, that has exceptional memory reconfigurability. 

Unlike hard-wired standard circuits, the molecular device can be reconfigured using voltage to embed different computational tasks. The energy-efficient new technology, which is capable of enhanced computational power and speed, can potentially be used in edge computing, as well as handheld devices and applications with limited power resource.

“This work is a significant breakthrough in our quest to design low-energy computing. The idea of using multiple switching in a single element draws inspiration from how the brain works and fundamentally reimagines the design strategy of a logic circuit,” said Associate Professor Ariando from the NUS Department of Physics who led the research.

The research was first published in the journal Nature on 1 September 2021, and carried out in collaboration with the Indian Association for the Cultivation of Science, Hewlett Packard Enterprise, the University of Limerick, the University of Oklahoma, and Texas A&M University.

Brain-inspired technology

“This new discovery can contribute to developments in edge computing as a sophisticated in-memory computing approach to overcome the von Neumann bottleneck, a delay in computational processing seen in many digital technologies due to the physical separation of memory storage from a device’s processor,” said Assoc Prof Ariando. The new molecular device also has the potential to contribute to designing next generation processing chips with enhanced computational power and speed.

“Similar to the flexibility and adaptability of connections in the human brain, our memory device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like how nerve cells can store memories, the same device can also retain information for future retrieval and processing,” said first author Dr Sreetosh Goswami, Research Fellow from the Department of Physics at NUS.

Research team member Dr Sreebrata Goswami, who was a Senior Research Scientist at NUS and previously Professor at the Indian Association for the Cultivation of Science, conceptualised and designed a molecular system belonging to the chemical family of phenyl azo pyridines that have a central metal atom bound to organic molecules called ligands. “These molecules are like electron sponges that can offer as many as six electron transfers resulting in five different molecular states. The interconnectivity between these states is the key behind the device’s reconfigurability,” explained Dr Sreebrata Goswami.

Dr Sreetosh Goswami created a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a top layer of gold, and a bottom layer of gold-infused nanodisc and indium tin oxide. He observed an unprecedented current-voltage profile upon applying a negative voltage to the device. Unlike conventional metal-oxide memristors that are switched on and off at only one fixed voltage, these organic molecular devices could switch between on-off states at several discrete sequential voltages.

Using an imaging technique called Raman spectroscopy, spectral signatures in the vibrational motion of the organic molecule were observed to explain the multiple transitions. Dr Sreebrata Goswami explained, “Sweeping the negative voltage triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events which caused the molecule to transition between off and on states.”

The researchers described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, which is used in the coding of several computer programs, particularly digital games, as compared to the conventional approach of using basic physics-based equations.

New possibilities for energy-efficient devices

Building on their research, the team used the molecular memory devices to run programs for different real-world computational tasks. As a proof of concept, the team demonstrated that their technology could perform complex computations in a single step, and could be reprogrammed to perform another task in the next instant. An individual molecular memory device could perform the same computational functions as thousands of transistors, making the technology a more powerful and energy-efficient memory option.

“The technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited,” added Assoc Prof Ariando.

The team is in the midst of building new electronic devices incorporating their innovation, and is working with collaborators to conduct simulation and benchmarking relating to existing technologies.

Other contributors to the research paper include Abhijeet Patra and Santi Prasad Rath from NUS, Rajib Pramanick from the Indian Association for the Cultivation of Science, Martin Foltin from Hewlett Packard Enterprise, Damien Thompson from the University of Limerick, T. Venkatesan from the University of Oklahoma, and R. Stanley Williams from Texas A&M University.

Here’s a link to and a citation for the paper,

Decision trees within a molecular memristor by Sreetosh Goswami, Rajib Pramanick, Abhijeet Patra, Santi Prasad Rath, Martin Foltin, A. Ariando, Damien Thompson, T. Venkatesan, Sreebrata Goswami & R. Stanley Williams. Nature volume 597, pages 51–56 (2021) DOI: https://doi.org/10.1038/s41586-021-03748-0 Published 01 September 2021 Issue Date 02 September 2021

This paper is behind a paywall.

Memristors, it’s all about the oxides

I have one research announcement from China and another from the Netherlands, both of which concern memristors and oxides.

China

A May 17, 2021 news item on Nanowerk announces work suggesting that oxide memristors need not rely solely on electrical control but could make more gainful use of light,

Scientists are getting better at making neuron-like junctions for computers that mimic the human brain’s random information processing, storage and recall. Fei Zhuge of the Chinese Academy of Sciences and colleagues reviewed the latest developments in the design of these ‘memristors’ for the journal Science and Technology of Advanced Materials …

Computers apply artificial intelligence programs to recall previously learned information and make predictions. These programs are extremely energy- and time-intensive: typically, vast volumes of data must be transferred between separate memory and processing units. To solve this issue, researchers have been developing computer hardware that allows for more random and simultaneous information transfer and storage, much like the human brain.

Electronic circuits in these ‘neuromorphic’ computers include memristors that resemble the junctions between neurons called synapses. Energy flows through a material from one electrode to another, much like a neuron firing a signal across the synapse to the next neuron. Scientists are now finding ways to better tune this intermediate material so the information flow is more stable and reliable.

I had no success locating the original news release, which originated the news item, but have found this May 17, 2021 news item on eedesignit.com, which provides the remaining portion of the news release.

“Oxides are the most widely used materials in memristors,” said Zhuge. “But oxide memristors have unsatisfactory stability and reliability. Oxide-based hybrid structures can effectively improve this.”

Memristors are usually made of an oxide-based material sandwiched between two electrodes. Researchers are getting better results when they combine two or more layers of different oxide-based materials between the electrodes. When an electrical current flows through the network, it induces ions to drift within the layers. The ions’ movements ultimately change the memristor’s resistance, which is necessary to send or stop a signal through the junction.
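To give a feel for how drifting ions translate into a changing resistance, here is a small Python sketch of the linear ion-drift memristor model that often appears in the literature; the parameter values are placeholders chosen only to show the qualitative set/reset behaviour, not numbers from the review,

```python
# Linear ion-drift memristor model (after Strukov et al., 2008); a rough
# illustration of how ion motion inside the oxide changes the resistance.
# All parameter values are made up, not taken from the review.
R_ON, R_OFF = 100.0, 16_000.0   # resistance when fully doped / undoped (ohms)
K = 1e5                         # lumped drift constant mu*R_ON/D^2 (1/coulomb)
DT = 1e-4                       # time step (s)

def step(w, v):
    """Advance the normalized doped-region width w (0..1) under voltage v."""
    r = R_ON * w + R_OFF * (1.0 - w)          # doped and undoped regions in series
    i = v / r                                 # current through the device
    w = min(max(w + K * i * DT, 0.0), 1.0)    # ions drift along with the current
    return w, r

w = 0.05
for _ in range(500):                          # positive bias: ions drift in, R drops
    w, r = step(w, 1.0)
print("after SET pulse:   R = %.0f ohms" % r)
for _ in range(500):                          # negative bias: ions drift out, R rises
    w, r = step(w, -1.0)
print("after RESET pulse: R = %.0f ohms" % r)
```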

Memristors can be tuned further by changing the compounds used for electrodes or by adjusting the intermediate oxide-based materials. Zhuge and his team are currently developing optoelectronic neuromorphic computers based on optically-controlled oxide memristors. Compared to electronic memristors, photonic ones are expected to have higher operation speeds and lower energy consumption. They could be used to construct next generation artificial visual systems with high computing efficiency.

Now for a picture that accompanied the news release, which follows,

Fig. The all-optically controlled memristor developed for optoelectronic neuromorphic computing (Image by NIMTE)

Here’s the February 7, 2021 Ningbo Institute of Materials Technology and Engineering (NIMTE) press release featuring this work and a more technical description,

A research group led by Prof. ZHUGE Fei at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences (CAS) developed an all-optically controlled (AOC) analog memristor, whose memconductance can be reversibly tuned by varying only the wavelength of the controlling light.

As the next generation of artificial intelligence (AI), neuromorphic computing (NC) emulates the neural structure and operation of the human brain at the physical level, and thus can efficiently perform multiple advanced computing tasks such as learning, recognition and cognition.

Memristors are promising candidates for NC thanks to the feasibility of high-density 3D integration and low energy consumption. Among them, the emerging optoelectronic memristors are competitive by virtue of combining the advantages of both photonics and electronics. However, the reversible tuning of memconductance depends highly on the electric excitation, which has severely limited the development and application of optoelectronic NC.

To address this issue, researchers at NIMTE proposed a bilayered oxide AOC memristor, based on the relatively mature semiconductor material InGaZnO and a memconductance tuning mechanism of light-induced electron trapping and detrapping.

The traditional electrical memristors require strong electrical stimuli to tune their memconductance, leading to high power consumption, a large amount of Joule heat, microstructural change triggered by the Joule heat, and even high crosstalk in memristor crossbars.

On the contrary, the developed AOC memristor does not involve microstructure changes, and can operate upon weak light irradiation with a light power density of only 20 μW cm⁻², which has provided a new approach to overcome the instability of the memristor.

Specifically, the AOC memristor can serve as an excellent synaptic emulator and thus mimic spike-timing-dependent plasticity (STDP) which is an important learning rule in the brain, indicating its potential applications in AOC spiking neural networks for high-efficiency optoelectronic NC.
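Spike-timing-dependent plasticity is easy to state in code. This little sketch (my own illustration, not the NIMTE team’s model) nudges a synaptic weight up or down according to the relative timing of pre- and post-synaptic spikes; the learning rates and time constant are placeholders,

```python
import math

# Pair-based STDP rule: a pre-before-post spike pair strengthens the
# synapse (potentiation), post-before-pre weakens it (depression).
# Constants are placeholders chosen for illustration only.
A_PLUS, A_MINUS = 0.05, 0.055   # learning rates
TAU = 20.0                      # time constant (ms)

def stdp_update(weight, t_pre, t_post):
    """Return the new weight for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:                                  # pre fired first: potentiate
        weight += A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:                                # post fired first: depress
        weight -= A_MINUS * math.exp(dt / TAU)
    return min(max(weight, 0.0), 1.0)           # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)     # causal pair -> weight goes up
w = stdp_update(w, t_pre=30.0, t_post=22.0)     # anti-causal pair -> weight goes down
print(f"final weight: {w:.3f}")
```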

Moreover, compared to purely optical computing, the optoelectronic computing using our AOC memristor showed higher practical feasibility, on account of the simple structure and fabrication process of the device.

The study may shed light on the in-depth research and practical application of optoelectronic NC, and thus promote the development of the new generation of AI.

This work was supported by the National Natural Science Foundation of China (No. 61674156 and 61874125), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32050204), and the Zhejiang Provincial Natural Science Foundation of China (No. LD19E020001).

Here’s a link to and a citation for the paper,

Hybrid oxide brain-inspired neuromorphic devices for hardware implementation of artificial intelligence by Jingrui Wang, Xia Zhuge & Fei Zhuge. Science and Technology of Advanced Materials Volume 22, 2021 – Issue 1, Pages 326-344 DOI: https://doi.org/10.1080/14686996.2021.1911277 Published online: 14 May 2021

This paper appears to be open access.

Netherlands

In this case, a May 18, 2021 news item on Nanowerk marries oxides to spintronics,

Classic computers use binary values (0/1) to perform. By contrast, our brain cells can use more values to operate, making them more energy-efficient than computers. This is why scientists are interested in neuromorphic (brain-like) computing.

Physicists from the University of Groningen (the Netherlands) have used a complex oxide to create elements comparable to the neurons and synapses in the brain using spins, a magnetic property of electrons.

The press release, which follows, was accompanied by this image illustrating the work,

Caption: Schematic of the proposed device structure for neuromorphic spintronic memristors. The write path is between the terminals through the top layer (black dotted line), the read path goes through the device stack (red dotted line). The right side of the figure indicates how the choice of substrate dictates whether the device will show deterministic or probabilistic behaviour. Credit: Banerjee group, University of Groningen

A May 18, 2021 University of Groningen press release (also on EurekAlert), which originated the news item, adds more ‘spin’ to the story,

Although computers can do straightforward calculations much faster than humans, our brains outperform silicon machines in tasks like object recognition. Furthermore, our brain uses less energy than computers. Part of this can be explained by the way our brain operates: whereas a computer uses a binary system (with values 0 or 1), brain cells can provide more analogue signals with a range of values.

Thin films

The operation of our brains can be simulated in computers, but the basic architecture still relies on a binary system. That is why scientists look for ways to expand this, creating hardware that is more brain-like, but will also interface with normal computers. ‘One idea is to create magnetic bits that can have intermediate states’, says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen. She works on spintronics, which uses a magnetic property of electrons called ‘spin’ to transport, manipulate and store information.

In this study, her PhD student Anouk Goossens, first author of the paper, created thin films of a ferromagnetic metal (strontium-ruthenate oxide, SRO) grown on a substrate of strontium titanate oxide. The resulting thin film contained magnetic domains that were perpendicular to the plane of the film. ‘These can be switched more efficiently than in-plane magnetic domains’, explains Goossens. By adapting the growth conditions, it is possible to control the crystal orientation in the SRO. Previously, out-of-plane magnetic domains have been made using other techniques, but these typically require complex layer structures.

Magnetic anisotropy

The magnetic domains can be switched using a current through a platinum electrode on top of the SRO. Goossens: ‘When the magnetic domains are oriented perfectly perpendicular to the film, this switching is deterministic: the entire domain will switch.’ However, when the magnetic domains are slightly tilted, the response is probabilistic: not all the domains are the same, and intermediate values occur when only part of the crystals in the domain have switched.

By choosing variants of the substrate on which the SRO is grown, the scientists can control its magnetic anisotropy. This allows them to produce two different spintronic devices. ‘This magnetic anisotropy is exactly what we wanted’, says Goossens. ‘Probabilistic switching compares to how neurons function, while the deterministic switching is more like a synapse.’

The scientists expect that in the future, brain-like computer hardware can be created by combining these different domains in a spintronic device that can be connected to standard silicon-based circuits. Furthermore, probabilistic switching would also allow for stochastic computing, a promising technology which represents continuous values by streams of random bits. Banerjee: ‘We have found a way to control intermediate states, not just for memory but also for computing.’
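The stochastic computing mentioned by Banerjee is worth unpacking: a value between 0 and 1 is encoded as the fraction of 1s in a random bitstream, and multiplication then reduces to a bitwise AND. Here is a generic sketch of that encoding, with a probabilistic ‘switch’ standing in for the tilted-domain devices; none of the numbers come from the paper,

```python
import random

N = 10_000   # bitstream length; longer streams give more precise estimates

def bitstream(p, n=N):
    """Encode the value p (0..1) as a stream of random bits, the way a
    probabilistically switching magnetic domain might."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(bits):
    return sum(bits) / len(bits)

a, b = bitstream(0.3), bitstream(0.6)
# In stochastic computing, multiplying two values is just a bitwise AND
product = [x & y for x, y in zip(a, b)]
print(f"0.3 * 0.6 ~= {decode(product):.3f}")   # ~0.18, up to sampling noise
```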

Here’s a link to and a citation for the paper,

Anisotropy and Current Control of Magnetization in SrRuO3/SrTiO3 Heterostructures for Spin-Memristors by A.S. Goossens, M.A.T. Leiviskä and T. Banerjee. Frontiers in Nanotechnology DOI: https://doi.org/10.3389/fnano.2021.680468 Published: 18 May 2021

This appears to be open access.

Neuromorphic computing with a memristor is capable of replicating bio-neural systems

There’s nothing especially new in this latest paper on neuromorphic computing and memristors; however, it does a very good job of describing how these new computers might work. From a Nov. 30, 2020 news item on phys.org (Note: A link has been removed),

In a paper published in Nano, researchers study the role of memristors in neuromorphic computing. This novel fundamental electronic component supports the cloning of bio-neural systems with low cost and power.

Contemporary computing systems are unable to deal with critical challenges of size reduction and computing speed in the big data era. The Von Neumann bottleneck is referred to as a hindrance in data transfer through the bus connecting processor and memory cell. This gives an opportunity to create alternative architectures based on a biological neuron model. Neuromorphic computing is one of such alternative architectures that mimic neuro-biological brain architectures.

A November ??, 2020 World Scientific (Publishing) press release (also on EurekAlert and published on Nov. 27, 2020), which originated the news item, continues with this fine explanation,

The humanoid neural brain system comprises approximately 100 billion neurons and numerous synapses of connectivity. An efficient circuit device is therefore essential for the construction of a neural network that mimics the human brain. The development of a basic electrical component, the memristor, with several distinctive features such as scalability, in-memory processing and CMOS compatibility, has significantly facilitated the implementation of neural network hardware.

The memristor was introduced as a “memory-like resistor” where the background of the applied inputs would alter the resistance status of the device. It is a capable electronic component that can memorise the current in order to effectively reduce the size of the device and increase processing speed in neural networks. Parallel calculations, as in the human nervous system, are made with the support of memristor devices in a novel computing architecture.

System instability and uncertainty have been described as current problems for most memory-based applications. This is the opposite of the biological process. Despite noise, nonlinearity, variability and volatility, biological systems work well. It is still unclear, however, that the effectiveness of biological systems actually depends on these obstacles. Neural modeling is sometimes avoided because it is not easy to model and study. The possibility of exploiting these properties is therefore, of course, a critical path to success in the achievement of artificial and biological systems.

Here’s a link to and a citation for the paper (Note: I usually include the link as part of the paper’s title but couldn’t do it this time),

Memristors: Understanding, Utilization and Upgradation for Neuromorphic Computing [https://www.worldscientific.com/doi/abs/10.1142/S1793292020300054] by Mohanbabu Bharathi, Zhiwei Wang, Bingrui Guo, Babu Balraj, Qiuhong Li, Jianwei Shuai and Donghui Guo. Nano Vol. 15, No. 11, 2030005 (2020) DOI: https://doi.org/10.1142/S1793292020300054 Published: 12 November 2020

This paper is open access.

Graphene-based memristors for neuromorphic computing

An Oct. 29, 2020 news item on ScienceDaily features an explanation of the reasons for investigating brainlike (neuromorphic) computing,

As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.

Modern computing is digital, made up of two states, on-off or one and zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to varying amounts of lighting.

Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State [Pennsylvania State University] assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.

“We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else,” Das said.

The shuttling of this data from memory to logic and back again takes a lot of energy and slows the speed of computing. In addition, this computer architecture requires a lot of space. If the computation and memory storage could be located in the same space, this bottleneck could be eliminated.

An Oct. 29, 2020 Penn State news release (also on EurekAlert), which originated the news item, describes what makes the research different,

“We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain,” explained Thomas Shranghamer, a doctoral student in the Das group and first author on a paper recently published in Nature Communications. “The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts.”

Like synapses connecting the neurons in the brain that can be reconfigured, the artificial neural networks the team is building can be reconfigured by applying a brief electric field to a sheet of graphene, the one-atomic-thick layer of carbon atoms. In this work they show at least 16 possible memory states, as opposed to the two in most oxide-based memristors, or memory resistors [emphasis mine].

“What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors [emphasis mine],” Das said.
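A device with 16 distinguishable states stores four bits per cell. The snippet below (an illustration of the idea, not code from the Penn State work) shows how an analog conductance window might be quantized into 16 programmable levels and read back; the conductance range is made up,

```python
# Illustrative only: quantize a conductance window into 16 levels (4 bits),
# the number of states reported for the graphene memristive synapses.
G_MIN, G_MAX = 1e-6, 16e-6        # conductance window in siemens (made up)
LEVELS = 16

def program(level):
    """Map a 4-bit level (0..15) to a target conductance."""
    assert 0 <= level < LEVELS
    return G_MIN + (G_MAX - G_MIN) * level / (LEVELS - 1)

def read(conductance):
    """Recover the stored level from a measured conductance."""
    frac = (conductance - G_MIN) / (G_MAX - G_MIN)
    return round(frac * (LEVELS - 1))

for lvl in (0, 7, 15):
    g = program(lvl)
    print(f"level {lvl:2d} -> {g*1e6:5.2f} uS -> read back {read(g)}")
```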

The team thinks that ramping up this technology to a commercial scale is feasible. With many of the largest semiconductor companies actively pursuing neuromorphic computing, Das believes they will find this work of interest.

Here’s a link to and a citation for the paper,

Graphene memristive synapses for high precision neuromorphic computing by Thomas F. Schranghamer, Aaryan Oberoi & Saptarshi Das. Nature Communications volume 11, Article number: 5474 (2020) DOI: https://doi.org/10.1038/s41467-020-19203-z Published: 29 October 2020

This paper is open access.

Brain cell-like nanodevices

Given R. Stanley Williams’s presence on the author list, it’s a bit surprising that there’s no mention of memristors. If I read the signs rightly, the interest is shifting, in some cases, from the memristor to a more comprehensive grouping of circuit elements referred to as ‘neuristors’ or, more likely, ‘nanocircuit elements’ in the effort to achieve brainlike (neuromorphic) computing (engineering). (Williams was the leader of the HP Labs team that offered proof and more of the memristor’s existence, which I mentioned here in an April 5, 2010 posting. There are many, many postings on this topic here; try ‘memristors’ or ‘brainlike computing’ for your search terms.)

A September 24, 2020 news item on ScienceDaily announces a recent development in the field of neuromorphic engineering,

In the September [2020] issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

A September 24, 2020 Texas A&M University news release (also on EurekAlert) by Vandana Suresh, which originated the news item, provides some context for the research,

Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. But when the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric currents, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”
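Because the insulator-to-conductor transition acts like a threshold, a single device that heats under bias and cools while conducting can oscillate and emit spikes. Here is a heavily simplified relaxation-oscillator cartoon of that behaviour, my own toy model rather than the third-order circuit in the paper; every constant is invented,

```python
# Toy relaxation oscillator standing in for a Mott (NbO2-like) spiking
# neuron: the device is an insulator until its "temperature" variable
# crosses a threshold, conducts (a spike), cools, and becomes an
# insulator again. All constants are invented for illustration.
V_IN = 1.0                 # drive voltage
R_SERIES = 1.0             # series resistance
HEAT_RATE = 0.08           # heating per step while insulating
COOL_RATE = 0.30           # cooling per step while conducting
T_HIGH, T_LOW = 1.0, 0.2   # switch-on / switch-off thresholds

temperature, conducting = 0.0, False
spikes = []
for t in range(200):
    if conducting:
        temperature -= COOL_RATE                    # device conducts and cools
        if temperature < T_LOW:
            conducting = False                      # back to the insulating state
    else:
        temperature += HEAT_RATE * V_IN / R_SERIES  # Joule heating under bias
        if temperature > T_HIGH:
            conducting = True                       # insulator-to-metal switch
            spikes.append(t)                        # record a spike

print(f"{len(spikes)} spikes, e.g. at steps {spikes[:5]}")
```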

To evaluate if their synthetic neurons [neuristor?] can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

If you look at the news release on EurekAlert, you’ll see this informative image is titled: NeuristerSchematic [sic],

Caption: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Texas A&M University College of Engineering

(On the university website, the image is credited to Rachel Barton.) You can see one of the first mentions of a ‘neuristor’ here in an August 24, 2017 posting.

Here’s a link to and a citation for the paper,

Third-order nanocircuit elements for neuromorphic engineering by Suhas Kumar, R. Stanley Williams & Ziwen Wang. Nature volume 585, pages 518–523 (2020) DOI: https://doi.org/10.1038/s41586-020-2735-5 Published: 23 September 2020 Issue Date: 24 September 2020

This paper is behind a paywall.

Energy-efficient artificial synapse

This is the second neuromorphic computing chip story from MIT this summer in what has turned out to be a bumper crop of research announcements in this field. The first MIT synapse story was featured in a June 16, 2020 posting. Now, there’s a second and completely different team announcing results for their artificial brain synapse work in a June 19, 2020 news item on Nanowerk (Note: A link has been removed),

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.

Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT [Massachusetts Institute of Technology] team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.

The findings are described in the journal Nature Communications (“Protonic solid-state electrochemical synapse for physical neural networks”), in a paper by MIT professors Bilge Yildiz, Ju Li, and Jesús del Alamo, and nine others at MIT and Brookhaven National Laboratory. The first author of the paper is Xiahui Yao, a former MIT postdoc now working on energy storage at GRU Energy Lab.

That description of the work is one pretty much every team working on developing memristive (neuromorphic) chips could use.

On other fronts, the team has produced a very attractive illustration accompanying this research (aside: Is it my imagination or has there been a serious investment in the colour pink and other pastels for science illustrations?),

A new system developed at MIT and Brookhaven National Lab could provide a faster, more reliable and much more energy efficient approach to physical neural networks, by using analog ionic-electronic devices to mimic synapses. Courtesy of the researchers

A June 19, 2020 MIT news release, which originated the news item, provides more insight into this specific piece of research (hint: it’s about energy use and repeatability),

Neural networks attempt to simulate the way learning takes place in the brain, which is based on the gradual strengthening or weakening of the connections between neurons, known as synapses. The core component of this physical neural network is the resistive switch, whose electronic conductance can be controlled electrically. This control, or modulation, emulates the strengthening and weakening of synapses in the brain.

In neural networks using conventional silicon microchip technology, the simulation of these synapses is a very energy-intensive process. To improve efficiency and enable more ambitious neural network goals, researchers in recent years have been exploring a number of physical devices that could more directly mimic the way synapses gradually strengthen and weaken during learning and forgetting.

Most candidate analog resistive devices so far for such simulated synapses have either been very inefficient, in terms of energy use, or performed inconsistently from one device to another or one cycle to the next. The new system, the researchers say, overcomes both of these challenges. “We’re addressing not only the energy challenge, but also the repeatability-related challenge that is pervasive in some of the existing concepts out there,” says Yildiz, who is a professor of nuclear science and engineering and of materials science and engineering.

“I think the bottleneck today for building [neural network] applications is energy efficiency. It just takes too much energy to train these systems, particularly for applications on the edge, like autonomous cars,” says del Alamo, who is the Donner Professor in the Department of Electrical Engineering and Computer Science. Many such demanding applications are simply not feasible with today’s technology, he adds.

The resistive switch in this work is an electrochemical device, which is made of tungsten trioxide (WO3) and works in a way similar to the charging and discharging of batteries. Ions, in this case protons, can migrate into or out of the crystalline lattice of the material, explains Yildiz, depending on the polarity and strength of an applied voltage. These changes remain in place until altered by a reverse applied voltage — just as the strengthening or weakening of synapses does.

“The mechanism is similar to the doping of semiconductors,” says Li, who is also a professor of nuclear science and engineering and of materials science and engineering. In that process, the conductivity of silicon can be changed by many orders of magnitude by introducing foreign ions into the silicon lattice. “Traditionally those ions were implanted at the factory,” he says, but with the new device, the ions are pumped in and out of the lattice in a dynamic, ongoing process. The researchers can control how much of the “dopant” ions go in or out by controlling the voltage, and “we’ve demonstrated a very good repeatability and energy efficiency,” he says.

Yildiz adds that this process is “very similar to how the synapses of the biological brain work. There, we’re not working with protons, but with other ions such as calcium, potassium, magnesium, etc., and by moving those ions you actually change the resistance of the synapses, and that is an element of learning.” The process taking place in the tungsten trioxide in their device is similar to the resistance modulation taking place in biological synapses, she says.
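To see why pulse-driven ion insertion behaves like a programmable synapse, consider this sketch of a conductance that is nudged up by positive voltage pulses, down by negative ones, and otherwise stays put. It is a generic electrochemical-synapse cartoon with made-up numbers, not the MIT device equations,

```python
# Generic electrochemical-synapse model: each voltage pulse pumps a fixed
# amount of "dopant" (here, protons) in or out, nudging the conductance;
# between pulses the state is retained. All numbers are made up.
G_MIN, G_MAX = 0.0, 1.0
STEP = 0.02                      # conductance change per pulse

def apply_pulse(g, polarity):
    """polarity=+1 inserts protons (potentiation), -1 removes them (depression)."""
    return min(max(g + polarity * STEP, G_MIN), G_MAX)

g = 0.5
for _ in range(10):              # ten potentiating pulses
    g = apply_pulse(g, +1)
print(f"after potentiation: g = {g:.2f}")   # 0.70
for _ in range(5):               # five depressing pulses
    g = apply_pulse(g, -1)
print(f"after depression:   g = {g:.2f}")   # 0.60
# Repeatability here is trivially perfect; in real devices the point of
# the MIT work is getting close to this ideal, symmetric response.
```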

“What we have demonstrated here,” Yildiz says, “even though it’s not an optimized device, gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain.” Trying to accomplish the same task with conventional CMOS type semiconductors would take a million times more energy, she says.

The materials used in the demonstration of the new device were chosen for their compatibility with present semiconductor manufacturing systems, according to Li. But they include a polymer material that limits the device’s tolerance for heat, so the team is still searching for other variations of the device’s proton-conducting membrane and better ways of encapsulating its hydrogen source for long-term operations.

“There’s a lot of fundamental research to be done at the materials level for this device,” Yildiz says. Ongoing research will include “work on how to integrate these devices with existing CMOS transistors” adds del Alamo. “All that takes time,” he says, “and it presents tremendous opportunities for innovation, great opportunities for our students to launch their careers.”

Coincidentally or not, a University of Massachusetts at Amherst team announced memristor voltage use comparable to human brain voltage use (see my June 15, 2020 posting); plus, there’s a team at Stanford University touting their low-energy biohybrid synapse in a XXX posting. (June 2020 has been a particularly busy month here for ‘artificial brain’ or ‘memristor’ stories.)

Getting back to this latest MIT research, here’s a link to and a citation for the paper,

Protonic solid-state electrochemical synapse for physical neural networks by Xiahui Yao, Konstantin Klyukin, Wenjie Lu, Murat Onen, Seungchan Ryu, Dongha Kim, Nicolas Emond, Iradwikanari Waluyo, Adrian Hunt, Jesús A. del Alamo, Ju Li & Bilge Yildiz. Nature Communications volume 11, Article number: 3134 (2020) DOI: https://doi.org/10.1038/s41467-020-16866-6 Published: 19 June 2020

This paper is open access.

Of sleep, electric sheep, and thousands of artificial synapses on a chip

A close-up view of a new neuromorphic “brain-on-a-chip” that includes tens of thousands of memristors, or memory transistors. Credit: Peng Lin Courtesy: MIT

It’s hard to believe that a brain-on-a-chip might need sleep but that seems to be the case as far as the US Dept. of Energy’s Los Alamos National Laboratory is concerned. Before pursuing that line of thought, here’s some work from the Massachusetts Institute of Technology (MIT) involving memristors and a brain-on-a-chip. From a June 8, 2020 news item on ScienceDaily,

MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.

The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.

Their results, published today in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.

This ‘metallurgical’ approach differs somewhat from the protein nanowire approach used by the University of Massachusetts at Amherst team mentioned in my June 15, 2020 posting. Scientists are pursuing multiple pathways, and we may find that we arrive not with a single artificial brain but with many types of artificial brains.

A June 8, 2020 MIT news release (also on EurekAlert) provides more detail about this brain-on-a-chip,

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”

Wandering ions

Memristors, or memory transistors [Note: Memristors are usually described as memory resistors; this is the first time I’ve seen ‘memory transistor’], are an essential element in neuromorphic computing. In a neuromorphic device, a memristor would serve as the transistor in a circuit, though its workings would more closely resemble a brain synapse — the junction between two neurons. The synapse receives signals from one neuron, in the form of ions, and sends a corresponding signal to the next neuron.

A transistor in a conventional circuit transmits information by switching between one of only two values, 0 and 1, and doing so only when the signal it receives, in the form of an electric current, is of a particular strength. In contrast, a memristor would work along a gradient, much like a synapse in the brain. The signal it produces would vary depending on the strength of the signal that it receives. This would enable a single memristor to have many values, and therefore carry out a far wider range of operations than binary transistors.

Like a brain synapse, a memristor would also be able to “remember” the value associated with a given current strength, and produce the exact same signal the next time it receives a similar current. This could ensure that the answer to a complex equation, or the visual classification of an object, is reliable — a feat that normally involves multiple transistors and capacitors.

Ultimately, scientists envision that memristors would require far less chip real estate than conventional transistors, enabling powerful, portable computing devices that do not rely on supercomputers, or even connections to the Internet.

Existing memristor designs, however, are limited in their performance. A single memristor is made of a positive and negative electrode, separated by a “switching medium,” or space between the electrodes. When a voltage is applied to one electrode, ions from that electrode flow through the medium, forming a “conduction channel” to the other electrode. The received ions make up the electrical signal that the memristor transmits through the circuit. The size of the ion channel (and the signal that the memristor ultimately produces) should be proportional to the strength of the stimulating voltage.

Kim says that existing memristor designs work pretty well in cases where voltage stimulates a large conduction channel, or a heavy flow of ions from one electrode to the other. But these designs are less reliable when memristors need to generate subtler signals, via thinner conduction channels.

The thinner a conduction channel, and the lighter the flow of ions from one electrode to the other, the harder it is for individual ions to stay together. Instead, they tend to wander from the group, disbanding within the medium. As a result, it’s difficult for the receiving electrode to reliably capture the same number of ions, and therefore transmit the same signal, when stimulated with a certain low range of current.

Borrowing from metallurgy

Kim and his colleagues found a way around this limitation by borrowing a technique from metallurgy, the science of melding metals into alloys and studying their combined properties.

“Traditionally, metallurgists try to add different atoms into a bulk matrix to strengthen materials, and we thought, why not tweak the atomic interactions in our memristor, and add some alloying element to control the movement of ions in our medium,” Kim says.

Engineers typically use silver as the material for a memristor’s positive electrode. Kim’s team looked through the literature to find an element that they could combine with silver to effectively hold silver ions together, while allowing them to flow quickly through to the other electrode.

The team landed on copper as the ideal alloying element, as it is able to bind both with silver, and with silicon.

“It acts as a sort of bridge, and stabilizes the silver-silicon interface,” Kim says.

To make memristors using their new alloy, the group first fabricated a negative electrode out of silicon, then made a positive electrode by depositing a slight amount of copper, followed by a layer of silver. They sandwiched the two electrodes around an amorphous silicon medium. In this way, they patterned a millimeter-square silicon chip with tens of thousands of memristors.

As a first test of the chip, they recreated a gray-scale image of the Captain America shield. They equated each pixel in the image to a corresponding memristor in the chip. They then modulated the conductance of each memristor that was relative in strength to the color in the corresponding pixel.

The chip produced the same crisp image of the shield, and was able to “remember” the image and reproduce it many times, compared with chips made of other materials.

The team also ran the chip through an image processing task, programming the memristors to alter an image, in this case of MIT’s Killian Court, in several specific ways, including sharpening and blurring the original image. Again, their design produced the reprogrammed images more reliably than existing memristor designs.
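The Captain America demonstration boils down to mapping each pixel’s grey level onto a memristor conductance and then reading it back. A minimal sketch of that mapping (with a random ‘image’ standing in for the actual test pictures and an invented noise term standing in for device variability) looks like this,

```python
import random

# Store a grey-scale "image" in an array of analog memristors by setting
# each device's conductance in proportion to its pixel value, then read
# it back. The image and the per-device noise are invented placeholders.
WIDTH, HEIGHT = 8, 8
G_MAX = 1e-5                  # full-scale conductance (siemens), illustrative
NOISE = 0.02                  # relative programming error per device

image = [[random.randint(0, 255) for _ in range(WIDTH)] for _ in range(HEIGHT)]

# "Program" the crossbar: pixel value -> conductance (with a little error)
crossbar = [[(p / 255) * G_MAX * (1 + random.uniform(-NOISE, NOISE))
             for p in row] for row in image]

# "Read" it back: conductance -> pixel value
recovered = [[round(min(max(g / G_MAX, 0.0), 1.0) * 255) for g in row]
             for row in crossbar]

errors = [abs(a - b) for ro, rr in zip(image, recovered) for a, b in zip(ro, rr)]
print(f"mean absolute pixel error: {sum(errors)/len(errors):.1f} grey levels")
```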

“We’re using artificial synapses to do real inference tests,” Kim says. “We would like to develop this technology further to have larger-scale arrays to do image recognition tasks. And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”

Here’s a link to and a citation for the paper,

Alloying conducting channels for reliable neuromorphic computing by Hanwool Yeon, Peng Lin, Chanyeol Choi, Scott H. Tan, Yongmo Park, Doyoon Lee, Jaeyong Lee, Feng Xu, Bin Gao, Huaqiang Wu, He Qian, Yifan Nie, Seyoung Kim & Jeehwan Kim. Nature Nanotechnology (2020) DOI: https://doi.org/10.1038/s41565-020-0694-5 Published: 08 June 2020

This paper is behind a paywall.

Electric sheep and sleeping androids

I find it impossible to mention that androids might need sleep without reference to Philip K. Dick’s 1968 novel, “Do Androids Dream of Electric Sheep?”; its Wikipedia entry is here.

June 8, 2020: Intelligent machines of the future may need to sleep as much as we do. Courtesy: Los Alamos National Laboratory

As it happens, I’m not the only one who felt the need to reference the novel, from a June 8, 2020 news item on ScienceDaily,

No one can say whether androids will dream of electric sheep, but they will almost certainly need periods of rest that offer benefits similar to those that sleep provides to living brains, according to new research from Los Alamos National Laboratory.

“We study spiking neural networks, which are systems that learn much as living brains do,” said Los Alamos National Laboratory computer scientist Yijing Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

A June 8, 2020 Los Alamos National Laboratory (LANL) news release (also on EurekAlert), which originated the news item, describes the research team’s presentation,

The discovery came about as the research team worked to develop neural networks that closely approximate how humans and other biological systems learn to see. The group initially struggled with stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without having prior examples to compare them to.

“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Los Alamos computer scientist and study coauthor Garrett Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”

The researchers characterize the decision to expose the networks to an artificial analog of sleep as nearly a last-ditch effort to stabilize them. They experimented with various types of noise, roughly comparable to the static you might encounter between stations while tuning a radio. The best results came when they used waves of so-called Gaussian noise, which includes a wide range of frequencies and amplitudes. They hypothesize that the noise mimics the input received by biological neurons during slow-wave sleep. The results suggest that slow-wave sleep may act, in part, to ensure that cortical neurons maintain their stability and do not hallucinate.
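
For anyone wanting a feel for how “noise as sleep” might look in practice, here is a deliberately simplified Python sketch. It is not the Los Alamos code: the real study injected sinusoidally modulated Gaussian noise into a spiking sparse-coding network, whereas this toy alternates Hebbian “wake” learning with a noise-driven, anti-Hebbian “sleep” phase that keeps the weights from growing without bound,

```python
# Toy sketch, not the LANL implementation: alternate unsupervised "wake"
# learning with Gaussian-noise "sleep" phases that restore stability.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_inputs = 64, 256
W = rng.normal(0.0, 0.1, (n_features, n_inputs))   # dictionary of features

def wake_step(W, x, lr=1e-3):
    """Hebbian update on a data sample; alone, this drifts toward instability."""
    activity = W @ x
    return W + lr * np.outer(activity, x)

def sleep_phase(W, passes=50, decay=1e-3):
    """'Slow-wave sleep': drive the network with noise and downscale weights."""
    for _ in range(passes):
        noise = rng.normal(0.0, 1.0, n_inputs)
        activity = W @ noise
        W = W - decay * np.outer(activity, noise)   # anti-Hebbian nudge on noise
    return W

for step in range(1, 1001):
    x = rng.normal(0.0, 1.0, n_inputs)              # stand-in for image patches
    W = wake_step(W, x)
    if step % 100 == 0:                             # periodic rest
        W = sleep_phase(W)

print("mean feature norm after training:", np.linalg.norm(W, axis=1).mean().round(3))
```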

The group’s next goal is to implement their algorithm on Intel’s Loihi neuromorphic chip. They hope allowing Loihi to sleep from time to time will enable it to stably process information from a silicon retina camera in real time. If the findings confirm the need for sleep in artificial brains, we can probably expect the same to be true of androids and other intelligent machines that may come about in the future.

Watkins will be presenting the research at the Women in Computer Vision Workshop on June 14 [2020] in Seattle.

The 2020 Women in Computer Vision Workshop (WICV) website is here. As is becoming standard practice for these times, the workshop was held in a virtual environment. Here’s a link to and a citation for the poster presentation paper,

Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model by Yijing Watkins, Edward Kim, Andrew Sornborger and Garrett T. Kenyon. Women in Computer Vision Workshop, June 14, 2020, Seattle, Washington (state)

This paper is open access for now.

Neuromorphic computing with voltage usage comparable to human brains

Part of neuromorphic computing’s appeal is the promise of using less energy because, as it turns out, the human brain uses small amounts of energy very efficiently. A team of researchers at the University of Massachusetts Amherst has developed a memristive device that functions in the same range of voltages as the human brain. From an April 20, 2020 news item on ScienceDaily,

Only 10 years ago, scientists working on what they hoped would open a new frontier of neuromorphic computing could only dream of a device that used miniature tools called memristors to function like real brain synapses.

But now a team at the University of Massachusetts Amherst has discovered, while on their way to better understanding protein nanowires, how to use these biological, electricity-conducting filaments to make a neuromorphic memristor, or “memory resistor,” device. It runs extremely efficiently on very low power, as brains do, to carry signals between neurons. Details are in Nature Communications.

An April 20, 2020 University of Massachusetts at Amherst news release (also on EurekAlert), which originated the news item, dives into detail about how these researchers were able to achieve bio-voltages,

As first author Tianda Fu, a Ph.D. candidate in electrical and computer engineering, explains, one of the biggest hurdles to neuromorphic computing, and one that made it seem unreachable, is that most conventional computers operate at over 1 volt, while the brain sends signals called action potentials between neurons at around 80 millivolts – many times lower. Today, a decade after early experiments, memristor voltages have been brought down to a range similar to that of conventional computers, but getting below that seemed improbable, he adds.
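
A quick back-of-envelope calculation suggests why that voltage gap matters so much: if switching energy scales roughly with the square of the operating voltage (a common first-order assumption, not a figure from the paper), moving from about 1 volt to about 80 millivolts is worth a couple of orders of magnitude,

```python
# Back-of-envelope only: assume switching energy scales roughly with V**2
# (e.g. charging a fixed capacitance, or dissipation across a fixed resistance).
V_conventional = 1.0    # volts, typical logic-level operation
V_brain = 0.08          # volts, ~80 mV action-potential scale
energy_ratio = (V_conventional / V_brain) ** 2
print(f"~{energy_ratio:.0f}x lower switching energy at brain-like voltage")  # ~156x
```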

Fu reports that using protein nanowires developed at UMass Amherst from the bacterium Geobacter by microbiologist and co-author Derek Lovley, he has now conducted experiments where memristors have reached neurological voltages. Those tests were carried out in the lab of electrical and computer engineering researcher and co-author Jun Yao.

Yao says, “This is the first time that a device can function at the same voltage level as the brain. People probably didn’t even dare to hope that we could create a device that is as power-efficient as the biological counterparts in a brain, but now we have realistic evidence of ultra-low power computing capabilities. It’s a concept breakthrough and we think it’s going to cause a lot of exploration in electronics that work in the biological voltage regime.”

Lovley points out that Geobacter’s electrically conductive protein nanowires offer many advantages over expensive silicon nanowires, which require toxic chemicals and high-energy processes to produce. Protein nanowires also are more stable in water or bodily fluids, an important feature for biomedical applications. For this work, the researchers shear nanowires off the bacteria so only the conductive protein is used, he adds.

Fu says that he and Yao had set out to put the purified nanowires through their paces, to see what they are capable of at different voltages, for example. They experimented with a pulsing on-off pattern of positive-negative charge sent through a tiny metal thread in a memristor, which creates an electrical switch.

They used a metal thread because protein nanowires facilitate metal reduction, changing metal ion reactivity and electron transfer properties. Lovley says this microbial ability is not surprising, because wild bacterial nanowires breathe and chemically reduce metals to get their energy the way we breathe oxygen.

As the on-off pulses create changes in the metal filaments, new branching and connections are created in the tiny device, which is 100 times smaller than the diameter of a human hair, Yao explains. It creates an effect similar to learning – new connections – in a real brain. He adds, “You can modulate the conductivity, or the plasticity of the nanowire-memristor synapse so it can emulate biological components for brain-inspired computing. Compared to a conventional computer, this device has a learning capability that is not software-based.”
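
Here is a tiny illustrative model of the behaviour Yao describes, in which positive pulses nudge a synaptic conductance up and negative pulses nudge it down between fixed device limits. The bounds and step size are assumptions chosen only to show the qualitative effect, not the UMass Amherst device parameters,

```python
# Illustrative model, not the UMass device physics: a memristive 'synapse'
# potentiated by positive pulses and depressed by negative pulses, saturating
# at assumed device limits.
G_MIN, G_MAX = 1e-9, 1e-6     # siemens, assumed limits

def apply_pulse(g, polarity, step=5e-8):
    """Return updated conductance after one voltage pulse (polarity +1 or -1)."""
    g = g + polarity * step
    return min(max(g, G_MIN), G_MAX)

g = G_MIN
for _ in range(10):            # a burst of positive pulses strengthens the synapse
    g = apply_pulse(g, +1)
for _ in range(4):             # negative pulses weaken it again
    g = apply_pulse(g, -1)
print(f"conductance after pulse train: {g:.2e} S")
```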

Fu recalls, “In the first experiments we did, the nanowire performance was not satisfying, but it was enough for us to keep going.” Over two years, he saw improvement until one fateful day when his and Yao’s eyes were riveted by voltage measurements appearing on a computer screen.

“I remember the day we saw this great performance. We watched the computer as the current-voltage sweep was being measured. It kept going down and down and we said to each other, ‘Wow, it’s working.’ It was very surprising and very encouraging.”

Fu, Yao, Lovley and colleagues plan to follow up this discovery with more research on mechanisms, and to “fully explore the chemistry, biology and electronics” of protein nanowires in memristors, Fu says, plus possible applications, which might include a device to monitor heart rate, for example. Yao adds, “This offers hope in the feasibility that one day this device can talk to actual neurons in biological systems.”

That last comment has me wondering about why you would want to have your device talk to actual neurons. For neuroprosthetics perhaps?

Here’s a link to and a citation for the paper,

Bioinspired bio-voltage memristors by Tianda Fu, Xiaomeng Liu, Hongyan Gao, Joy E. Ward, Xiaorong Liu, Bing Yin, Zhongrui Wang, Ye Zhuo, David J. F. Walker, J. Joshua Yang, Jianhan Chen, Derek R. Lovley & Jun Yao. Nature Communications volume 11, Article number: 1861 (2020) DOI: https://doi.org/10.1038/s41467-020-15759-y Published: 20 April 2020

This paper is open access.

There is an illustration of the work,

Caption: A graphic depiction of protein nanowires (green) harvested from microbe Geobacter (orange) facilitate the electronic memristor device (silver) to function with biological voltages, emulating the neuronal components (blue junctions) in a brain. Credit: UMass Amherst/Yao lab

Brain-inspired electronics with organic memristors for wearable computing

I went down a rabbit hole while trying to figure out the difference between ‘organic’ memristors and standard memristors. I have put the results of my investigation at the end of this post. First, there’s the news.

An April 21, 2020 news item on ScienceDaily explains why researchers are so focused on memristors and brainlike computing,

The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics and bring forth the fourth Industrial Revolution. The pressing question for many researchers is how to handle this technological revolution.

“It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets,” said Thirumalai Venkatesan, one of the authors of a paper published in Applied Physics Reviews, from AIP Publishing.

“Today’s computing is way too energy-intensive to handle big data. We need to rethink our approaches to computation on all levels: materials, devices and architecture that can enable ultralow energy computing.”

An April 21, 2020 American Institute of Physics (AIP) news release (also on EurekAlert), which originated the news item, describes the authors’ approach to the problems with organic memristors,

Brain-inspired electronics with organic memristors could offer a functionally promising and cost-effective platform, according to Venkatesan. Memristive devices are electronic devices with an inherent memory that are capable of both storing data and performing computation. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.

Until now, oxides have been the leading candidate as the optimum material for memristors. Different material systems have been proposed but none have been successful so far.

“Over the last 20 years, there have been several attempts to come up with organic memristors, but none of those have shown any promise,” said Sreetosh Goswami, lead author on the paper. “The primary reason behind this failure is their lack of stability, reproducibility and ambiguity in mechanistic understanding. At a device level, we are now able to solve most of these problems.”

This new generation of organic memristors is developed based on metal azo complex devices, which are the brainchild of Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science in Kolkata and another author on the paper.

“In thin films, the molecules are so robust and stable that these devices can eventually be the right choice for many wearable and implantable technologies or a body net, because these could be bendable and stretchable,” said Sreebrata Goswami. A body net is a series of wireless sensors that stick to the skin and track health.

The next challenge will be to produce these organic memristors at scale, said Venkatesan.

“Now we are making individual devices in the laboratory. We need to make circuits for large-scale functional implementation of these devices.”

Caption: The device structure at a molecular level. The gold nanoparticles on the bottom electrode enhance the field, enabling an ultra-low energy operation of the molecular device. Credit: Sreetosh Goswami, Sreebrata Goswami and Thirumalai Venky Venkatesan

Here’s a link to and a citation for the paper,

An organic approach to low energy memory and brain inspired electronics by Sreetosh Goswami, Sreebrata Goswami, and T. Venkatesan. Applied Physics Reviews 7, 021303 (2020) DOI: https://doi.org/10.1063/1.5124155

This paper is open access.

Basics about memristors and organic memristors

This undated article on Nanowerk provides a relatively complete and technical description of memristors in general (Note: A link has been removed),

A memristor (named as a portmanteau of memory and resistor) is a non-volatile electronic memory device that was first theorized by Leon Ong Chua in 1971 as the fourth fundamental two-terminal circuit element following the resistor, the capacitor, and the inductor (IEEE Transactions on Circuit Theory, “Memristor-The missing circuit element”).

Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function). Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if the device loses power.

However, it was only almost 40 years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behavior. …

The article on Nanowerk includes an embedded video presentation on memristors given by Stanley Williams (also known as R. Stanley Williams).
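
For completeness, Chua’s definition can be stated compactly: a memristor ties the charge that has flowed through it to the flux linkage across it, so its effective resistance depends on its history. In the notation usually used for his 1971 proposal,

```latex
% Chua's memristor as the fourth circuit element: it relates charge q(t)
% and flux linkage \varphi(t), so memristance M(q) is a history-dependent resistance.
\mathrm{d}\varphi = M(q)\,\mathrm{d}q
\quad\Longrightarrow\quad
v(t) = M\!\big(q(t)\big)\, i(t),
\qquad q(t) = \int_{-\infty}^{t} i(\tau)\,\mathrm{d}\tau
```

In other words, the device behaves like a resistor whose value depends on how much charge has already passed through it, which is exactly the programmable-and-remembered resistance described above.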

Mention of an ‘organic’ memristor can be found in an October 31, 2017 article by Ryan Whitwam,

The memristor is composed of the transition metal ruthenium complexed with “azo-aromatic ligands.” [emphasis mine] The theoretical work enabling this material was performed at Yale, and the organic molecules were synthesized at the Indian Association for the Cultivation of Sciences. …

I highlighted ‘ligands’ because that appears to be the difference. However, there is more than one type of ligand entry on Wikipedia.

First, there’s the Ligand (biochemistry) entry (Note: Links have been removed),

In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. …

Then, there’s the Ligand entry,

In coordination chemistry, a ligand[help 1] is an ion or molecule (functional group) that binds to a central metal atom to form a coordination complex …

Finally, there’s the Ligand (disambiguation) entry (Note: Links have been removed),

  • Ligand, an atom, ion, or functional group that donates one or more of its electrons through a coordinate covalent bond to one or more central atoms or ions
  • Ligand (biochemistry), a substance that binds to a protein
  • a ‘guest’ in host–guest chemistry

I did take a look at the paper and did not see any references to proteins or other biomolecules that I could recognize as such. I’m not sure why the researchers are describing their device as an ‘organic’ memristor but this may reflect a shortcoming in the definitions I have found or shortcomings in my reading of the paper rather than an error on their parts.

Hopefully, more research will be forthcoming and it will be possible to better understand the terminology.

New design directions to increase variety, efficiency, selectivity and reliability for memristive devices

A May 11, 2020 news item on ScienceDaily provides a description of the current ‘memristor scene’ along with an announcement about a piece of recent research,

Scientists around the world are intensively working on memristive devices, which are capable of extremely low-power operation and behave similarly to neurons in the brain. Researchers from the Jülich Aachen Research Alliance (JARA) and the German technology group Heraeus have now discovered how to systematically control the functional behaviour of these elements. The smallest differences in material composition turn out to be crucial: differences so small that until now experts had failed to notice them. The researchers’ design directions could help to increase variety, efficiency, selectivity and reliability for memristive technology-based applications, for example for energy-efficient, non-volatile storage devices or neuro-inspired computers.

Memristors are considered a highly promising alternative to conventional nanoelectronic elements in computer Chips [sic]. Because of their advantageous functionalities, their development is being eagerly pursued by many companies and research institutions around the world. The Japanese corporation NEC already installed the first prototypes in space satellites back in 2017. Many other leading companies such as Hewlett Packard, Intel, IBM, and Samsung are working to bring innovative types of computer and storage devices based on memristive elements to market.

Fundamentally, memristors are simply “resistors with memory,” in which high resistance can be switched to low resistance and back again. This means in principle that the devices are adaptive, similar to a synapse in a biological nervous system. “Memristive elements are considered ideal candidates for neuro-inspired computers modelled on the brain, which are attracting a great deal of interest in connection with deep learning and artificial intelligence,” says Dr. Ilia Valov of the Peter Grünberg Institute (PGI-7) at Forschungszentrum Jülich.

In the latest issue of the open access journal Science Advances, he and his team describe how the switching and neuromorphic behaviour of memristive elements can be selectively controlled. According to their findings, the crucial factor is the purity of the switching oxide layer. “Depending on whether you use a material that is 99.999999 % pure, and whether you introduce one foreign atom into ten million atoms of pure material or into one hundred atoms, the properties of the memristive elements vary substantially,” says Valov.

A May 11, 2020 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, delves into the theme of increasing control over memristive systems,

This effect had so far been overlooked by experts. It can be used very specifically for designing memristive systems, in a similar way to doping semiconductors in information technology. “The introduction of foreign atoms allows us to control the solubility and transport properties of the thin oxide layers,” explains Dr. Christian Neumann of the technology group Heraeus. He has been contributing his materials expertise to the project ever since the initial idea was conceived in 2015.

“In recent years there has been remarkable progress in the development and use of memristive devices, however that progress has often been achieved on a purely empirical basis,” according to Valov. Using the insights that his team has gained, manufacturers could now methodically develop memristive elements, selecting the functions they need. The higher the doping concentration, the more slowly the resistance of the elements changes as the number of incoming voltage pulses increases or decreases, and the more stable the resistance remains. “This means that we have found a way for designing types of artificial synapses with differing excitability,” explains Valov.
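
To make that design rule concrete, here is a hedged Python sketch in which doping concentration acts as a knob that slows the per-pulse conductance change and damps pulse-to-pulse scatter. The functional form and the numbers are my own illustrative assumptions, not the JARA/Heraeus model,

```python
# Illustrative sketch of the stated design rule: higher doping -> slower,
# steadier conductance change under identical SET pulses.
import numpy as np

rng = np.random.default_rng(1)

def pulse_response(doping_ppm, n_pulses=50, g0=0.1):
    """Normalized conductance trace under a train of identical SET pulses."""
    step = 0.05 / (1.0 + doping_ppm / 1000.0)     # higher doping -> slower change
    jitter = 0.02 / (1.0 + doping_ppm / 1000.0)   # higher doping -> less scatter
    g, trace = g0, []
    for _ in range(n_pulses):
        g = min(g + step + rng.normal(0.0, jitter), 1.0)
        trace.append(g)
    return np.array(trace)

for ppm in (0, 100, 10_000):
    trace = pulse_response(ppm)
    hits = np.flatnonzero(trace >= 0.9)
    label = f"{hits[0] + 1} pulses" if hits.size else "not within 50 pulses"
    print(f"{ppm:>6} ppm to reach g = 0.9: {label}")
```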

Design specification for artificial synapses

The brain’s ability to learn and retain information can largely be attributed to the fact that the connections between neurons are strengthened when they are frequently used. Memristive devices, of which there are different types such as electrochemical metallization cells (ECMs) or valence change memory cells (VCMs), behave similarly. When these components are used, the conductivity increases as the number of incoming voltage pulses increases. The changes can also be reversed by applying voltage pulses of the opposite polarity.

The JARA researchers conducted their systematic experiments on ECMs, which consist of a copper electrode, a platinum electrode, and a layer of silicon dioxide between them. Thanks to the cooperation with Heraeus researchers, the JARA scientists had access to different types of silicon dioxide: one with a purity of 99.999999 % – also called 8N silicon dioxide – and others containing 100 to 10,000 ppm (parts per million) of foreign atoms. The precisely doped glass used in their experiments was specially developed and manufactured by quartz glass specialist Heraeus Conamic, which also holds the patent for the procedure. Copper and protons acted as mobile doping agents, while aluminium and gallium were used as non-volatile dopants.

Caption: Synapses, the connections between neurons, have the ability to transmit signals with varying degrees of strength when they are excited by a quick succession of electrical impulses. One effect of this repeated activity is to increase the concentration of calcium ions, with the result that more neurotransmitters are emitted. Depending on the activity, other effects cause long-term structural changes, which impact the strength of the transmission for several hours, or potentially even for the rest of the person’s life. Memristive elements allow the strength of the electrical transmission to be changed in a similar way to synaptic connections, by applying a voltage. In electrochemical metallization cells (ECMs), a metallic filament develops between the two metal electrodes, thus increasing conductivity. Applying voltage pulses with reversed polarity causes the filament to shrink again until the cell reaches its initial high-resistance state. Copyright: Forschungszentrum Jülich / Tobias Schlößer

Record switching time confirms theory

Based on their series of experiments, the researchers were able to show that the ECMs’ switching times change as the amount of doping atoms changes. If the switching layer is made of 8N silicon dioxide, the memristive component switches in only 1.4 nanoseconds. To date, the fastest value ever measured for ECMs had been around 10 nanoseconds. By doping the oxide layer of the components with up to 10,000 ppm of foreign atoms, the switching time was prolonged into the range of milliseconds. “We can also theoretically explain our results. This is helping us to understand the physico-chemical processes on the nanoscale and apply this knowledge in practice,” says Valov. Based on generally applicable theoretical considerations, supported by experimental results, some also documented in the literature, he is convinced that the doping/impurity effect occurs and can be employed in all types of memristive elements.

Caption: Top: In memristive elements (ECMs) with an undoped, high-purity switching layer of silicon oxide (SiO2), copper ions can move very fast. A filament of copper atoms forms correspondingly fast on the platinum electrode. This increases the total device conductivity and, correspondingly, its capacity. Due to the high mobility of the ions, however, this filament is unstable at low forming voltages. Center: Gallium ions (Ga3+), which are introduced into the cell (non-volatile doping), bind copper ions (Cu2+) in the switching layer. The movement of the ions slows down, leading to longer switching times, but the filament, once formed, remains stable for longer. Bottom: Doping with aluminium ions (Al3+) slows down the process even more, since aluminium ions bind copper ions even more strongly than gallium ions do. Filament growth is even slower, while at the same time the stability of the filament is further increased. Depending on the chemical properties of the introduced doping elements, memristive cells – the artificial synapses – can be created with tailor-made switching and neuromorphic properties. Copyright: Forschungszentrum Jülich / Tobias Schloesser

Here’s a link to and a citation for the paper,

Design of defect-chemical properties and device performance in memristive systems by M. Lübben, F. Cüppers, J. Mohr, M. von Witzleben, U. Breuer, R. Waser, C. Neumann, and I. Valov. Science Advances 08 May 2020: Vol. 6, no. 19, eaaz9079 DOI: 10.1126/sciadv.aaz9079

This paper is open access.

For anyone curious about the German technology group, Heraeus, there’s a fascinating history in its Wikipedia entry. The technology company was formally founded in 1851 but it can be traced back to the 17th century and the founding family’s apothecary.