Tag Archives: brainlike computing

Neuromorphic computing with memristors is capable of replicating bio-neural systems

There’s nothing especially new in this latest paper on neuromorphic computing and memristors; however, it does a very good job of describing how these new computers might work. From a Nov. 30, 2020 news item on phys.org (Note: A link has been removed),

In a paper published in Nano, researchers study the role of memristors in neuromorphic computing. This novel fundamental electronic component supports the cloning of bio-neural systems with low cost and power.

Contemporary computing systems are unable to deal with critical challenges of size reduction and computing speed in the big data era. The von Neumann bottleneck refers to the hindrance in data transfer through the bus connecting the processor and the memory cell. This creates an opportunity for alternative architectures based on a biological neuron model. Neuromorphic computing is one such alternative architecture, mimicking neuro-biological brain architectures.

A November ??, 2020 World Scientific (Publishing) press release (also on EurekAlert and published on Nov. 27, 2020), which originated the news item, continues with this fine explanation,

The human brain comprises approximately 100 billion neurons and numerous synaptic connections. An efficient circuit device is therefore essential for the construction of a neural network that mimics the human brain. The development of a basic electrical component, the memristor, with several distinctive features such as scalability, in-memory processing and CMOS compatibility, has significantly facilitated the implementation of neural network hardware.

The memristor was introduced as a “memory-like resistor” whose resistance state is altered by the history of the applied inputs. It is a capable electronic component that can memorise past current in order to effectively reduce the size of the device and increase processing speed in neural networks. Parallel calculations, as in the human nervous system, are made with the support of memristor devices in a novel computing architecture.
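That “memory-like resistor” definition can be made concrete with a few lines of code. Here’s my own toy sketch of a charge-controlled memristor in the spirit of the HP linear-drift model (the constants are illustrative, not taken from the paper): resistance depends on a state variable that drifts with the current that has flowed through the device, so the device “remembers” its input history.

```python
# Toy charge-controlled memristor (in the spirit of the HP linear-drift model).
# Resistance depends on the history of applied inputs via accumulated charge.
# All constants are illustrative, not fitted to any real device.

R_ON, R_OFF = 100.0, 16e3   # fully-doped / undoped resistances (ohms)
K = 1e8                     # drift coefficient (lumps mobility and geometry)

def simulate(voltages, dt=1e-6):
    """Apply a voltage sequence; return the resistance seen at each step."""
    w = 0.1                 # normalized doped-region width, 0..1 (device state)
    history = []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)        # series combination
        history.append(r)
        i = v / r                               # Ohm's law
        w = min(1.0, max(0.0, w + K * i * dt))  # state drifts with charge flow
    return history

rs = simulate([1.0] * 200)    # a sustained positive bias...
assert rs[-1] < rs[0]         # ...lowers the resistance (the device "remembers")
rs2 = simulate([-1.0] * 200)
assert rs2[-1] > rs2[0]       # reverse bias pushes it back toward R_OFF
```

The key point the press release is making lives in that `w` update: the resistance you read out depends on everything you applied before, which is exactly the in-memory processing property the rest of the article builds on.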

System instability and uncertainty have been described as current problems for most memory-based applications. This is the opposite of the biological process: despite noise, nonlinearity, variability and volatility, biological systems work well. It is still unclear, however, whether the effectiveness of biological systems actually depends on these obstacles, and such behaviour is sometimes avoided in neural modeling because it is not easy to model and study. The possibility of exploiting these properties is therefore a critical path to success in uniting artificial and biological systems.

Here’s a link to and a citation for the paper (Note: I usually include the link as part of the paper’s title but couldn’t do it this time),

Memristors: Understanding, Utilization and Upgradation for Neuromorphic Computing [https://www.worldscientific.com/doi/abs/10.1142/S1793292020300054] by Mohanbabu Bharathi, Zhiwei Wang, Bingrui Guo, Babu Balraj, Qiuhong Li, Jianwei Shuai and Donghui Guo. Nano Vol. 15, No. 11, 2030005 (2020) DOI: https://doi.org/10.1142/S1793292020300054 Published: 12 November 2020

This paper is open access.

Graphene-based memristors for neuromorphic computing

An Oct. 29, 2020 news item on ScienceDaily features an explanation of the reasons for investigating brainlike (neuromorphic) computing,

As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.

Modern computing is digital, made up of two states, on-off or one and zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to varying amounts of lighting.

Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State [Pennsylvania State University] assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.

“We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else,” Das said.

The shuttling of this data from memory to logic and back again takes a lot of energy and slows the speed of computing. In addition, this computer architecture requires a lot of space. If the computation and memory storage could be located in the same space, this bottleneck could be eliminated.

An Oct. 29, 2020 Penn State news release (also on EurekAlert), which originated the news item, describes what makes the research different,

“We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain,” explained Thomas Schranghamer, a doctoral student in the Das group and first author on a paper recently published in Nature Communications. “The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts.”

Like synapses connecting the neurons in the brain that can be reconfigured, the artificial neural networks the team is building can be reconfigured by applying a brief electric field to a sheet of graphene, the one-atomic-thick layer of carbon atoms. In this work they show at least 16 possible memory states, as opposed to the two in most oxide-based memristors, or memory resistors [emphasis mine].

“What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors [emphasis mine],” Das said.

The team thinks that ramping up this technology to a commercial scale is feasible. With many of the largest semiconductor companies actively pursuing neuromorphic computing, Das believes they will find this work of interest.

Here’s a link to and a citation for the paper,

Graphene memristive synapses for high precision neuromorphic computing by Thomas F. Schranghamer, Aaryan Oberoi & Saptarshi Das. Nature Communications volume 11, Article number: 5474 (2020) DOI: https://doi.org/10.1038/s41467-020-19203-z Published: 29 October 2020

This paper is open access.

Brain cell-like nanodevices

Given R. Stanley Williams’s presence on the author list, it’s a bit surprising that there’s no mention of memristors. If I read the signs rightly, the interest is shifting, in some cases, from the memristor to a more comprehensive grouping of circuit elements referred to as ‘neuristors’ or, more likely, ‘nanocircuit elements’ in the effort to achieve brainlike (neuromorphic) computing (engineering). (Williams was the leader of the HP Labs team that offered proof and more of the memristor’s existence, which I mentioned here in an April 5, 2010 posting. There are many, many postings on this topic here; try ‘memristors’ or ‘brainlike computing’ for your search terms.)

A September 24, 2020 news item on ScienceDaily announces a recent development in the field of neuromorphic engineering,

In the September [2020] issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

A September 24, 2020 Texas A&M University news release (also on EurekAlert) by Vandana Suresh, which originated the news item, provides some context for the research,

Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
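Kumar’s 43 billion checks out if you count it the standard traveling-salesman way: distinct round trips through n locations number (n − 1)!/2 (fix the starting point, count each direction once), which for 15 stops gives about 43.6 billion. Here’s the arithmetic (my check, not from the paper):

```python
# Checking the "43 billion routes" figure: distinct closed tours through
# n locations, counted the standard traveling-salesman way.
from math import factorial

def distinct_tours(n: int) -> int:
    """Fix the starting point and count each direction once: (n - 1)! / 2."""
    return factorial(n - 1) // 2

assert distinct_tours(15) == 43_589_145_600   # ~43.6 billion, matching the quote
assert distinct_tours(4) == 3                 # sanity check: only 3 distinct tours
```

The count grows factorially, which is why a brute-force search becomes hopeless so quickly and why optimization problems like this are a natural target for neuromorphic hardware.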

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. But when the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric currents, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
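The thermal feedback loop just described can be caricatured in a few lines. This is my own toy relaxation oscillator, not the authors’ device model (the constants are illustrative): temperature climbs while the device blocks current, the insulator-to-metal transition fires a “spike,” and cooling resets the cycle — with stronger drive producing faster firing, as in the sustained-firing regime the researchers report.

```python
# Toy relaxation oscillator mimicking the NbO2 insulator<->conductor cycling.
# Constants are illustrative; this is not the paper's device physics.

T_AMBIENT, T_CRIT, T_RESET = 300.0, 340.0, 310.0  # kelvin, illustrative

def spike_train(v_in, steps=2000, dt=1.0):
    """Return the time steps at which the device fires a current spike."""
    temp, conducting, spikes = T_AMBIENT, False, []
    for t in range(steps):
        if not conducting:
            temp += 0.1 * v_in * dt        # Joule heating while insulating
            if temp >= T_CRIT:
                conducting = True          # insulator -> metal transition
                spikes.append(t)           # current pulse = one "spike"
        else:
            temp -= 1.0 * dt               # conducting: heat escapes, it cools
            if temp <= T_RESET:
                conducting = False         # metal -> insulator, cycle repeats
    return spikes

low, high = spike_train(v_in=1.0), spike_train(v_in=2.0)
assert len(high) > len(low) > 1   # stronger drive -> faster periodic firing
```

Changing the drive voltage changes the firing rate and pattern, which is the one-device voltage-controlled behavior the next quote describes.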

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”

To evaluate if their synthetic neurons [neuristor?] can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

If you look at the news release on EurekAlert, you’ll see this informative image is titled: NeuristerSchematic [sic],

Caption: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Texas A&M University College of Engineering

(On the university website, the image is credited to Rachel Barton.) You can see one of the first mentions of a ‘neuristor’ here in an August 24, 2017 posting.

Here’s a link to and a citation for the paper,

Third-order nanocircuit elements for neuromorphic engineering by Suhas Kumar, R. Stanley Williams & Ziwen Wang. Nature volume 585, pages 518–523 (2020) DOI: https://doi.org/10.1038/s41586-020-2735-5 Published: 23 September 2020 Issue Date: 24 September 2020

This paper is behind a paywall.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020) DOI: https://doi.org/10.1038/s41928-020-0412-1 Published online: 25 May 2020 Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Activities in the field of artificial intelligence especially, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
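Cuniberti’s description — slow-moving ions whose delayed return produces hysteresis, with repeated excitation strengthening the connection — can be sketched as a state variable that charges quickly and relaxes slowly. This is my own toy model with made-up constants, not the group’s simulation:

```python
# Toy neurotransistor: ion displacement decays slowly (hysteresis), so closely
# spaced excitations accumulate and the channel opens more easily.
# DECAY and GAIN are illustrative, not measured values.

DECAY = 0.95      # slow ionic relaxation per time step
GAIN = 1.0        # displacement added per excitation pulse

def drive(pulse_times, steps=100):
    """Excite the device at the given time steps; return the state over time."""
    state, trace = 0.0, []
    for t in range(steps):
        state *= DECAY                   # ions drift back, but slowly
        if t in pulse_times:
            state += GAIN                # excitation displaces ions
        trace.append(state)
    return trace

rare = drive({10, 50, 90})               # isolated pulses mostly decay away
frequent = drive({10, 12, 14, 16, 18, 20})
# Closely spaced pulses ride on the residue of earlier ones: the device "learns".
assert max(frequent) > max(rare)
```

The more an individual transistor is excited, the higher the accumulated state and the sooner it conducts — which is the “strengthening the connection” behavior in the quote above.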

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

Improving neuromorphic devices with ion conducting polymer

A July 1, 2020 news item on ScienceDaily announces work which researchers are hopeful will allow them to exert more control over neuromorphic devices’ speed of response,

“Neuromorphic” refers to mimicking the behavior of brain neural cells. When one speaks of neuromorphic computers, they are talking about making computers think and process more like human brains: operating at high speed with low energy consumption.

Despite a growing interest in polymer-based neuromorphic devices, researchers have yet to establish an effective method for controlling the response speed of devices. Researchers from Tohoku University and the University of Cambridge, however, have overcome this obstacle through mixing the polymers PSS-Na and PEDOT:PSS, discovering that adding an ion conducting polymer enhances neuromorphic device response time.

A June 24, 2020 Tohoku University press release (also on EurekAlert), which originated the news item, provides a few more technical details,

Polymers are materials composed of long molecular chains and play a fundamental role in modern life, from the rubber in tires, to water bottles, to polystyrene. Mixing polymers together results in the creation of new materials with their own distinct physical properties.

Most studies on neuromorphic devices based on polymers focus exclusively on the application of PEDOT:PSS, a mixed conductor that transports both electrons and ions. PSS-Na, on the other hand, transports ions only. By blending these two polymers, the researchers could enhance the ion diffusivity in the active layer of the device. Their measurements confirmed an improvement in device response time, achieving up to a five-fold shortening. The results also proved how closely related response time is to the diffusivity of ions in the active layer.
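The link between response time and ion diffusivity follows from the standard diffusion relation τ ≈ L²/D for a film of thickness L: the response time scales inversely with diffusivity, so a five-fold speedup implies roughly a five-fold diffusivity gain. A quick sketch with made-up numbers (the film thickness and diffusivities below are my illustrative assumptions, not the paper’s measurements):

```python
# Characteristic ion transit time across a film: tau ~ L^2 / D.
# Thickness and diffusivity values below are illustrative assumptions only.

def response_time(thickness_m: float, diffusivity_m2_s: float) -> float:
    """Diffusion time scale for ions crossing a film of the given thickness."""
    return thickness_m ** 2 / diffusivity_m2_s

L = 100e-9                               # hypothetical 100 nm active layer
tau_plain = response_time(L, 1e-14)      # hypothetical PEDOT:PSS-only diffusivity
tau_blend = response_time(L, 5e-14)      # hypothetical 5x diffusivity with PSS-Na

assert abs(tau_plain / tau_blend - 5.0) < 1e-9   # 5x diffusivity -> 5x faster
```

That quadratic dependence on thickness is also why device geometry, not just material choice, matters for response speed.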

“Our study paves the way for a deeper understanding behind the science of conducting polymers,” explains co-author Shunsuke Yamamoto from the Department of Biomolecular Engineering at Tohoku University’s Graduate School of Engineering. “Moving forward, it may be possible to create artificial neural networks composed of multiple neuromorphic devices,” he adds.

Here’s a link to and a citation for the paper,

Controlling the Neuromorphic Behavior of Organic Electrochemical Transistors by Blending Mixed and Ion Conductors by Shunsuke Yamamoto and George G. Malliaras. ACS [American Chemical Society] Appl. Electron. Mater. 2020, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsaelm.0c00203 Publication Date:June 15, 2020 Copyright © 2020 American Chemical Society

This paper is behind a paywall.

Energy-efficient artificial synapse

This is the second neuromorphic computing chip story from MIT this summer in what has turned out to be a bumper crop of research announcements in this field. The first MIT synapse story was featured in a June 16, 2020 posting. Now, there’s a second and completely different team announcing results for their artificial brain synapse work in a June 19, 2020 news item on Nanowerk (Note: A link has been removed),

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.

Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT [Massachusetts Institute of Technology] team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.

The findings are described in the journal Nature Communications (“Protonic solid-state electrochemical synapse for physical neural networks”), in a paper by MIT professors Bilge Yildiz, Ju Li, and Jesús del Alamo, and nine others at MIT and Brookhaven National Laboratory. The first author of the paper is Xiahui Yao, a former MIT postdoc now working on energy storage at GRU Energy Lab.

That description of the work is one pretty much every team working on developing memristive (neuromorphic) chips could use.

On other fronts, the team has produced a very attractive illustration accompanying this research (aside: Is it my imagination or has there been a serious investment in the colour pink and other pastels for science illustrations?),

A new system developed at MIT and Brookhaven National Lab could provide a faster, more reliable and much more energy-efficient approach to physical neural networks, by using analog ionic-electronic devices to mimic synapses. Courtesy of the researchers

A June 19, 2020 MIT news release, which originated the news item, provides more insight into this specific piece of research (hint: it’s about energy use and repeatability),

Neural networks attempt to simulate the way learning takes place in the brain, which is based on the gradual strengthening or weakening of the connections between neurons, known as synapses. The core component of this physical neural network is the resistive switch, whose electronic conductance can be controlled electrically. This control, or modulation, emulates the strengthening and weakening of synapses in the brain.

In neural networks using conventional silicon microchip technology, the simulation of these synapses is a very energy-intensive process. To improve efficiency and enable more ambitious neural network goals, researchers in recent years have been exploring a number of physical devices that could more directly mimic the way synapses gradually strengthen and weaken during learning and forgetting.

Most candidate analog resistive devices so far for such simulated synapses have either been very inefficient, in terms of energy use, or performed inconsistently from one device to another or one cycle to the next. The new system, the researchers say, overcomes both of these challenges. “We’re addressing not only the energy challenge, but also the repeatability-related challenge that is pervasive in some of the existing concepts out there,” says Yildiz, who is a professor of nuclear science and engineering and of materials science and engineering.

“I think the bottleneck today for building [neural network] applications is energy efficiency. It just takes too much energy to train these systems, particularly for applications on the edge, like autonomous cars,” says del Alamo, who is the Donner Professor in the Department of Electrical Engineering and Computer Science. Many such demanding applications are simply not feasible with today’s technology, he adds.

The resistive switch in this work is an electrochemical device, which is made of tungsten trioxide (WO3) and works in a way similar to the charging and discharging of batteries. Ions, in this case protons, can migrate into or out of the crystalline lattice of the material, explains Yildiz, depending on the polarity and strength of an applied voltage. These changes remain in place until altered by a reverse applied voltage — just as the strengthening or weakening of synapses does.

“The mechanism is similar to the doping of semiconductors,” says Li, who is also a professor of nuclear science and engineering and of materials science and engineering. In that process, the conductivity of silicon can be changed by many orders of magnitude by introducing foreign ions into the silicon lattice. “Traditionally those ions were implanted at the factory,” he says, but with the new device, the ions are pumped in and out of the lattice in a dynamic, ongoing process. The researchers can control how much of the “dopant” ions go in or out by controlling the voltage, and “we’ve demonstrated a very good repeatability and energy efficiency,” he says.
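Functionally, what the MIT team describes is a nonvolatile analog weight: voltage pulses pump protons in or out, nudging the conductance up or down, and the state holds between writes. Here’s my toy sketch of that behavior (the step size and bounds are illustrative, not fitted to the WO3 device):

```python
# Toy protonic synapse: voltage pulses pump protons in or out, nudging the
# conductance up or down; the state is retained between pulses (nonvolatile).
# G_MIN, G_MAX and STEP are illustrative, not fitted to the WO3 device.

G_MIN, G_MAX, STEP = 0.0, 1.0, 0.01

class ProtonicSynapse:
    def __init__(self):
        self.g = 0.5                       # normalized conductance (the "weight")

    def pulse(self, polarity: int):
        """+1 pumps protons in (potentiation), -1 pumps them out (depression)."""
        self.g = min(G_MAX, max(G_MIN, self.g + polarity * STEP))

    def read(self) -> float:
        return self.g                      # no drift between writes

s = ProtonicSynapse()
for _ in range(20):
    s.pulse(+1)                            # 20 potentiating pulses
assert abs(s.read() - 0.7) < 1e-9
for _ in range(10):
    s.pulse(-1)                            # 10 depressing pulses
assert abs(s.read() - 0.6) < 1e-9          # symmetric, repeatable updates
```

The repeatability claim in the quote corresponds to every pulse producing the same-sized, reversible step — the property that analog resistive devices have historically struggled to deliver.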

Yildiz adds that this process is “very similar to how the synapses of the biological brain work. There, we’re not working with protons, but with other ions such as calcium, potassium, magnesium, etc., and by moving those ions you actually change the resistance of the synapses, and that is an element of learning.” The process taking place in the tungsten trioxide in their device is similar to the resistance modulation taking place in biological synapses, she says.

“What we have demonstrated here,” Yildiz says, “even though it’s not an optimized device, gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain.” Trying to accomplish the same task with conventional CMOS type semiconductors would take a million times more energy, she says.

The materials used in the demonstration of the new device were chosen for their compatibility with present semiconductor manufacturing systems, according to Li. But they include a polymer material that limits the device’s tolerance for heat, so the team is still searching for other variations of the device’s proton-conducting membrane and better ways of encapsulating its hydrogen source for long-term operations.

“There’s a lot of fundamental research to be done at the materials level for this device,” Yildiz says. Ongoing research will include “work on how to integrate these devices with existing CMOS transistors” adds del Alamo. “All that takes time,” he says, “and it presents tremendous opportunities for innovation, great opportunities for our students to launch their careers.”

Coincidentally or not, a University of Massachusetts at Amherst team announced memristor voltage use comparable to human brain voltage use (see my June 15, 2020 posting); plus, there’s a team at Stanford University touting their low-energy biohybrid synapse in a XXX posting. (June 2020 has been a particularly busy month here for ‘artificial brain’ or ‘memristor’ stories.)

Getting back to this latest MIT research, here’s a link to and a citation for the paper,

Protonic solid-state electrochemical synapse for physical neural networks by Xiahui Yao, Konstantin Klyukin, Wenjie Lu, Murat Onen, Seungchan Ryu, Dongha Kim, Nicolas Emond, Iradwikanari Waluyo, Adrian Hunt, Jesús A. del Alamo, Ju Li & Bilge Yildiz. Nature Communications volume 11, Article number: 3134 (2020) DOI: https://doi.org/10.1038/s41467-020-16866-6 Published: 19 June 2020

This paper is open access.

A biohybrid artificial synapse that can communicate with living cells

As I noted in my June 16, 2020 posting, we may have more than one kind of artificial brain in our future. This latest work features a biohybrid. From a June 15, 2020 news item on ScienceDaily,

In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].

Now, in a paper published June 15 [2020] in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.

A June 15, 2020 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into this recent work,

How neurons learn

The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.
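The learning rule sketched in those paragraphs (a transient chemical signal whose effect is partly retained in the electrode's conductive state) can be illustrated with a toy model. Every name and number below is a hypothetical illustration of the idea, not anything taken from the paper:

```python
# Toy model of the behaviour described above: a neurotransmitter pulse
# transiently shifts the electrode's conductive state, and a fraction of
# that shift is retained, so repeated activity accumulates (the "learning").
# All parameters and values are invented for illustration.

class ToyBiohybridSynapse:
    def __init__(self, conductance=1.0, retention=0.3):
        self.conductance = conductance   # conductive state of the second electrode
        self.retention = retention       # fraction of each transient change that persists

    def neurotransmitter_pulse(self, amount):
        """One chemical pulse; part of its effect is kept after the pulse ends."""
        self.conductance += self.retention * amount
        return self.conductance

syn = ToyBiohybridSynapse()
for _ in range(5):
    syn.neurotransmitter_pulse(0.1)
# Repeated pulses leave the synapse in a measurably different state,
# mimicking the strengthening of a frequently used connection.
```

Because the state update and the "storage" happen in the same step, the sketch also shows why this kind of device computes and remembers in one action, unlike a processor writing results out to separate memory.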

To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.

“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”

A first step

This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.

Here’s a link to and a citation for the paper,

A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0703-y Published: 15 June 2020

This paper is behind a paywall.

A tangle of silver nanowires for brain-like action

I’ve been meaning to get to this news item from late 2019 as it features work from a team that I’ve been following for a number of years now. James Gimzewski, first mentioned here in an October 17, 2011 posting, has been working on neuromorphic computing with researchers at the University of California at Los Angeles (UCLA) and at Japan’s National Institute for Materials Science (NIMS).

This particular research had a protracted rollout: the paper was published in October 2019, and the last news item about it appeared in mid-December 2019.

A December 17, 2019 news item on Nanowerk was the first to alert me to this new work (Note: A link has been removed),

UCLA scientists James Gimzewski and Adam Stieg are part of an international research team that has taken a significant stride toward the goal of creating thinking machines.

Led by researchers at Japan’s National Institute for Materials Science, the team created an experimental device that exhibited characteristics analogous to certain behaviors of the brain — learning, memorization, forgetting, wakefulness and sleep. The paper, published in Scientific Reports (“Emergent dynamics of neuromorphic nanowire networks”), describes a network in a state of continuous flux.

A December 16, 2019 UCLA news release, which originated the news item, offers more detail (Note: A link has been removed),

“This is a system between order and chaos, on the edge of chaos,” said Gimzewski, a UCLA distinguished professor of chemistry and biochemistry, a member of the California NanoSystems Institute at UCLA and a co-author of the study. “The way that the device constantly evolves and shifts mimics the human brain. It can come up with different types of behavior patterns that don’t repeat themselves.”

The research is one early step along a path that could eventually lead to computers that physically and functionally resemble the brain — machines that may be capable of solving problems that contemporary computers struggle with, and that may require much less power than today’s computers do.

The device the researchers studied is made of a tangle of silver nanowires — with an average diameter of just 360 nanometers. (A nanometer is one-billionth of a meter.) The nanowires were coated in an insulating polymer about 1 nanometer thick. Overall, the device itself measured about 10 square millimeters — so small that it would take 25 of them to cover a dime.

Allowed to randomly self-assemble on a silicon wafer, the nanowires formed highly interconnected structures that are remarkably similar to those that form the neocortex, the part of the brain involved with higher functions such as language, perception and cognition.

One trait that differentiates the nanowire network from conventional electronic circuits is that electrons flowing through them cause the physical configuration of the network to change. In the study, electrical current caused silver atoms to migrate from within the polymer coating and form connections where two nanowires overlap. The system had about 10 million of these junctions, which are analogous to the synapses where brain cells connect and communicate.

The researchers attached two electrodes to the brain-like mesh to profile how the network performed. They observed “emergent behavior,” meaning that the network displayed characteristics as a whole that could not be attributed to the individual parts that make it up. This is another trait that makes the network resemble the brain and sets it apart from conventional computers.

After current flowed through the network, the connections between nanowires persisted for as much as one minute in some cases, which resembled the process of learning and memorization in the brain. Other times, the connections shut down abruptly after the charge ended, mimicking the brain’s process of forgetting.
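That persist-then-fade behaviour maps naturally onto simple exponential relaxation: a strengthened junction decays back toward its resting state once the current stops. The sketch below is my own illustration of the idea; the decay constant is invented, not a measured value from the study:

```python
import math

# Illustration of the "learning then forgetting" described above: a current
# pulse boosts a nanowire junction's strength, and the boost relaxes
# exponentially after the pulse ends. The time constant is a made-up
# stand-in for the roughly one-minute persistence reported.

def junction_strength(s0, tau, t):
    """Strength of a boosted junction, t seconds after the pulse ends."""
    return s0 * math.exp(-t / tau)

# A junction boosted to strength 1.0 with a ~15 s decay constant has
# largely "forgotten" its state after a minute:
residual = junction_strength(1.0, tau=15.0, t=60.0)
```

An abrupt shutdown of a connection, the other behaviour the team observed, would correspond to a much shorter time constant for that junction rather than a different mechanism in this toy picture.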

In other experiments, the research team found that with less power flowing in, the device exhibited behavior that corresponds to what neuroscientists see when they use functional MRI scanning to take images of the brain of a sleeping person. With more power, the nanowire network’s behavior corresponded to that of the wakeful brain.

The paper is the latest in a series of publications examining nanowire networks as a brain-inspired system, an area of research that Gimzewski helped pioneer along with Stieg, a UCLA research scientist and an associate director of CNSI.

“Our approach may be useful for generating new types of hardware that are both energy-efficient and capable of processing complex datasets that challenge the limits of modern computers,” said Stieg, a co-author of the study.

The borderline-chaotic activity of the nanowire network resembles not only signaling within the brain but also other natural systems such as weather patterns. That could mean that, with further development, future versions of the device could help model such complex systems.

In other experiments, Gimzewski and Stieg already have coaxed a silver nanowire device to successfully predict statistical trends in Los Angeles traffic patterns based on previous years’ traffic data.

Because of their similarities to the inner workings of the brain, future devices based on nanowire technology could also demonstrate energy efficiency like the brain’s own processing. The human brain operates on power roughly equivalent to what’s used by a 20-watt incandescent bulb. By contrast, computer servers where work-intensive tasks take place — from training for machine learning to executing internet searches — can use the equivalent of many households’ worth of energy, with the attendant carbon footprint.

“In our studies, we have a broader mission than just reprogramming existing computers,” Gimzewski said. “Our vision is a system that will eventually be able to handle tasks that are closer to the way the human being operates.”

The study’s first author, Adrian Diaz-Alvarez, is from the International Center for Material Nanoarchitectonics at Japan’s National Institute for Materials Science. Co-authors include Tomonobu Nakayama and Rintaro Higuchi, also of NIMS; and Zdenka Kuncic at the University of Sydney in Australia.

Caption: (a) Micrograph of the neuromorphic network fabricated by this research team. The network consists of numerous junctions between nanowires, which operate as synaptic elements. When voltage is applied to the network (between the green probes), current pathways (orange) are formed in the network. (b) A human brain and one of its neuronal networks. The brain is known to have a complex network structure and to operate by means of electrical signal propagation across the network. Credit: NIMS

A November 11, 2019 National Institute for Materials Science (Japan) press release (also on EurekAlert but dated December 25, 2019) first announced the news,

An international joint research team led by NIMS succeeded in fabricating a neuromorphic network composed of numerous metallic nanowires. Using this network, the team was able to generate electrical characteristics similar to those associated with higher order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert and returning to calm. The team then clarified the mechanisms that induced these electrical characteristics.

The development of artificial intelligence (AI) techniques has been rapidly advancing in recent years and has begun impacting our lives in various ways. Although AI processes information in a manner similar to the human brain, the mechanisms by which human brains operate are still largely unknown. Fundamental brain components, such as neurons and the junctions between them (synapses), have been studied in detail. However, many questions concerning the brain as a collective whole need to be answered. For example, we still do not fully understand how the brain performs such functions as memorization, learning and forgetting, and how the brain becomes alert and returns to calm. In addition, live brains are difficult to manipulate in experimental research. For these reasons, the brain remains a “mysterious organ.” A different approach to brain research, in which materials and systems capable of performing brain-like functions are created and their mechanisms are investigated, may be effective in identifying new applications of brain-like information processing and advancing brain science.

The joint research team recently built a complex brain-like network by integrating numerous silver (Ag) nanowires coated with a polymer (PVP) insulating layer approximately 1 nanometer in thickness. A junction between two nanowires forms a variable resistive element (i.e., a synaptic element) that behaves like a neuronal synapse. This nanowire network, which contains a large number of intricately interacting synaptic elements, forms a “neuromorphic network”. When a voltage was applied to the neuromorphic network, it appeared to “struggle” to find optimal current pathways (i.e., the most electrically efficient pathways). The research team measured the processes of current pathway formation, retention and deactivation while electric current was flowing through the network and found that these processes always fluctuate as they progress, similar to the human brain’s memorization, learning, and forgetting processes. The observed temporal fluctuations also resemble the processes by which the brain becomes alert or returns to calm. Brain-like functions simulated by the neuromorphic network were found to occur as the huge number of synaptic elements in the network collectively work to optimize current transport, in other words, as a result of self-organized and emergent dynamic processes.

The research team is currently developing a brain-like memory device using the neuromorphic network material. The team intends to design the memory device to operate using fundamentally different principles than those used in current computers. For example, while computers are currently designed to spend as much time and electricity as necessary in pursuit of absolutely optimum solutions, the new memory device is intended to make a quick decision within particular limits even though the solution generated may not be absolutely optimum. The team also hopes that this research will facilitate understanding of the brain’s information processing mechanisms.

This project was carried out by an international joint research team led by Tomonobu Nakayama (Deputy Director, International Center for Materials Nanoarchitectonics (WPI-MANA), NIMS), Adrian Diaz Alvarez (Postdoctoral Researcher, WPI-MANA, NIMS), Zdenka Kuncic (Professor, School of Physics, University of Sydney, Australia) and James K. Gimzewski (Professor, California NanoSystems Institute, University of California Los Angeles, USA).

Here at last is a link to and a citation for the paper,

Emergent dynamics of neuromorphic nanowire networks by Adrian Diaz-Alvarez, Rintaro Higuchi, Paula Sanz-Leon, Ido Marcus, Yoshitaka Shingaya, Adam Z. Stieg, James K. Gimzewski, Zdenka Kuncic & Tomonobu Nakayama. Scientific Reports volume 9, Article number: 14920 (2019) DOI: https://doi.org/10.1038/s41598-019-51330-6 Published: 17 October 2019

This paper is open access.

Brain-inspired electronics with organic memristors for wearable computing

I went down a rabbit hole while trying to figure out the difference between ‘organic’ memristors and standard memristors. I have put the results of my investigation at the end of this post. First, there’s the news.

An April 21, 2020 news item on ScienceDaily explains why researchers are so focused on memristors and brainlike computing,

The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics and bring forth the fourth Industrial Revolution. The pressing question for many researchers is how to handle this technological revolution.

“It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets,” said Thirumalai Venkatesan, one of the authors of a paper published in Applied Physics Reviews, from AIP Publishing.

“Today’s computing is way too energy-intensive to handle big data. We need to rethink our approaches to computation on all levels: materials, devices and architecture that can enable ultralow energy computing.”

An April 21, 2020 American Institute of Physics (AIP) news release (also on EurekAlert), which originated the news item, describes the authors’ approach to the problems with organic memristors,

Brain-inspired electronics with organic memristors could offer a functionally promising and cost-effective platform, according to Venkatesan. Memristive devices are electronic devices with an inherent memory that are capable of both storing data and performing computation. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.

Until now, oxides have been the leading candidate as the optimum material for memristors. Different material systems have been proposed but none have been successful so far.

“Over the last 20 years, there have been several attempts to come up with organic memristors, but none of those have shown any promise,” said Sreetosh Goswami, lead author on the paper. “The primary reason behind this failure is their lack of stability, reproducibility and ambiguity in mechanistic understanding. At a device level, we are now able to solve most of these problems.”

This new generation of organic memristors is developed based on metal azo complex devices, which are the brainchild of Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science in Kolkata and another author on the paper.

“In thin films, the molecules are so robust and stable that these devices can eventually be the right choice for many wearable and implantable technologies or a body net, because these could be bendable and stretchable,” said Sreebrata Goswami. A body net is a series of wireless sensors that stick to the skin and track health.

The next challenge will be to produce these organic memristors at scale, said Venkatesan.

“Now we are making individual devices in the laboratory. We need to make circuits for large-scale functional implementation of these devices.”

Caption: The device structure at a molecular level. The gold nanoparticles on the bottom electrode enhance the field, enabling an ultra-low energy operation of the molecular device. Credit: Sreetosh Goswami, Sreebrata Goswami and Thirumalai Venky Venkatesan

Here’s a link to and a citation for the paper,

An organic approach to low energy memory and brain inspired electronics by Sreetosh Goswami, Sreebrata Goswami, and T. Venkatesan. Applied Physics Reviews 7, 021303 (2020) DOI: https://doi.org/10.1063/1.5124155

This paper is open access.

Basics about memristors and organic memristors

This undated article on Nanowerk provides a relatively complete and technical description of memristors in general (Note: A link has been removed),

A memristor (named as a portmanteau of memory and resistor) is a non-volatile electronic memory device that was first theorized by Leon Ong Chua in 1971 as the fourth fundamental two-terminal circuit element following the resistor, the capacitor, and the inductor (IEEE Transactions on Circuit Theory, “Memristor-The missing circuit element”).

Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function). Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if the device loses power.

However, it was only almost 40 years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behavior. …
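For readers who want something concrete, the "resistance that can be programmed and then stays put" behaviour is commonly demonstrated with the linear ion-drift model associated with that 2008 HP work: the resistance depends on an internal state (the position of the boundary between doped and undoped regions), and that state drifts with the current. The sketch below uses that textbook model with illustrative, not device-accurate, parameter values:

```python
# Linear ion-drift memristor sketch (the model popularized by the 2008
# HP Labs work). Parameter values are illustrative only.

R_ON, R_OFF = 100.0, 16000.0   # fully doped / undoped resistances (ohms)
D = 10e-9                      # film thickness (m)
MU = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, steps=1000, dt=1e-3, w=0.5 * D):
    """Apply a constant voltage for steps*dt seconds; return final state and resistance."""
    for _ in range(steps):
        x = w / D
        r = R_ON * x + R_OFF * (1.0 - x)   # doped and undoped regions in series
        i = voltage / r                    # current through the device
        w += MU * R_ON / D * i * dt        # linear drift of the doped boundary
        w = min(max(w, 0.0), D)            # boundary stays inside the film
    x = w / D
    return w, R_ON * x + R_OFF * (1.0 - x)

w_set, r_set = simulate(+1.0)   # sustained positive bias drives the device to low resistance
```

The "memory" half of memory-resistor is visible in the model: remove the bias (voltage 0) and the state `w`, hence the resistance, stays exactly where it was.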

The article on Nanowerk includes an embedded video presentation on memristors given by Stanley Williams (also known as R. Stanley Williams).

Mention of an ‘organic’ memristor can be found in an October 31, 2017 article by Ryan Whitwam,

The memristor is composed of the transition metal ruthenium complexed with “azo-aromatic ligands.” [emphasis mine] The theoretical work enabling this material was performed at Yale, and the organic molecules were synthesized at the Indian Association for the Cultivation of Sciences. …

I highlighted ‘ligands’ because that appears to be the difference. However, Wikipedia has more than one entry for ‘ligand’.

First, there’s the Ligand (biochemistry) entry (Note: Links have been removed),

In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. …

Then, there’s the Ligand entry,

In coordination chemistry, a ligand[help 1] is an ion or molecule (functional group) that binds to a central metal atom to form a coordination complex …

Finally, there’s the Ligand (disambiguation) entry (Note: Links have been removed),

  • Ligand, an atom, ion, or functional group that donates one or more of its electrons through a coordinate covalent bond to one or more central atoms or ions
  • Ligand (biochemistry), a substance that binds to a protein
  • a ‘guest’ in host–guest chemistry

I did take a look at the paper and did not see any references to proteins or other biomolecules that I could recognize as such. I’m not sure why the researchers describe their device as an ‘organic’ memristor, but this may reflect a shortcoming in the definitions I have found, or in my reading of the paper, rather than an error on their part.

Hopefully, more research will be forthcoming and it will be possible to better understand the terminology.

New design directions to increase variety, efficiency, selectivity and reliability for memristive devices

A May 11, 2020 news item on ScienceDaily provides a description of the current ‘memristor scene’ along with an announcement about a piece of recent research,

Scientists around the world are intensively working on memristive devices, which are capable of extremely low-power operation and behave similarly to neurons in the brain. Researchers from the Jülich Aachen Research Alliance (JARA) and the German technology group Heraeus have now discovered how to systematically control the functional behaviour of these elements. The smallest differences in material composition turn out to be crucial: differences so small that until now experts had failed to notice them. The researchers’ design directions could help to increase variety, efficiency, selectivity and reliability for memristive technology-based applications, for example for energy-efficient, non-volatile storage devices or neuro-inspired computers.

Memristors are considered a highly promising alternative to conventional nanoelectronic elements in computer chips. Because of their advantageous functionalities, their development is being eagerly pursued by many companies and research institutions around the world. The Japanese corporation NEC already installed the first prototypes in space satellites back in 2017. Many other leading companies such as Hewlett Packard, Intel, IBM, and Samsung are working to bring innovative types of computer and storage devices based on memristive elements to market.

Fundamentally, memristors are simply “resistors with memory,” in which high resistance can be switched to low resistance and back again. This means in principle that the devices are adaptive, similar to a synapse in a biological nervous system. “Memristive elements are considered ideal candidates for neuro-inspired computers modelled on the brain, which are attracting a great deal of interest in connection with deep learning and artificial intelligence,” says Dr. Ilia Valov of the Peter Grünberg Institute (PGI-7) at Forschungszentrum Jülich.

In the latest issue of the open access journal Science Advances, he and his team describe how the switching and neuromorphic behaviour of memristive elements can be selectively controlled. According to their findings, the crucial factor is the purity of the switching oxide layer. “Depending on whether you use a material that is 99.999999 % pure, and whether you introduce one foreign atom into ten million atoms of pure material or into one hundred atoms, the properties of the memristive elements vary substantially,” says Valov.

A May 11, 2020 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, delves into the theme of increasing control over memristive systems,

This effect had so far been overlooked by experts. It can be used very specifically for designing memristive systems, in a similar way to doping semiconductors in information technology. “The introduction of foreign atoms allows us to control the solubility and transport properties of the thin oxide layers,” explains Dr. Christian Neumann of the technology group Heraeus. He has been contributing his materials expertise to the project ever since the initial idea was conceived in 2015.

“In recent years there has been remarkable progress in the development and use of memristive devices, however that progress has often been achieved on a purely empirical basis,” according to Valov. Using the insights that his team has gained, manufacturers could now methodically develop memristive elements selecting the functions they need. The higher the doping concentration, the slower the resistance of the elements changes as the number of incoming voltage pulses increases and decreases, and the more stable the resistance remains. “This means that we have found a way for designing types of artificial synapses with differing excitability,” explains Valov.

Design specification for artificial synapses

The brain’s ability to learn and retain information can largely be attributed to the fact that the connections between neurons are strengthened when they are frequently used. Memristive devices, of which there are different types such as electrochemical metallization cells (ECMs) or valence change memory cells (VCMs), behave similarly. When these components are used, the conductivity increases as the number of incoming voltage pulses increases. The changes can also be reversed by applying voltage pulses of the opposite polarity.
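A toy version of that pulse-programmed behaviour can make the picture concrete: each voltage pulse nudges the element's conductance up, reversed polarity nudges it back down, and a doping-dependent factor stands in for the trend the researchers report (heavier doping, slower and steadier change). The rate formula and all numbers are made up for illustration:

```python
# Sketch of pulse-driven memristive plasticity as described above.
# The doping_ppm -> rate mapping is an invented illustration of the
# reported trend, not the paper's actual characterization.

def apply_pulses(g, n_pulses, polarity=+1, doping_ppm=0):
    """Return conductance after n identical voltage pulses of the given polarity."""
    rate = 0.05 / (1.0 + doping_ppm / 1000.0)   # heavier doping -> slower response
    for _ in range(n_pulses):
        g = min(max(g + polarity * rate, 0.0), 1.0)   # conductance stays in [0, 1]
    return g

pure = apply_pulses(0.5, 5, doping_ppm=0)        # undoped: fast potentiation
doped = apply_pulses(0.5, 5, doping_ppm=10000)   # heavily doped: sluggish change
```

Applying the same number of pulses with `polarity=-1` walks the conductance back down, mirroring the reversal by opposite-polarity pulses that the press release describes.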

The JARA researchers conducted their systematic experiments on ECMs, which consist of a copper electrode, a platinum electrode, and a layer of silicon dioxide between them. Thanks to the cooperation with Heraeus researchers, the JARA scientists had access to different types of silicon dioxide: one with a purity of 99.999999 % – also called 8N silicon dioxide – and others containing 100 to 10,000 ppm (parts per million) of foreign atoms. The precisely doped glass used in their experiments was specially developed and manufactured by quartz glass specialist Heraeus Conamic, which also holds the patent for the procedure. Copper and protons acted as mobile doping agents, while aluminium and gallium were used as non-volatile doping.

Synapses, the connections between neurons, have the ability to transmit signals with varying degrees of strength when they are excited by a quick succession of electrical impulses. One effect of this repeated activity is to increase the concentration of calcium ions, with the result that more neurotransmitters are emitted. Depending on the activity, other effects cause long-term structural changes, which impact the strength of the transmission for several hours, or potentially even for the rest of the person’s life. Memristive elements allow the strength of the electrical transmission to be changed in a similar way to synaptic connections, by applying a voltage. In electrochemical metallization cells (ECMs), a metallic filament develops between the two metal electrodes, thus increasing conductivity. Applying voltage pulses with reversed polarity causes the filament to shrink again until the cell reaches its initial high resistance state. Copyright: Forschungszentrum Jülich / Tobias Schlößer

Record switching time confirms theory

Based on their series of experiments, the researchers were able to show that the ECMs’ switching times change as the number of doping atoms changes. If the switching layer is made of 8N silicon dioxide, the memristive component switches in only 1.4 nanoseconds. To date, the fastest value ever measured for ECMs had been around 10 nanoseconds. By doping the oxide layer of the components with up to 10,000 ppm of foreign atoms, the switching time was prolonged into the range of milliseconds. “We can also theoretically explain our results. This is helping us to understand the physico-chemical processes on the nanoscale and apply this knowledge in practice,” says Valov. Based on generally applicable theoretical considerations, supported by experimental results, some also documented in the literature, he is convinced that the doping/impurity effect occurs and can be employed in all types of memristive elements.

Top: In memristive elements (ECMs) with an undoped, high-purity switching layer of silicon oxide (SiO2), copper ions can move very fast. A filament of copper atoms forms correspondingly fast on the platinum electrode. This increases the total device conductivity and, correspondingly, the capacity. Due to the high mobility of the ions, however, this filament is unstable at low forming voltages. Center: Gallium ions (Ga3+), which are introduced into the cell (non-volatile doping), bind copper ions (Cu2+) in the switching layer. The movement of the ions slows down, leading to longer switching times, but the filament, once formed, remains stable for longer. Bottom: Doping with aluminium ions (Al3+) slows down the process even more, since aluminium ions bind copper ions even more strongly than gallium ions. Filament growth is even slower, while at the same time the stability of the filament is further increased. Depending on the chemical properties of the introduced doping elements, memristive cells – the artificial synapses – can be created with tailor-made switching and neuromorphic properties. Copyright: Forschungszentrum Jülich / Tobias Schlößer

Here’s a link to and a citation for the paper,

Design of defect-chemical properties and device performance in memristive systems by M. Lübben, F. Cüppers, J. Mohr, M. von Witzleben, U. Breuer, R. Waser, C. Neumann, and I. Valov. Science Advances 08 May 2020: Vol. 6, no. 19, eaaz9079 DOI: 10.1126/sciadv.aaz9079

This paper is open access.

For anyone curious about the German technology group, Heraeus, there’s a fascinating history in its Wikipedia entry. The technology company was formally founded in 1851 but it can be traced back to the 17th century and the founding family’s apothecary.