Tag Archives: neuromorphic computing

Mechano-photonic artificial synapse is bio-inspired

The word ‘memristor’ usually pops up in research into artificial synapses, but not in this piece of research. I didn’t see any mention of the memristor in the paper’s references either, but I did find James Gimzewski from the University of California at Los Angeles (UCLA), whose research into brainlike (neuromorphic) computing is running parallel to, but separate from, the memristor research.

Dr. Thamarasee Jeewandara has written a March 25, 2021 article for phys.org about the latest neuromorphic computing research (Note: Links have been removed),

Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience and materials science in China and the US, presented a bioinspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team used an optoelectronic transistor made of a graphene/molybdenum disulphide (MoS2) heterostructure and an integrated triboelectric nanogenerator to compose the artificial synapse. They controlled the charge transfer/exchange in the heterostructure with triboelectric potential and readily modulated the optoelectronic synapse behaviors, including postsynaptic photocurrents, photosensitivity and photoconductivity. The mechano-photonic artificial synapse is a promising implementation to mimic the complex biological nervous system and promote the development of interactive artificial intelligence. The work is now published in Science Advances.

The human brain can integrate cognition, learning and memory tasks via auditory, visual, olfactory and somatosensory interactions. This process is difficult to mimic using conventional von Neumann architectures, which would require additional sophisticated functions. Brain-inspired neural networks are made of various synaptic devices that transmit and process information using synaptic weights. Emerging photonic synapses combine optical and electric neuromorphic modulation and computation to offer a favorable option with high bandwidth, fast speed and low cross-talk to significantly reduce power consumption. Biomechanical motions, including touch, eye blinking and arm waving, are other ubiquitous triggers or interactive signals for operating electronics during artificial synapse plasticization. In this work, Yu et al. presented a mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The device contained an optoelectronic transistor and an integrated triboelectric nanogenerator (TENG) in contact-separation mode. The mechano-optical artificial synapses have huge functional potential as interactive optoelectronic interfaces, synthetic retinas and intelligent robots. [emphasis mine]
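Before moving on, here’s a toy Python sketch of the basic idea as I understand it (the functional forms, constants and names such as triboelectric_potential are my own illustrative assumptions, not the authors’ model): the contact-separation displacement of the TENG sets an effective gate potential, which in turn scales a facilitating postsynaptic photocurrent,

```python
import numpy as np

# Toy model (not the authors' model): a mechano-photonic synapse in which
# a triboelectric displacement sets a gate potential that scales the
# postsynaptic photocurrent produced by a train of light pulses.

def triboelectric_potential(displacement_mm, v_max=5.0, d_sat=2.0):
    """Saturating gate potential from TENG displacement (hypothetical form)."""
    return v_max * np.tanh(displacement_mm / d_sat)

def postsynaptic_current(light_pulses, displacement_mm,
                         base_gain=1.0, decay=0.8):
    """Photocurrent that facilitates over successive light pulses and is
    scaled by the mechanically set triboelectric potential."""
    gain = base_gain * (1.0 + triboelectric_potential(displacement_mm))
    i_psc, trace = 0.0, []
    for pulse in light_pulses:                 # 1 = light on, 0 = off
        i_psc = decay * i_psc + gain * pulse   # facilitation plus decay
        trace.append(i_psc)
    return np.array(trace)

pulses = [1, 1, 1, 0, 0, 1]
for d in (0.0, 1.0, 3.0):   # increasing contact-separation displacement
    print(f"displacement {d} mm ->", np.round(postsynaptic_current(pulses, d), 2))
```

The point of the sketch is only the coupling: the same optical input produces a larger, longer-lasting synaptic response once the mechanical channel has ‘potentiated’ the device.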

As you can see, Jeewandara has written quite a technical summary of the work. Here’s an image from the Science Advances paper,

Fig. 1 Biological tactile/visual neurons and mechano-photonic artificial synapse. (A) Schematic illustrations of biological tactile/visual sensory system. (B) Schematic diagram of the mechano-photonic artificial synapse based on graphene/MoS2 (Gr/MoS2) heterostructure. (i) Top-view scanning electron microscope (SEM) image of the optoelectronic transistor; scale bar, 5 μm. The cyan area indicates the MoS2 flake, while the white strip is graphene. (ii) Illustration of charge transfer/exchange for Gr/MoS2 heterostructure. (iii) Output mechano-photonic signals from the artificial synapse for image recognition.

You can find the paper here,

Bioinspired mechano-photonic artificial synapse based on graphene/MoS2 heterostructure by Jinran Yu, Xixi Yang, Guoyun Gao, Yao Xiong, Yifei Wang, Jing Han, Youhui Chen, Huai Zhang, Qijun Sun and Zhong Lin Wang. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabd9117 DOI: 10.1126/sciadv.abd9117

This appears to be open access.

Memristor artificial neural network learning based on phase-change memory (PCM)

Caption: Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST. Credit: UNIST

I’m pretty sure that Professor Hongsik Jeong is the one on the right. He seems more relaxed, like he’s accustomed to posing for pictures highlighting his work.

Now on to the latest memristor news, which features the number 8.

For anyone unfamiliar with the term memristor, it’s a device (of sorts) that scientists involved in neuromorphic computing (computers that operate like human brains) are researching as they attempt to replicate brainlike processes in computers.

From a January 22, 2021 Ulsan National Institute of Science and Technology (UNIST) press release (also on EurekAlert but published March 15, 2021),

An international team of researchers affiliated with UNIST has unveiled a novel technology that could improve the learning ability of artificial neural networks (ANNs).

Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST, in collaboration with researchers from Tsinghua University in China, proposed a new learning method to improve the learning ability of ANN chips by challenging their instability.

Artificial neural network chips are capable of mimicking the structural, functional and biological features of human neural networks, and thus have been considered the technology of the future. In this study, the research team demonstrated the effectiveness of the proposed learning method by building phase change memory (PCM) memristor arrays that operate like ANNs. This learning method is also advantageous in that its learning ability can be improved without additional power consumption, since PCM undergoes a spontaneous resistance increase due to the structural relaxation after amorphization.

ANNs, like human brains, use less energy even when performing computation and memory tasks simultaneously. However, ANN chips, which integrate large numbers of physical devices, have the disadvantage of device errors. Existing learning methods assume a perfect, error-free chip, so the resulting learning ability is poor.

The research team developed a memristor artificial neural network learning method based on phase-change memory, reasoning that the real human brain does not require near-perfect operation either. This learning method incorporates the “resistance drift” (a spontaneous increase in electrical resistance) of the phase-change material in the memory semiconductor into learning. During the learning process, the information update pattern is recorded as increasing electrical resistance in the memristor, which serves as a synapse, so the synapse additionally learns the association between its own update pattern and the data being learned.
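If it seems odd that a drifting, ‘imperfect’ device could help learning, here’s a minimal Python sketch of the intuition (my own illustration, not the paper’s algorithm): every conductance drifts downward each step, so synapses that aren’t refreshed by updates fade away, sparsifying the network for free,

```python
import numpy as np

# Sketch: PCM conductances drift toward higher resistance (lower
# conductance) over time, so weights that are rarely refreshed by
# programming pulses fade -- a built-in sparsifier.

rng = np.random.default_rng(0)
n_weights, steps = 8, 50
g = rng.uniform(0.5, 1.0, n_weights)   # initial conductances
drift_rate = 0.02                      # fractional drift per time step

for t in range(steps):
    g *= 1.0 - drift_rate              # spontaneous resistance drift
    updates = np.zeros(n_weights)      # suppose only the first three
    updates[:3] = 0.02 * rng.random(3) # synapses get gradient updates
    g += updates                       # programming pulses refresh them

print(np.round(g, 3))  # refreshed synapses survive; idle ones decay toward 0
```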

The research team demonstrated the new learning method in an experiment classifying handwritten digits (0–9), showing that it improves learning ability by about 3%. In particular, the accuracy for the digit 8, which is especially difficult to classify, improved significantly. [emphasis mine] The improvement in learning ability comes from the synaptic update pattern, which changes according to the difficulty of the classification.

The researchers expect their findings to promote learning algorithms that exploit the intrinsic properties of memristor devices, opening a new direction for the development of neuromorphic computing chips.

Here’s a link to and a citation for the paper,

Spontaneous sparse learning for PCM-based memristor neural networks by Dong-Hyeok Lim, Shuang Wu, Rong Zhao, Jung-Hoon Lee, Hongsik Jeong & Luping Shi. Nature Communications volume 12, Article number: 319 (2021) DOI: https://doi.org/10.1038/s41467-020-20519-z Published 12 January 2021

This paper is open access.

Supercomputing capability at home with Graphical Processing Units (GPUs)

Researchers at the University of Sussex (in the UK) have found a way to make your personal computer as powerful as a supercomputer according to a February 2, 2021 University of Sussex press release (also on EurekAlert),

University of Sussex academics have established a method of turbocharging desktop PCs to give them the same capability as supercomputers worth tens of millions of pounds.

Dr James Knight and Prof Thomas Nowotny from the University of Sussex’s School of Engineering and Informatics used the latest Graphical Processing Units (GPUs) to give a single desktop PC the capacity to simulate brain models of almost unlimited size.

The researchers believe the innovation, detailed in Nature Computational Science, will make it possible for many more researchers around the world to carry out research on large-scale brain simulation, including the investigation of neurological disorders.

Currently, the cost of supercomputers is so prohibitive that they are affordable only to very large institutions and government agencies, and so are not accessible to large numbers of researchers.

As well as shaving tens of millions of pounds off the cost of a supercomputer, the simulations run on the desktop PC require approximately 10 times less energy, bringing a significant sustainability benefit too.

Dr Knight, Research Fellow in Computer Science at the University of Sussex, said: “I think the main benefit of our research is one of accessibility. Outside of these very large organisations, academics typically have to apply to get even limited time on a supercomputer for a particular scientific purpose. This is quite a high barrier for entry which is potentially holding back a lot of significant research.

“Our hope for our own research now is to apply these techniques to brain-inspired machine learning so that we can help solve problems that biological brains excel at but which are currently beyond simulations.

“As well as the advances we have demonstrated in procedural connectivity in the context of GPU hardware, we also believe that there is potential for developing new types of neuromorphic hardware built from the ground up for procedural connectivity. Key components could be implemented directly in hardware which could lead to even more significant compute time improvements.”

The research builds on the work of US researcher Eugene Izhikevich, who pioneered a similar method for large-scale brain simulation in 2006.

At the time, computers were too slow for the method to be widely applicable, meaning that simulating large-scale brain models has until now been possible only for the minority of researchers privileged to have access to supercomputer systems.

The researchers applied Izhikevich’s technique to a modern GPU, with approximately 2,000 times the computing power available 15 years ago, to create a cutting-edge model of a macaque’s visual cortex (with 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses) which previously could only be simulated on a supercomputer.

The researchers’ GPU accelerated spiking neural network simulator uses the large amount of computational power available on a GPU to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered – removing the need to store connectivity data in memory.
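Here’s a small Python sketch of that procedural trick (my illustration with made-up fan-out and weight values, not the researchers’ GPU implementation): because each neuron’s outgoing connections are derived from a deterministic per-neuron seed, they can be regenerated identically on demand whenever the neuron spikes, and never stored,

```python
import numpy as np

# Procedural connectivity, simplified: re-derive a neuron's outgoing
# connections from a deterministic seed instead of storing a matrix.

N = 1_000_000       # postsynaptic population size
FAN_OUT = 1_000     # connections per presynaptic neuron

def outgoing_connections(pre_id, seed=1234):
    """Regenerate targets and weights for one presynaptic neuron on demand."""
    rng = np.random.default_rng(np.random.SeedSequence([seed, pre_id]))
    targets = rng.integers(0, N, FAN_OUT)      # same result on every call
    weights = rng.normal(0.1, 0.02, FAN_OUT)
    return targets, weights

current = np.zeros(N, dtype=np.float32)
for spiking_neuron in (7, 42, 99_999):         # neurons that fired this step
    tgt, w = outgoing_connections(spiking_neuron)
    np.add.at(current, tgt, w)                 # deliver spikes; nothing stored

print(current.sum())   # identical across runs: the connectivity is procedural
```

The trade is compute for memory: a billion-synapse matrix never exists in RAM, at the price of regenerating random numbers whenever a spike occurs – a good deal on a GPU with plenty of arithmetic throughput.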

Initialization of the researchers’ model took six minutes and simulation of each biological second took 7.7 min in the ground state and 8.4 min in the resting state – up to 35% less time than a previous supercomputer simulation. In 2018, on one rack of an IBM Blue Gene/Q supercomputer, initialization of the model took around five minutes and simulating one second of biological time took approximately 12 minutes.

Prof Nowotny, Professor of Informatics at the University of Sussex, said: “Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10¹² synaptic connections, meaning that simulations require several terabytes of data – an unrealistic memory requirement for a single desktop machine.

“This research is a game-changer for computational neuroscience and AI researchers who can now simulate brain circuits on their local workstations, but it also allows people outside academia to turn their gaming PC into a supercomputer and run large neural networks.”

Here’s a link to and a citation for the paper,

Larger GPU-accelerated brain simulations with procedural connectivity by James C. Knight & Thomas Nowotny. Nature Computational Science (2021) DOI: https://doi.org/10.1038/s43588-020-00022-7 Published: 01 February 2021

This paper is behind a paywall.

Baby steps toward a quantum brain

My first quantum brain posting! (Well, I do have something that seems loosely related in a July 5, 2017 posting about quantum entanglement and machine learning and more. Also, I have lots of items on brainlike or neuromorphic computing.)

Getting to the latest news, a February 1, 2021 news item on Nanowerk announces research into new intelligent materials that could lead to a ‘quantum brain’,

An intelligent material that learns by physically changing itself, similar to how the human brain works, could be the foundation of a completely new generation of computers. Radboud [university in the Netherlands] physicists working toward this so-called “quantum brain” have made an important step. They have demonstrated that they can pattern and interconnect a network of single atoms, and mimic the autonomous behaviour of neurons and synapses in a brain.

If I understand the difference between the work in 2017 and this latest work, it’s that in 2017 they were looking at quantum states and their possible effect on machine learning, while this work in 2021 is focused on a new material with some special characteristics.

A February 1, 2021 Radboud University press release (also on EurekAlert), which originated the news item, provides information on the case supporting the need for a quantum brain and some technical details about how it might be achieved,

Considering the growing global demand for computing capacity, more and more data centres are necessary, all of which leave an ever-expanding energy footprint. ‘It is clear that we have to find new strategies to store and process information in an energy efficient way’, says project leader Alexander Khajetoorians, Professor of Scanning Probe Microscopy at Radboud University.

‘This requires not only improvements to technology, but also fundamental research in game changing approaches. Our new idea of building a ‘quantum brain’ based on the quantum properties of materials could be the basis for a future solution for applications in artificial intelligence.’

Quantum brain

For artificial intelligence to work, a computer needs to be able to recognise patterns in the world and learn new ones. Today’s computers do this via machine learning software that controls the storage and processing of information on a separate computer hard drive. ‘Until now, this technology, which is based on a century-old paradigm, worked sufficiently. However, in the end, it is a very energy-inefficient process’, says co-author Bert Kappen, Professor of Neural networks and machine intelligence.

The physicists at Radboud University researched whether a piece of hardware could do the same, without the need of software. They discovered that by constructing a network of cobalt atoms on black phosphorus they were able to build a material that stores and processes information in similar ways to the brain, and, even more surprisingly, adapts itself.

Self-adapting atoms

In 2018, Khajetoorians and collaborators showed that it is possible to store information in the state of a single cobalt atom. By applying a voltage to the atom, they could induce “firing”, where the atom shuttles between a value of 0 and 1 randomly, much like a single neuron. They have now discovered a way to create tailored ensembles of these atoms, and found that the firing behaviour of these ensembles mimics the behaviour of a brain-like model used in artificial intelligence.

In addition to observing the behaviour of spiking neurons, they were able to create the smallest synapse known to date. Unknowingly, they observed that these ensembles had an inherent adaptive property: their synapses changed their behaviour depending on what input they “saw”. ‘When stimulating the material over a longer period of time with a certain voltage, we were very surprised to see that the synapses actually changed. The material adapted its reaction based on the external stimuli that it received. It learned by itself’, says Khajetoorians.
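The brain-like model in question is a Boltzmann machine (it’s in the paper’s title), and for the technically inclined here’s a minimal Python sketch of one, with stochastic binary units standing in for the firing cobalt atoms (the couplings, biases and temperature are made-up illustrative values),

```python
import numpy as np

# Minimal Boltzmann machine: binary stochastic units ("atoms") flip
# between 0 and 1 with a probability set by their neighbours' states.

rng = np.random.default_rng(1)
n = 6
J = rng.normal(0, 0.5, (n, n))      # symmetric couplings between units
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
b = rng.normal(0, 0.1, n)           # local biases (akin to applied voltages)
s = rng.integers(0, 2, n)           # each unit starts at 0 or 1
T = 1.0                             # effective temperature sets randomness

for sweep in range(5000):           # Gibbs sampling: units fire stochastically
    i = rng.integers(n)
    field = J[i] @ s + b[i]
    p_on = 1.0 / (1.0 + np.exp(-field / T))
    s[i] = rng.random() < p_on

print(s)   # a sample from the Boltzmann distribution over unit states
```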

Exploring and developing the quantum brain

The researchers now plan to scale up the system and build a larger network of atoms, as well as dive into new “quantum” materials that can be used. Also, they need to understand why the atom network behaves as it does. ‘We are at a state where we can start to relate fundamental physics to concepts in biology, like memory and learning’, says Khajetoorians.

‘If we could eventually construct a real machine from this material, we would be able to build self-learning computing devices that are more energy efficient and smaller than today’s computers. Yet, only when we understand how it works – and that is still a mystery – will we be able to tune its behaviour and start developing it into a technology. It is a very exciting time.’

Here is a charming image illustrating the reasons for a quantum brain,

Courtesy: Radboud University

Here’s a link to and a citation for the paper,

An atomic Boltzmann machine capable of self-adaption by Brian Kiraly, Elze J. Knol, Werner M. J. van Weerdenburg, Hilbert J. Kappen & Alexander A. Khajetoorians. Nature Nanotechnology (2021) DOI: https://doi.org/10.1038/s41565-020-00838-4 Published: 01 February 2021

This paper is behind a paywall.

Graphene-based memristors for neuromorphic computing

An Oct. 29, 2020 news item on ScienceDaily features an explanation of the reasons for investigating brainlike (neuromorphic) computing,

As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.

Modern computing is digital, made up of two states, on-off or one and zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to varying amounts of lighting.

Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State [Pennsylvania State University] assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.

“We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else,” Das said.

The shuttling of this data from memory to logic and back again takes a lot of energy and slows the speed of computing. In addition, this computer architecture requires a lot of space. If the computation and memory storage could be located in the same space, this bottleneck could be eliminated.

An Oct. 29, 2020 Penn State news release (also on EurekAlert), which originated the news item, describes what makes the research different,

“We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain,” explained Thomas Shranghamer, a doctoral student in the Das group and first author on a paper recently published in Nature Communications. “The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts.”

Like the synapses connecting neurons in the brain, which can be reconfigured, the artificial neural networks the team is building can be reconfigured by applying a brief electric field to a sheet of graphene, the one-atom-thick layer of carbon atoms. In this work they show at least 16 possible memory states, as opposed to the two in most oxide-based memristors, or memory resistors [emphasis mine].

“What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors [emphasis mine],” Das said.
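To get a feel for why 16 states matter, here’s a quick Python illustration (a software analogy of mine, not the device physics): quantizing the same set of trained weights to 16 levels loses far less information than quantizing them to the two levels of a binary memristor,

```python
import numpy as np

# Quantize weights onto a limited set of device conductance states and
# measure the error: 4 bits (16 states) vs 1 bit (2 states).

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, 10_000)             # stand-in for trained weights

def quantize(x, n_states):
    levels = np.linspace(x.min(), x.max(), n_states)   # allowed states
    idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

for n_states in (2, 16):                     # binary vs 16-state device
    err = np.mean((w - quantize(w, n_states)) ** 2)
    print(f"{n_states:2d} states -> mean squared error {err:.4f}")
```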

The team thinks that ramping up this technology to a commercial scale is feasible. With many of the largest semiconductor companies actively pursuing neuromorphic computing, Das believes they will find this work of interest.

Here’s a link to and a citation for the paper,

Graphene memristive synapses for high precision neuromorphic computing by Thomas F. Schranghamer, Aaryan Oberoi & Saptarshi Das. Nature Communications volume 11, Article number: 5474 (2020) DOI: https://doi.org/10.1038/s41467-020-19203-z Published: 29 October 2020

This paper is open access.

Brain cell-like nanodevices

Given R. Stanley Williams’s presence on the author list, it’s a bit surprising that there’s no mention of memristors. If I read the signs rightly, the interest is shifting, in some cases, from the memristor to a more comprehensive grouping of circuit elements referred to as ‘neuristors’ or, more likely, ‘nanocircuit elements’ in the effort to achieve brainlike (neuromorphic) computing (engineering). (Williams was the leader of the HP Labs team that offered proof and more of the memristor’s existence, which I mentioned here in an April 5, 2010 posting. There are many, many postings on this topic here; try ‘memristors’ or ‘brainlike computing’ for your search terms.)

A September 24, 2020 news item on ScienceDaily announces a recent development in the field of neuromorphic engineering,

In the September [2020] issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

A September 24, 2020 Texas A&M University news release (also on EurekAlert) by Vandana Suresh, which originated the news item, provides some context for the research,

Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
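As an aside, the ‘43 billion’ figure checks out if you count orderings of 14 free stops and treat a route and its reverse as the same trip (that’s my reading of the arithmetic; with 15 fully free stops the count would be about 1.3 trillion),

```python
from math import factorial

# 14!/2 = distinct orderings of 14 free stops, counting a route and its
# reverse once; this matches the quoted "43 billion routes".
print(f"{factorial(14) // 2:,}")   # 43,589,145,600  ~ 43 billion
print(f"{factorial(15):,}")        # 1,307,674,368,000 for 15 free stops
```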

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. But when the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric currents, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
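Here’s a toy Python sketch of that behaviour as a relaxation oscillator (my abstraction of the press-release description with made-up component values, not the paper’s third-order model): a capacitor charges through a resistor until the insulator-to-metal transition fires, the now-conducting device dumps the charge as a current spike, and once the voltage collapses it re-insulates and the cycle repeats,

```python
import numpy as np

# Relaxation-oscillator caricature of an NbO2 threshold switch: a
# hysteretic resistor (insulating or metallic) in parallel with a
# capacitor, charged through a series resistor from a supply.

dt, steps = 1e-6, 4000                  # 1 us steps, 4 ms total
v_supply, r_charge, c = 1.0, 1e4, 1e-8
v_on, v_off = 0.7, 0.3                  # hysteretic switching thresholds
r_ins, r_met = 1e7, 1e3                 # insulating vs metallic resistance

v, r_dev, i_trace = 0.0, r_ins, []
for _ in range(steps):
    i_in = (v_supply - v) / r_charge    # charging current
    i_dev = v / r_dev                   # current through the device
    v += dt * (i_in - i_dev) / c
    if r_dev == r_ins and v > v_on:     # insulator -> metal: spike begins
        r_dev = r_met
    elif r_dev == r_met and v < v_off:  # metal -> insulator: reset
        r_dev = r_ins
    i_trace.append(i_dev)

print(f"peak spike current ~ {max(i_trace) * 1e3:.2f} mA; spikes repeat periodically")
```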

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”

To evaluate if their synthetic neurons [neuristor?] can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

If you look at the news release on EurekAlert, you’ll see this informative image is titled: NeuristerSchematic [sic],

Caption: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Texas A&M University College of Engineering

(On the university website, the image is credited to Rachel Barton.) You can see one of the first mentions of a ‘neuristor’ here in an August 24, 2017 posting.

Here’s a link to and a citation for the paper,

Third-order nanocircuit elements for neuromorphic engineering by Suhas Kumar, R. Stanley Williams & Ziwen Wang. Nature volume 585, pages 518–523 (2020) DOI: https://doi.org/10.1038/s41586-020-2735-5 Published: 23 September 2020 Issue Date: 24 September 2020

This paper is behind a paywall.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020) DOI: https://doi.org/10.1038/s41928-020-0412-1 Published online: 25 May 2020 Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Activities in the field of artificial intelligence especially, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical, computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
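Here’s a little Python sketch of that hysteresis-as-learning idea (my own toy model with arbitrary constants, not the device equations): each gate pulse displaces slow ions that relax back only gradually, so repeated excitation leaves a residue that lowers the effective threshold and the transistor ‘opens sooner’,

```python
# Toy neurotransistor: a slow ionic state integrates gate pulses and
# relaxes with a long time constant (the hysteresis); the residue
# lowers the effective threshold, so the device learns to open.

tau_ion, dt = 50.0, 1.0     # slow ionic relaxation (arbitrary units)
v_th0 = 1.0                 # base threshold voltage

def step(gate_pulse, ion):
    ion += dt * (-ion / tau_ion + 0.2 * gate_pulse)  # hysteretic ion motion
    v_th = v_th0 - 0.5 * ion                         # residue eases opening
    return gate_pulse > v_th, ion                    # does the channel open?

ion = 0.0
for t in range(10):          # identical pulses, strengthening response
    opens, ion = step(0.9, ion)
    print(f"pulse {t}: ion state = {ion:.2f}, transistor opens = {opens}")
```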

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

Brain-inspired computer with optimized neural networks

Caption: Left to right: The experiment was performed on a prototype of the BrainScaleS-2 chip; Schematic representation of a neural network; Results for simple and complex tasks. Credit: Heidelberg University

I don’t often stumble across research from the European Union’s flagship Human Brain Project. So, this is a delightful occurrence especially with my interest in neuromorphic computing. From a July 22, 2020 Human Brain Project press release (also on EurekAlert),

Many computational properties are maximized when the dynamics of a network are at a “critical point”, a state where systems can quickly change their overall characteristics in fundamental ways, transitioning e.g. between order and chaos or stability and instability. Therefore, the critical state is widely assumed to be optimal for any computation in recurrent neural networks, which are used in many AI [artificial intelligence] applications.

Researchers from the HBP [Human Brain Project] partner Heidelberg University and the Max-Planck-Institute for Dynamics and Self-Organization challenged this assumption by testing the performance of a spiking recurrent neural network on a set of tasks with varying complexity at – and away from critical dynamics. They instantiated the network on a prototype of the analog neuromorphic BrainScaleS-2 system. BrainScaleS is a state-of-the-art brain-inspired computing system with synaptic plasticity implemented directly on the chip. It is one of two neuromorphic systems currently under development within the European Human Brain Project.

First, the researchers showed that the distance to criticality can be easily adjusted in the chip by changing the input strength, and then demonstrated a clear relation between criticality and task performance. The assumption that criticality is beneficial for every task was not confirmed: whereas the information-theoretic measures all showed that network capacity was maximal at criticality, only the complex, memory-intensive tasks profited from it, while simple tasks actually suffered. The study thus provides a more precise understanding of how the collective network state should be tuned to different task requirements for optimal performance.
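A minimal branching-process simulation in Python makes the trade-off concrete (my illustration, not the BrainScaleS-2 experiment): as the branching parameter m approaches the critical value of 1, the network’s intrinsic timescale – and with it, its memory – grows rapidly,

```python
import numpy as np

# Branching network: activity A[t+1] ~ Poisson(m * A[t] + h). Stronger
# external input h relative to recurrence means a smaller m, i.e. a
# larger distance to the critical point m = 1.

rng = np.random.default_rng(0)

def intrinsic_timescale(m, h=10.0, T=20_000):
    a = np.empty(T)
    a[0] = h / (1 - m)                       # stationary mean activity
    for t in range(T - 1):
        a[t + 1] = rng.poisson(m * a[t] + h)
    x = a - a.mean()
    ac1 = (x[:-1] @ x[1:]) / (x @ x)         # lag-1 autocorrelation ~ m
    return -1.0 / np.log(ac1)                # timescale in steps

for m in (0.6, 0.9, 0.99):                   # closer and closer to critical
    print(f"m = {m:.2f}: intrinsic timescale ~ {intrinsic_timescale(m):.1f} steps")
```

Near m = 1 the network holds information about past inputs for hundreds of steps, which helps memory-heavy tasks; far from criticality it forgets quickly, which is exactly what a simple, fast task wants.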

Mechanistically, the optimal working point for each task can be set very easily under homeostatic plasticity by adapting the mean input strength. The theory behind this mechanism was developed very recently at the Max Planck Institute. “Putting it to work on neuromorphic hardware shows that these plasticity rules are very capable in tuning network dynamics to varying distances from criticality”, says senior author Viola Priesemann, group leader at MPIDS. Thereby tasks of varying complexity can be solved optimally within that space.

The finding may also explain why biological neural networks operate not necessarily at criticality, but in the dynamically rich vicinity of a critical point, where they can tune their computation properties to task requirements. Furthermore, it establishes neuromorphic hardware as a fast and scalable avenue to explore the impact of biological plasticity rules on neural computation and network dynamics.

“As a next step, we now study and characterize the impact of the spiking network’s working point on classifying artificial and real-world spoken words”, says first author Benjamin Cramer of Heidelberg University.

Here’s a link to and a citation for the paper,

Control of criticality and computation in spiking neuromorphic networks with plasticity by Benjamin Cramer, David Stöckel, Markus Kreft, Michael Wibral, Johannes Schemmel, Karlheinz Meier & Viola Priesemann. Nature Communications volume 11, Article number: 2853 (2020) DOI: https://doi.org/10.1038/s41467-020-16548-3 Published: 05 June 2020

This paper is open access.

Improving neuromorphic devices with ion conducting polymer

A July 1, 2020 news item on ScienceDaily announces work which researchers hope will allow them to exert more control over neuromorphic devices’ speed of response,

“Neuromorphic” refers to mimicking the behavior of brain neural cells. When one speaks of neuromorphic computers, one is talking about making computers think and process more like human brains – operating at high speed with low energy consumption.

Despite a growing interest in polymer-based neuromorphic devices, researchers have yet to establish an effective method for controlling the response speed of devices. Researchers from Tohoku University and the University of Cambridge, however, have overcome this obstacle through mixing the polymers PSS-Na and PEDOT:PSS, discovering that adding an ion conducting polymer enhances neuromorphic device response time.

A June 24, 2020 Tohoku University press release (also on EurekAlert), which originated the news item, provides a few more technical details,

Polymers are materials composed of long molecular chains and play a fundamental role in modern life, from the rubber in tires, to water bottles, to polystyrene. Mixing polymers together results in the creation of new materials with their own distinct physical properties.

Most studies on polymer-based neuromorphic devices focus exclusively on the application of PEDOT:PSS, a mixed conductor that transports both electrons and ions. PSS-Na, on the other hand, transports ions only. By blending these two polymers, the researchers could enhance the ion diffusivity in the active layer of the device. Their measurements confirmed an improvement in device response time, achieving up to a fivefold shortening. The results also showed how closely response time is related to the diffusivity of ions in the active layer.
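Back-of-envelope, the trend makes sense as simple diffusion: the switching time scales roughly as the active-layer thickness squared over the ion diffusivity (τ ≈ L²/D), so roughly five times the diffusivity means roughly a fifth of the response time. A quick Python sanity check with entirely made-up numbers,

```python
# tau ~ L^2 / D: response time set by ion transit across the active layer.
# All values below are assumptions for illustration, not measured data.

L = 100e-9                  # active-layer thickness: 100 nm (assumed)
D_pure = 1e-12              # effective ion diffusivity, pure PEDOT:PSS (assumed)
D_blend = 5e-12             # ~5x higher with PSS-Na blended in (assumed)

for name, D in (("pure PEDOT:PSS", D_pure), ("PSS-Na blend", D_blend)):
    tau = L**2 / D
    print(f"{name:15s}: tau ~ {tau * 1e3:.1f} ms")
```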

“Our study paves the way for a deeper understanding of the science behind conducting polymers,” explains co-author Shunsuke Yamamoto from the Department of Biomolecular Engineering at Tohoku University’s Graduate School of Engineering. “Moving forward, it may be possible to create artificial neural networks composed of multiple neuromorphic devices,” he adds.

Here’s a link to and a citation for the paper,

Controlling the Neuromorphic Behavior of Organic Electrochemical Transistors by Blending Mixed and Ion Conductors by Shunsuke Yamamoto and George G. Malliaras. ACS [American Chemical Society] Appl. Electron. Mater. 2020, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsaelm.0c00203 Publication Date:June 15, 2020 Copyright © 2020 American Chemical Society

This paper is behind a paywall.

Energy-efficient artificial synapse

This is the second neuromorphic computing chip story from MIT this summer in what has turned out to be a bumper crop of research announcements in this field. The first MIT synapse story was featured in a June 16, 2020 posting. Now, there’s a second and completely different team announcing results for their artificial brain synapse work in a June 19, 2020 news item on Nanowerk (Note: A link has been removed),

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.

Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT [Massachusetts Institute of Technology] team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.

The findings are described in the journal Nature Communications (“Protonic solid-state electrochemical synapse for physical neural networks”), in a paper by MIT professors Bilge Yildiz, Ju Li, and Jesús del Alamo, and nine others at MIT and Brookhaven National Laboratory. The first author of the paper is Xiahui Yao, a former MIT postdoc now working on energy storage at GRU Energy Lab.

That description of the work is one pretty much every team working on developing memristive (neuromorphic) chips could use.

On other fronts, the team has produced a very attractive illustration accompanying this research (aside: Is it my imagination or has there been a serious investment in the colour pink and other pastels for science illustrations?),

A new system developed at MIT and Brookhaven National Lab could provide a faster, more reliable and much more energy efficient approach to physical neural networks, by using analog ionic-electronic devices to mimic synapses. Courtesy of the researchers

A June 19, 2020 MIT news release, which originated the news item, provides more insight into this specific piece of research (hint: it’s about energy use and repeatability),

Neural networks attempt to simulate the way learning takes place in the brain, which is based on the gradual strengthening or weakening of the connections between neurons, known as synapses. The core component of this physical neural network is the resistive switch, whose electronic conductance can be controlled electrically. This control, or modulation, emulates the strengthening and weakening of synapses in the brain.

In neural networks using conventional silicon microchip technology, the simulation of these synapses is a very energy-intensive process. To improve efficiency and enable more ambitious neural network goals, researchers in recent years have been exploring a number of physical devices that could more directly mimic the way synapses gradually strengthen and weaken during learning and forgetting.

Most candidate analog resistive devices so far for such simulated synapses have either been very inefficient, in terms of energy use, or performed inconsistently from one device to another or one cycle to the next. The new system, the researchers say, overcomes both of these challenges. “We’re addressing not only the energy challenge, but also the repeatability-related challenge that is pervasive in some of the existing concepts out there,” says Yildiz, who is a professor of nuclear science and engineering and of materials science and engineering.

“I think the bottleneck today for building [neural network] applications is energy efficiency. It just takes too much energy to train these systems, particularly for applications on the edge, like autonomous cars,” says del Alamo, who is the Donner Professor in the Department of Electrical Engineering and Computer Science. Many such demanding applications are simply not feasible with today’s technology, he adds.

The resistive switch in this work is an electrochemical device, which is made of tungsten trioxide (WO3) and works in a way similar to the charging and discharging of batteries. Ions, in this case protons, can migrate into or out of the crystalline lattice of the material, explains Yildiz, depending on the polarity and strength of an applied voltage. These changes remain in place until altered by a reverse applied voltage — just as the strengthening or weakening of synapses does.

“The mechanism is similar to the doping of semiconductors,” says Li, who is also a professor of nuclear science and engineering and of materials science and engineering. In that process, the conductivity of silicon can be changed by many orders of magnitude by introducing foreign ions into the silicon lattice. “Traditionally those ions were implanted at the factory,” he says, but with the new device, the ions are pumped in and out of the lattice in a dynamic, ongoing process. The researchers can control how much of the “dopant” ions go in or out by controlling the voltage, and “we’ve demonstrated a very good repeatability and energy efficiency,” he says.
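Here’s an idealized Python sketch of that programming behaviour (a toy model of mine, not the measured device): positive gate pulses pump protons into the channel and nudge the conductance up by one step, negative pulses pump them out, and the state simply persists between pulses – the repeatability comes from every pulse producing the same small, bounded step,

```python
import numpy as np

# Idealized protonic synapse: conductance moves in equal steps between
# g_min and g_max, one step per voltage pulse, and is retained otherwise.

g_min, g_max, n_states = 0.0, 1.0, 100
dg = (g_max - g_min) / n_states      # conductance change per pulse

def program(g, polarity):
    """One gate pulse: +1 inserts protons (potentiate), -1 extracts them."""
    return float(np.clip(g + polarity * dg, g_min, g_max))

g = 0.5
for _ in range(20):                  # potentiation ramp
    g = program(g, +1)
print(f"after 20 positive pulses: g = {g:.2f}")
for _ in range(30):                  # depression ramp
    g = program(g, -1)
print(f"after 30 negative pulses: g = {g:.2f}")  # state retained between pulses
```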

Yildiz adds that this process is “very similar to how the synapses of the biological brain work. There, we’re not working with protons, but with other ions such as calcium, potassium, magnesium, etc., and by moving those ions you actually change the resistance of the synapses, and that is an element of learning.” The process taking place in the tungsten trioxide in their device is similar to the resistance modulation taking place in biological synapses, she says.

“What we have demonstrated here,” Yildiz says, “even though it’s not an optimized device, gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain.” Trying to accomplish the same task with conventional CMOS type semiconductors would take a million times more energy, she says.

The materials used in the demonstration of the new device were chosen for their compatibility with present semiconductor manufacturing systems, according to Li. But they include a polymer material that limits the device’s tolerance for heat, so the team is still searching for other variations of the device’s proton-conducting membrane and better ways of encapsulating its hydrogen source for long-term operations.

“There’s a lot of fundamental research to be done at the materials level for this device,” Yildiz says. Ongoing research will include “work on how to integrate these devices with existing CMOS transistors,” adds del Alamo. “All that takes time,” he says, “and it presents tremendous opportunities for innovation, great opportunities for our students to launch their careers.”

Coincidentally or not, a University of Massachusetts at Amherst team announced memristor voltage use comparable to human brain voltage use (see my June 15, 2020 posting); plus, there’s a team at Stanford University touting their low-energy biohybrid synapse in a XXX posting. (June 2020 has been a particularly busy month here for ‘artificial brain’ or ‘memristor’ stories.)

Getting back to this latest MIT research, here’s a link to and a citation for the paper,

Protonic solid-state electrochemical synapse for physical neural networks by Xiahui Yao, Konstantin Klyukin, Wenjie Lu, Murat Onen, Seungchan Ryu, Dongha Kim, Nicolas Emond, Iradwikanari Waluyo, Adrian Hunt, Jesús A. del Alamo, Ju Li & Bilge Yildiz. Nature Communications volume 11, Article number: 3134 (2020) DOI: https://doi.org/10.1038/s41467-020-16866-6 Published: 19 June 2020

This paper is open access.