Tag Archives: artificial neurons

Artificial organic neuron mimics characteristics of biological nerve cells

There’s a possibility that, in the future, artificial neurons could be used for medical treatment, according to a January 12, 2023 news item on phys.org,

Researchers at Linköping University (LiU), Sweden, have created an artificial organic neuron that closely mimics the characteristics of biological nerve cells. This artificial neuron can stimulate natural nerves, making it a promising technology for various medical treatments in the future.

Work to develop increasingly functional artificial nerve cells continues at the Laboratory for Organic Electronics, LOE. In 2022, a team of scientists led by associate professor Simone Fabiano demonstrated how an artificial organic neuron could be integrated into a living carnivorous plant [emphasis mine] to control the opening and closing of its maw. That synthetic nerve cell demonstrated two of the 20 characteristics that define a biological nerve cell.

I wasn’t expecting a carnivorous plant, living or otherwise. Sadly, they don’t seem to have been able to include it in this image, although the ‘green mitts’ are evocative,

Caption: Artificial neurons created by the researchers at Linköping University. Credit: Thor Balkhed

A January 13, 2023 Linköping University (LiU) press release by Mikael Sönne (also on EurekAlert but published January 12, 2023), which originated the news item, delves further into the work,

In their latest study, published in the journal Nature Materials, the same researchers at LiU have developed a new artificial nerve cell called “conductance-based organic electrochemical neuron” or c-OECN, which closely mimics 15 out of the 20 neural features that characterise biological nerve cells, making its functioning much more similar to natural nerve cells.

“One of the key challenges in creating artificial neurons that effectively mimic real biological neurons is the ability to incorporate ion modulation. Traditional artificial neurons made of silicon can emulate many neural features but cannot communicate through ions. In contrast, c-OECNs use ions to demonstrate several key features of real biological neurons”, says Simone Fabiano, principal investigator of the Organic Nanoelectronics group at LOE.

In 2018, this research group at Linköping University was one of the first to develop organic electrochemical transistors based on n-type conducting polymers, which are materials that can conduct negative charges. This made it possible to build printable complementary organic electrochemical circuits. Since then, the group has been working to optimise these transistors so that they can be printed in a printing press on a thin plastic foil. As a result, it is now possible to print thousands of transistors on a flexible substrate and use them to develop artificial nerve cells.

In the newly developed artificial neuron, ions are used to control the flow of electronic current through an n-type conducting polymer, leading to spikes in the device’s voltage. This process is similar to that which occurs in biological nerve cells. The unique material in the artificial nerve cell also allows the current to be increased and decreased in an almost perfect bell-shaped curve that resembles the activation and inactivation of sodium ion channels found in biology.
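For readers who like to tinker, that bell-shaped current response can be sketched as a toy model. The Gaussian form and every parameter value below are my own illustrative assumptions for the sake of the sketch, not the device physics reported in the paper:

```python
import numpy as np

def antiambipolar_current(v_gate, v_peak=0.5, width=0.2, i_max=1.0):
    """Toy bell-shaped (antiambipolar) channel current: the current rises
    and then falls as the gate voltage sweeps past v_peak, loosely analogous
    to the activation/inactivation of a sodium ion channel. All parameter
    values are illustrative, not measured device constants."""
    return i_max * np.exp(-((v_gate - v_peak) / width) ** 2)

# Sweep the gate voltage and find where the current peaks.
v = np.linspace(0.0, 1.0, 201)
i = antiambipolar_current(v)
peak_index = int(np.argmax(i))
```

Sweeping the voltage past the peak makes the current fall again, which is the non-monotonic behaviour the researchers exploit to generate voltage spikes.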

“Several other polymers show this behaviour, but only rigid polymers are resilient to disorder, enabling stable device operation”, says Simone Fabiano.

In experiments carried out in collaboration with Karolinska Institute (KI), the new c-OECN neurons were connected to the vagus nerve of mice. The results show that the artificial neuron could stimulate the mice’s nerves, causing a 4.5% change in their heart rate.

The fact that the artificial neuron can stimulate the vagus nerve itself could, in the long run, pave the way for essential applications in various forms of medical treatment. In general, organic semiconductors have the advantage of being biocompatible, soft, and malleable, while the vagus nerve plays a key role, for example, in the body’s immune system and metabolism.

The next step for the researchers will be to reduce the energy consumption of the artificial neurons, which is still much higher than that of human nerve cells. Much work remains to be done to replicate nature artificially.

“There is much we still don’t fully understand about the human brain and nerve cells. In fact, we don’t know how the nerve cell makes use of many of these 15 demonstrated features. Mimicking the nerve cells can enable us to understand the brain better and build circuits capable of performing intelligent tasks. We’ve got a long road ahead, but this study is a good start,” says Padinhare Cholakkal Harikesh, postdoc and main author of the scientific paper.

Here’s a link to and a citation for the paper,

Ion-tunable antiambipolarity in mixed ion–electron conducting polymers enables biorealistic organic electrochemical neurons by Padinhare Cholakkal Harikesh, Chi-Yuan Yang, Han-Yan Wu, Silan Zhang, Mary J. Donahue, April S. Caravaca, Jun-Da Huang, Peder S. Olofsson, Magnus Berggren, Deyu Tu & Simone Fabiano. Nature Materials volume 22, pages 242–248 (2023) DOI: https://doi.org/10.1038/s41563-022-01450-8 Published online: 12 January 2023 Issue Date: February 2023

This paper is open access.

Enhance or weaken memory with stretchy, bioinspired synaptic transistor

This news is intriguing since they usually want to enhance memory, not weaken it. Interestingly, this October 3, 2022 news item on ScienceDaily doesn’t immediately answer why you might want to weaken memory,

Robotics and wearable devices might soon get a little smarter with the addition of a stretchy, wearable synaptic transistor developed by Penn State engineers. The device works like neurons in the brain to send signals to some cells and inhibit others in order to enhance and weaken the devices’ memories.

Led by Cunjiang Yu, Dorothy Quiggle Career Development Associate Professor of Engineering Science and Mechanics and associate professor of biomedical engineering and of materials science and engineering, the team designed the synaptic transistor to be integrated in robots or wearables and use artificial intelligence to optimize functions. The details were published on Sept. 29 [2022] in Nature Electronics.

“Mirroring the human brain, robots and wearable devices using the synaptic transistor can use its artificial neurons to ‘learn’ and adapt their behaviors,” Yu said. “For example, if we burn our hand on a stove, it hurts, and we know to avoid touching it next time. The same results will be possible for devices that use the synaptic transistor, as the artificial intelligence is able to ‘learn’ and adapt to its environment.”

A September 29, 2022 Pennsylvania State University (Penn State) news release (also on EurekAlert but published on October 3, 2022) by Mariah Chuprinski, which originated the news item, explains why you might want to weaken memory,

According to Yu, the artificial neurons in the device were designed to perform like neurons in the ventral tegmental area, a tiny segment of the human brain located in the uppermost part of the brain stem. Neurons process and transmit information by releasing neurotransmitters at their synapses, typically located at the neural cell ends. Excitatory neurotransmitters trigger the activity of other neurons and are associated with enhancing memories, while inhibitory neurotransmitters reduce the activity of other neurons and are associated with weakening memories.

“Unlike all other areas of the brain, neurons in the ventral tegmental area are capable of releasing both excitatory and inhibitory neurotransmitters at the same time,” Yu said. “By designing the synaptic transistor to operate with both synaptic behaviors simultaneously, fewer transistors are needed [emphasis mine] compared to conventional integrated electronics technology, which simplifies the system architecture and allows the device to conserve energy.”
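The idea of one device producing both excitatory and inhibitory behaviour can be sketched as a toy synapse model. The update rule, step size, and clamping below are my own illustrative assumptions, not the published device model:

```python
class DualModeSynapse:
    """Toy model of a synaptic element that supports both excitatory
    (potentiating) and inhibitory (depressing) updates in a single device,
    as the Penn State transistor is described as doing. The constants are
    illustrative assumptions only."""

    def __init__(self, weight=0.5, step=0.05):
        self.weight = weight
        self.step = step

    def pulse(self, excitatory=True):
        # An excitatory pulse strengthens the connection; an inhibitory
        # pulse weakens it, standing in for the two neurotransmitter roles.
        delta = self.step if excitatory else -self.step
        # Clamp to [0, 1] so the "memory" saturates instead of growing unbounded.
        self.weight = min(1.0, max(0.0, self.weight + delta))
        return self.weight

syn = DualModeSynapse()
strengthened = [syn.pulse(excitatory=True) for _ in range(3)]   # memory enhanced
weakened = [syn.pulse(excitatory=False) for _ in range(5)]      # memory weakened
```

Because one element handles both signs of update, you need fewer components than if potentiation and depression each required their own device, which is the simplification Yu describes.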

To model soft, stretchy biological tissues, the researchers used stretchable bilayer semiconductor materials to fabricate the device, allowing it to stretch and twist while in use, according to Yu. Conventional transistors, on the other hand, are rigid and will break when deformed.

“The transistor is mechanically deformable and functionally reconfigurable, yet still retains its functions when stretched extensively,” Yu said. “It can attach to a robot or wearable device to serve as their outermost skin.”

In addition to Yu, other contributors include Hyunseok Shim and Shubham Patel, Penn State Department of Engineering Science and Mechanics; Yongcao Zhang, the University of Houston Materials Science and Engineering Program; Faheem Ershad, Penn State Department of Biomedical Engineering and University of Houston Department of Biomedical Engineering; Binghao Wang, School of Electronic Science and Engineering, Southeast University [Note: There’s one in Bangladesh, one in China, and there’s a Southeastern University in Florida, US] and Department of Chemistry and the Materials Research Center, Northwestern University; Zhihua Chen, Flexterra Inc.; Tobin J. Marks, Department of Chemistry and the Materials Research Center, Northwestern University; Antonio Facchetti, Flexterra Inc. and Northwestern University’s Department of Chemistry and Materials Research Center.

Here’s a link to and a citation for the paper,

An elastic and reconfigurable synaptic transistor based on a stretchable bilayer semiconductor by Hyunseok Shim, Faheem Ershad, Shubham Patel, Yongcao Zhang, Binghao Wang, Zhihua Chen, Tobin J. Marks, Antonio Facchetti & Cunjiang Yu. Nature Electronics (2022) DOI: https://doi.org/10.1038/s41928-022-00836-5 Published: 29 September 2022

This paper is behind a paywall.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020) DOI: https://doi.org/10.1038/s41928-020-0412-1 Published online: 25 May 2020 Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Especially activities in the field of artificial intelligence, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called sol-gel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
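Cuniberti’s description of hysteresis as a storage effect can be sketched as a toy model: each excitation pulse raises the channel conductance, and slow ionic relaxation only partially undoes it, so repeated excitation leaves a lasting trace. The gain and decay constants below are illustrative assumptions, not measured values:

```python
def excite(conductance, n_pulses, gain=0.1, decay=0.98):
    """Toy hysteresis model of the neurotransistor: each pulse adds to the
    conductance, and slow ionic relaxation (decay < 1) pulls it back only
    partially between pulses, so the device accumulates a trace of its
    history -- 'the system is learning'. Constants are illustrative."""
    history = [conductance]
    for _ in range(n_pulses):
        conductance = conductance * decay + gain  # partial relaxation, then a pulse
        history.append(conductance)
    return history

trace = excite(conductance=0.0, n_pulses=10)
```

The more often the toy device is excited, the higher its conductance sits, which mirrors the strengthening of a connection that Cuniberti describes.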

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

A lipid-based memcapacitor for neuromorphic computing

Caption: Researchers at ORNL’s Center for Nanophase Materials Sciences demonstrated the first example of capacitance in a lipid-based biomimetic membrane, opening nondigital routes to advanced, brain-like computation. Credit: Michelle Lehman/Oak Ridge National Laboratory, U.S. Dept. of Energy

The last time I wrote about memcapacitors (June 30, 2014 posting: Memristors, memcapacitors, and meminductors for faster computers), the ideas were largely theoretical; I believe this work is the first research I’ve seen on the topic. From an October 17, 2019 news item on ScienceDaily,

Researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), the University of Tennessee and Texas A&M University demonstrated bio-inspired devices that accelerate routes to neuromorphic, or brain-like, computing.

Results published in Nature Communications report the first example of a lipid-based “memcapacitor,” a charge storage component with memory that processes information much like synapses do in the brain. Their discovery could support the emergence of computing networks modeled on biology for a sensory approach to machine learning.

An October 16, 2019 ORNL news release (also on EurekAlert but published Oct. 17, 2019), which originated the news item, provides more detail about the work,

“Our goal is to develop materials and computing elements that work like biological synapses and neurons—with vast interconnectivity and flexibility—to enable autonomous systems that operate differently than current computing devices and offer new functionality and learning capabilities,” said Joseph Najem, a recent postdoctoral researcher at ORNL’s Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, and current assistant professor of mechanical engineering at Penn State.

The novel approach uses soft materials to mimic biomembranes and simulate the way nerve cells communicate with one another.

The team designed an artificial cell membrane, formed at the interface of two lipid-coated water droplets in oil, to explore the material’s dynamic, electrophysiological properties. At applied voltages, charges build up on both sides of the membrane as stored energy, analogous to the way capacitors work in traditional electric circuits.

But unlike regular capacitors, the memcapacitor can “remember” a previously applied voltage and—literally—shape how information is processed. The synthetic membranes change surface area and thickness depending on electrical activity. These shapeshifting membranes could be tuned as adaptive filters for specific biophysical and biochemical signals.
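The “remembering” described above can be sketched as a toy memcapacitive element whose capacitance depends on an internal state that integrates past voltage. The running-sum state and all constants below are my own illustrative assumptions standing in for the membrane’s area and thickness changes, not the published model:

```python
def memcapacitor_charge(voltages, c0=1.0, alpha=0.05):
    """Toy memcapacitive element: an internal state accumulates with past
    applied voltage (standing in for the membrane's geometry change), so the
    same voltage applied later stores a different charge. Constants are
    illustrative assumptions only."""
    state = 0.0
    charges = []
    for v in voltages:
        state += alpha * abs(v)           # geometry drifts with electrical activity
        capacitance = c0 * (1.0 + state)  # state-dependent capacitance
        charges.append(capacitance * v)   # q = C(state) * v
    return charges

# Apply the same voltage three times: the stored charge differs each time.
q = memcapacitor_charge([1.0, 1.0, 1.0])
```

That history dependence, where identical inputs yield different stored charge, is what distinguishes a memcapacitor from an ordinary capacitor.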

“The novel functionality opens avenues for nondigital signal processing and machine learning modeled on nature,” said ORNL’s Pat Collier, a CNMS staff research scientist.

A distinct feature of all digital computers is the separation of processing and memory. Information is transferred back and forth from the hard drive and the central processor, creating an inherent bottleneck in the architecture no matter how small or fast the hardware can be.

Neuromorphic computing, modeled on the nervous system, employs architectures that are fundamentally different in that memory and signal processing are co-located in memory elements—memristors, memcapacitors and meminductors.

These “memelements” make up the synaptic hardware of systems that mimic natural information processing, learning and memory.

Systems designed with memelements offer advantages in scalability and low power consumption, but the real goal is to carve out an alternative path to artificial intelligence, said Collier.

Tapping into biology could enable new computing possibilities, especially in the area of “edge computing,” such as wearable and embedded technologies that are not connected to a cloud but instead make on-the-fly decisions based on sensory input and past experience.

Biological sensing has evolved over billions of years into a highly sensitive system with receptors in cell membranes that are able to pick out a single molecule of a specific odor or taste. “This is not something we can match digitally,” Collier said.

Digital computation is built around digital information, the binary language of ones and zeros coursing through electronic circuits. It can emulate the human brain, but its solid-state components do not compute sensory data the way a brain does.

“The brain computes sensory information pushed through synapses in a neural network that is reconfigurable and shaped by learning,” said Collier. “Incorporating biology—using biomembranes that sense bioelectrochemical information—is key to developing the functionality of neuromorphic computing.”

While numerous solid-state versions of memelements have been demonstrated, the team’s biomimetic elements represent new opportunities for potential “spiking” neural networks that can compute natural data in natural ways.

Spiking neural networks are intended to simulate the way neurons spike with electrical potential and, if the signal is strong enough, pass it on to their neighbors through synapses, carving out learning pathways that are pruned over time for efficiency.

A bio-inspired version with analog data processing is a distant aim. Current early-stage research focuses on developing the components of bio-circuitry.

“We started with the basics, a memristor that can weigh information via conductance to determine if a spike is strong enough to be broadcast through a network of synapses connecting neurons,” said Collier. “Our memcapacitor goes further in that it can actually store energy as an electric charge in the membrane, enabling the complex ‘integrate and fire’ activity of neurons needed to achieve dense networks capable of brain-like computation.”
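The “integrate and fire” activity Collier mentions can be sketched as a minimal leaky integrate-and-fire neuron fed through a single memristor-like synaptic weight. Everything here, including the weight, leak, and threshold values, is an illustrative textbook sketch, not the biomembrane device physics:

```python
def leaky_integrate_and_fire(inputs, weight=0.6, leak=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: a memristor-like synaptic
    weight scales each incoming spike, the membrane potential leaks between
    inputs, and a spike is emitted (with reset) when the potential crosses
    the threshold. All constants are illustrative assumptions."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = v * leak + weight * x   # leak, then integrate the weighted input
        if v >= threshold:          # strong enough: fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

out = leaky_integrate_and_fire([1, 1, 1, 0, 1])
```

A single input spike is not enough to fire this toy neuron; only accumulated, closely spaced inputs cross the threshold, which is the integrate-and-fire behaviour the memcapacitor is meant to enable in hardware.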

The team’s next steps are to explore new biomaterials and study simple networks to achieve more complex brain-like functionalities with memelements.

Here’s a link to and a citation for the paper,

Dynamical nonlinear memory capacitance in biomimetic membranes by Joseph S. Najem, Md Sakib Hasan, R. Stanley Williams, Ryan J. Weiss, Garrett S. Rose, Graham J. Taylor, Stephen A. Sarles & C. Patrick Collier. Nature Communications volume 10, Article number: 3239 (2019) DOI: https://doi.org/10.1038/s41467-019-11223-8 Published: 19 July 2019

This paper is open access.

One final comment, you might recognize one of the authors (R. Stanley Williams) who in 2008 helped launch ‘memristor’ research.

Electronics begone! Enter: the light-based brainlike computing chip

It’s possible I’m wrong, but I think this is the first ‘memristor’-type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured on this blog. Strictly speaking, it’s not a memristor, but it has the same properties, so it qualifies as a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.
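The phase-change synapse described above can be sketched as a toy model in which optical transmission interpolates between the amorphous (more transparent) and crystalline (more absorbing) states, and write pulses shift the crystalline fraction. The transmission values and pulse step below are my own illustrative assumptions, not measured phase-change-material parameters:

```python
def photonic_synapse_transmission(crystallinity, t_amorphous=0.9, t_crystalline=0.3):
    """Toy phase-change photonic synapse: transmission interpolates linearly
    between the amorphous and crystalline states. The transmission values
    are illustrative assumptions only."""
    return t_amorphous + crystallinity * (t_crystalline - t_amorphous)

def apply_write_pulses(crystallinity, n_pulses, step=0.2):
    """Each partial-crystallisation laser pulse nudges the synaptic 'weight',
    capped at fully crystalline."""
    return min(1.0, crystallinity + n_pulses * step)

c = apply_write_pulses(0.0, 2)           # two write pulses partially crystallise the cell
t = photonic_synapse_transmission(c)     # resulting optical "weight"
```

Because the phase state persists without power, the weight is nonvolatile, which is why the same materials are already used on re-writable DVDs.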

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019) DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For information with details such as the total cost, contribution from the EC, the list of partnerships and more there is the Fun-COMP webpage on fabiodisconzi.com.

Brainlike computing with spintronic devices

Adding to the body of ‘memristor’ research I have here, there’s an April 17, 2019 news item on Nanowerk announcing the development of ‘memristor’ hardware by Japanese researchers (Note: A link has been removed),

A research group from Tohoku University has developed spintronics devices which are promising for future energy-efficient and adaptive computing systems, as they behave like neurons and synapses in the human brain (Advanced Materials, “Artificial Neuron and Synapse Realized in an Antiferromagnet/Ferromagnet Heterostructure Using Dynamics of Spin–Orbit Torque Switching”).

Just because this ‘synapse’ is pretty,

Courtesy: Tohoku University

An April 16, 2019 Tohoku University press release, which originated the news item, expands on the theme,

Today’s information society is built on digital computers that have evolved drastically for half a century and are capable of executing complicated tasks reliably. The human brain, by contrast, operates under very limited power and is capable of executing complex tasks efficiently using an architecture that is vastly different from that of digital computers.

So the development of computing schemes or hardware inspired by the processing of information in the brain is of broad interest to scientists in fields ranging from physics, chemistry, material science and mathematics, to electronics and computer science.

In computing, there are various ways to implement the processing of information by a brain. A spiking neural network is one implementation method which closely mimics the brain’s architecture and temporal information processing. Successful implementation of a spiking neural network requires dedicated hardware with artificial neurons and synapses that are designed to exhibit the dynamics of biological neurons and synapses.

Here, the artificial neuron and synapse would ideally be made of the same material system and operated under the same working principle. However, this has been a challenging issue due to the fundamentally different nature of the neuron and synapse in biological neural networks.

The research group – which includes Professor Hideo Ohno (currently the university president), Associate Professor Shunsuke Fukami, Dr. Aleksandr Kurenkov and Professor Yoshihiko Horio – created an artificial neuron and synapse by using spintronics technology. Spintronics is an academic field that aims to simultaneously use an electron’s electric (charge) and magnetic (spin) properties.

The research group had previously developed a functional material system consisting of antiferromagnetic and ferromagnetic materials. This time, they prepared artificial neuronal and synaptic devices microfabricated from the material system, which demonstrated the fundamental behaviors of biological neurons and synapses – leaky integrate-and-fire and spike-timing-dependent plasticity, respectively – based on the same spintronics concept.
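For the curious, here’s a rough sketch in Python of the two behaviours named above, leaky integrate-and-fire and spike-timing-dependent plasticity. The numbers (leak rate, threshold, learning rates) are arbitrary illustrations of the textbook models, not the parameters of the Tohoku devices,

```python
import math

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential leaks toward zero,
    integrates incoming current, and emits a spike (then resets) once it
    crosses the firing threshold."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # fire and reset
        else:
            spikes.append(0)
    return spikes

def stdp_update(weight, dt, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Spike-timing-dependent plasticity: strengthen the synapse when the
    presynaptic spike precedes the postsynaptic one (dt > 0), weaken it
    otherwise, with an exponential dependence on the timing difference."""
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)
    else:
        weight -= a_minus * math.exp(dt / tau)
    return min(max(weight, 0.0), 1.0)           # keep the weight in [0, 1]

print(lif_neuron([0.4] * 10))  # constant drive makes the neuron fire periodically
```

In the spintronics devices, the analogue of the leaking potential is the magnetic state of the antiferromagnet/ferromagnet heterostructure, switched by spin–orbit torque rather than by software.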

Spiking neural networks are known to be advantageous over today’s artificial intelligence for the processing and prediction of temporal information. Expanding the developed technology to the unit-circuit, block and system levels is expected to lead to computers that can process time-varying information, such as voice and video, with a small amount of power, or to edge devices that can adapt to users and the environment through usage.

Here’s a link to and a citation for the paper,

Artificial Neuron and Synapse Realized in an Antiferromagnet/Ferromagnet Heterostructure Using Dynamics of Spin–Orbit Torque Switching by Aleksandr Kurenkov, Samik DuttaGupta, Chaoliang Zhang, Shunsuke Fukami, Yoshihiko Horio, Hideo Ohno. Advanced Materials https://doi.org/10.1002/adma.201900636 First published: 16 April 2019

This paper is behind a paywall.

Memristive capabilities from IBM (International Business Machines)

Does memristive mean it’s like a memristor but it’s not one? In any event, IBM is claiming some new ground in the world of cognitive computing (also known as neuromorphic computing).

An artistic rendering of a population of stochastic phase-change neurons which appears on the cover of Nature Nanotechnology, 3 August 2016. (Credit: IBM Research)


From an Aug. 3, 2016 news item on phys.org,

IBM scientists have created randomly spiking neurons using phase-change materials to store and process data. This demonstration marks a significant step forward in the development of energy-efficient, ultra-dense integrated neuromorphic technologies for applications in cognitive computing.

Inspired by the way the biological brain functions, scientists have theorized for decades that it should be possible to imitate the versatile computational capabilities of large populations of neurons. However, doing so at densities and with a power budget that would be comparable to those seen in biology has been a significant challenge, until now.

“We have been researching phase-change materials for memory applications for over a decade, and our progress in the past 24 months has been remarkable,” said IBM Fellow Evangelos Eleftheriou. “In this period, we have discovered and published new memory techniques, including projected memory, stored 3 bits per cell in phase-change memory for the first time, and now are demonstrating the powerful capabilities of phase-change-based artificial neurons, which can perform various computational primitives such as data-correlation detection and unsupervised learning at high speeds using very little energy.”

An Aug. 3, 2016 IBM news release, which originated the news item, expands on the theme,

The artificial neurons designed by IBM scientists in Zurich consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states, an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These materials are the basis of re-writable Blu-ray discs. However, the artificial neurons do not store digital information; they are analog, just like the synapses and neurons in our biological brain.

In the published demonstration, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallization of the phase-change material, ultimately causing the neuron to fire. In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This is the foundation for event-based computation and, in principle, is similar to how our brain triggers a response when we touch something hot.
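Here’s a toy numerical sketch of that pulse-driven integrate-and-fire behaviour, my own illustration rather than IBM’s implementation. Each electrical pulse crystallizes a further fraction of the cell, and once the crystalline fraction crosses a (made-up) threshold the neuron fires and a reset pulse re-amorphizes the material,

```python
def phase_change_neuron(num_pulses, growth=0.15, threshold=0.8):
    """Toy integrate-and-fire model of a phase-change cell: each pulse
    crystallizes a further fraction of the material, and once the
    crystalline fraction crosses a threshold the conductance jumps,
    the neuron 'fires', and a reset pulse re-amorphizes the cell."""
    fraction = 0.0          # crystalline fraction; 0 = fully amorphous
    fire_times = []
    for t in range(num_pulses):
        fraction += growth * (1.0 - fraction)  # crystallization saturates
        if fraction >= threshold:
            fire_times.append(t)
            fraction = 0.0                     # melt-quench back to amorphous
    return fire_times

print(phase_change_neuron(30))  # under this steady drive, it fires every 10th pulse
```

The physical device adds stochasticity on top of this: the exact state after each pulse varies from cycle to cycle, which is what makes the neurons “randomly spiking.”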

Exploiting this integrate-and-fire property, even a single neuron can be used to detect patterns and discover correlations in real-time streams of event-based data. For example, in the Internet of Things, sensors could collect and analyze volumes of weather data at the edge for faster forecasts. The artificial neurons could be used to detect patterns in financial transactions to find discrepancies, or to mine data from social media to discover new cultural trends in real time. Large populations of these high-speed, low-energy nano-scale neurons could also be used in neuromorphic coprocessors with co-located memory and processing units.
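To make the correlation-detection claim concrete, here’s a simplified stand-in for the scheme (the stream counts, event probabilities and firing threshold are my own choices, not the paper’s): a single integrate-and-fire unit receives events from many streams at once, and fires far more often when the streams share a common driver,

```python
import random

def coincidence_rate(streams, threshold):
    """Feed each time step's events from all streams into one
    integrate-and-fire unit; return the fraction of steps in which
    enough coincident events arrive to make it fire."""
    steps = len(streams[0])
    fires = sum(1 for events in zip(*streams) if sum(events) >= threshold)
    return fires / steps

random.seed(1)
steps = 2000
# Eight uncorrelated streams: independent events with probability 0.2.
uncorrelated = [[random.random() < 0.2 for _ in range(steps)] for _ in range(8)]
# Eight correlated streams: each mostly copies a shared driver stream.
driver = [random.random() < 0.2 for _ in range(steps)]
correlated = [[d if random.random() < 0.8 else (random.random() < 0.2) for d in driver]
              for _ in range(8)]

# Correlated streams produce many more coincidences, so the unit fires far more often.
print(coincidence_rate(uncorrelated, 5), coincidence_rate(correlated, 5))
```

In the IBM work the discrimination comes from the device physics, the correlated inputs crystallize the cell faster between resets, but the statistical idea is the same.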

IBM scientists have organized hundreds of artificial neurons into populations and used them to represent fast and complex signals. Moreover, the artificial neurons have been shown to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100 Hz. The energy required for each neuron update was less than five picojoules and the average power less than 120 microwatts; for comparison, a 60-watt lightbulb consumes 60 million microwatts.

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, a co-author of the paper.

Here’s a link to and a citation for the paper,

Stochastic phase-change neurons by Tomas Tuma, Angeliki Pantazi, Manuel Le Gallo, Abu Sebastian, & Evangelos Eleftheriou. Nature Nanotechnology  11, 693–699 (2016) doi:10.1038/nnano.2016.70 Published online 16 May 2016

I gather IBM waited for the print version of the paper before publicizing the work. The online version is behind a paywall. For those who can’t get past the paywall, there is a video offering a demonstration of sorts,

For the interested, the US government recently issued a white paper on neuromorphic computing (my Aug. 22, 2016 post).

This team has published a paper that has a similar theme to the one in Nature Nanotechnology,

All-memristive neuromorphic computing with level-tuned neurons by Angeliki Pantazi, Stanisław Woźniak, Tomas Tuma, and Evangelos Eleftheriou. Nanotechnology, Volume 27, Number 35  DOI: 10.1088/0957-4484/27/35/355205 Published 26 July 2016


This paper is open access.

An Aug. 18, 2016 news piece by Lisa Zyga for phys.org provides a summary of the research in the July 2016 published paper.