Research on novel nanoelectronic devices, led by the University of Southampton, has enabled brain neurons and artificial neurons to communicate with each other. This study has shown for the first time how three key emerging technologies can work together: brain-computer interfaces, artificial neural networks and advanced memory technologies (also known as memristors). The discovery opens the door to further significant developments in neural and artificial intelligence research.
Brain functions are made possible by circuits of spiking neurons, connected together by microscopic, but highly complex links called ‘synapses’. In this new study, published in the journal Scientific Reports, the scientists created a hybrid neural network where biological and artificial neurons in different parts of the world were able to communicate with each other over the internet through a hub of artificial synapses made using cutting-edge nanotechnology. This is the first time the three components have come together in a unified network.
During the study, researchers based at the University of Padova in Italy cultivated rat neurons in their laboratory, whilst partners from the University of Zurich and ETH Zurich created artificial neurons on silicon microchips. The virtual laboratory was brought together via an elaborate setup controlling nanoelectronic synapses developed at the University of Southampton. These synaptic devices are known as memristors.
The Southampton based researchers captured spiking events being sent over the internet from the biological neurons in Italy and then distributed them to the memristive synapses. Responses were then sent onward to the artificial neurons in Zurich also in the form of spiking activity. The process simultaneously works in reverse too; from Zurich to Padova. Thus, artificial and biological neurons were able to communicate bidirectionally and in real time.
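As a rough illustration of the relay described above, here is a minimal sketch in which spike events from one neuron are scaled by a memristive synapse's conductance before being forwarded to the receiving neuron. The class, the threshold model, and all the numbers are my own illustrative assumptions, not code from the study.

```python
# Hypothetical sketch of the spike relay: events from a "pre" neuron are
# weighted by a memristive synapse state and forwarded onward. The same
# machinery would run in both directions (Padova -> Zurich and back).

class MemristiveSynapse:
    """Toy synapse: a conductance that scales incoming spikes."""
    def __init__(self, conductance=0.5):
        self.conductance = conductance  # normalised 0..1, illustrative

    def relay(self, spike_amplitude):
        # Output current is the incoming spike scaled by the synaptic weight.
        return spike_amplitude * self.conductance

def forward_spikes(spikes, synapse, threshold=0.3):
    """Relay a train of spike amplitudes; report which ones drive the target."""
    return [synapse.relay(s) >= threshold for s in spikes]

syn = MemristiveSynapse(conductance=0.8)
fired = forward_spikes([1.0, 0.2, 0.5], syn)
print(fired)  # strong spikes cross the threshold, weak ones do not
```

The point of the sketch is only that the synapse sits between the two neurons and its conductance decides which spikes get through; the real system does this with physical memristor devices, not software weights.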
Themis Prodromakis, Professor of Nanotechnology and Director of the Centre for Electronics Frontiers at the University of Southampton, said: “One of the biggest challenges in conducting research of this kind and at this level has been integrating such distinct cutting-edge technologies and specialist expertise that are not typically found under one roof. By creating a virtual lab we have been able to achieve this.”
The researchers now anticipate that their approach will ignite interest from a range of scientific disciplines and accelerate the pace of innovation and scientific advancement in the field of neural interfaces research. In particular, the ability to seamlessly connect disparate technologies across the globe is a step towards the democratisation of these technologies, removing a significant barrier to collaboration.
Professor Prodromakis added “We are very excited with this new development. On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI [artificial intelligence] chips.”
I’m fascinated by this work and, after taking a look at the paper, I have to say it’s surprisingly accessible. In other words, I think I get the general picture. For example (from the Introduction to the paper; citation and link follow further down),
… To emulate plasticity, the memristor MR1 is operated as a two-terminal device through a control system that receives pre- and post-synaptic depolarisations from one silicon neuron (ANpre) and one biological neuron (BN), respectively. …
If I understand this properly, they’ve integrated a biological neuron and an artificial neuron in a single system across three countries.
For those who care to venture forth, here’s a link and a citation for the paper,
Memristive synapses connect brain and silicon spiking neurons by Alexantrou Serb, Andrea Corna, Richard George, Ali Khiat, Federico Rocchi, Marco Reato, Marta Maschietto, Christian Mayr, Giacomo Indiveri, Stefano Vassanelli & Themistoklis Prodromakis. Scientific Reports volume 10, Article number: 2590 (2020) DOI: https://doi.org/10.1038/s41598-020-58831-9 Published 25 February 2020
It has no brain but it learns, it has about 720 sexes, and it travels at approximately 4 cm (1.6 inches) per hour: it is known as ‘le blob’. I was fascinated when I first stumbled across the news and had to post this piece, though I wish I hadn’t waited so long.
Here’s the 101: the 900-odd species of slime mould, of which P. polycephalum is just one, are a taxonomic headache. They’re currently boxed into the Protista kingdom, because where else are you going to put something that isn’t a fungus, plant, bacterium, or animal?
When life is good, they tend to live solitary lives as single cells, much like amoebae.
On occasion they squish together, forming a wide, branching structure called a plasmodium that can cover several square metres as they search cities to conquer. Well, bacteria to digest at least.
If you thought your experience on Tinder was hard, dating for slime moulds is a nightmare. Cells can only mix-and-match their genetic material if each has a compatible set of genes called matA, matB, and matC, each with up to 16 variations.
But the truly fascinating part is their ability to sense and rapidly adapt to their environment – a behaviour we might, for lack of a better word, call learning.
It isn’t an animal, a plant, or a fungus. The slime mold (Physarum polycephalum) is a strange, creeping, bloblike organism made up of one giant cell. Though it has no brain, it can learn from experience, as biologists at the Research Centre on Animal Cognition (CNRS, Université Toulouse III — Paul Sabatier) previously demonstrated. Now the same team of scientists has gone a step further, proving that a slime mold can transmit what it has learned to a fellow slime mold when the two combine. These new findings are published in the December 21, 2016, issue of the Proceedings of the Royal Society B.
Imagine you could temporarily fuse with someone, acquire that person’s knowledge, and then split off to become your separate self again. With slime molds, that really happens! The slime mold — Physarum polycephalum for scientists — is a unicellular organism whose natural habitat is forest litter. But it can also be cultured in a laboratory petri dish. Audrey Dussutour and David Vogel had already trained slime molds to move past repellent but harmless substances (e.g. coffee, quinine, or salt) to reach their food. They now reveal that a slime mold that has learned to ignore salt can transmit this acquired behavior to another simply by fusing with it.
To achieve this, the researchers taught more than 2,000 slime molds that salt posed no threat. In order to reach their food, these slime molds had to cross a bridge covered with salt. This experience made them habituated slime molds. Meanwhile, another 2,000 slime molds had to cross a bridge bare of any substance. They made up the group of naive slime molds. After this training period, the scientists grouped slime molds into habituated, naive, and mixed pairs. Paired slime molds fused together where they came into contact. The new, fused slime molds then had to cross salt-covered bridges. To the researchers’ surprise, the mixed slime molds moved just as fast as habituated pairs, and much faster than naive ones, suggesting that knowledge of the harmless nature of salt had been shared. This held true for slime molds formed from 3 or 4 individuals. No matter how many fused, only 1 habituated slime mold was needed to transfer the information.
To check that transfer had indeed taken place, the scientists separated the slime molds 1 hour and 3 hours after fusion and repeated the bridge experiment. Only naive slime molds that had been fused with habituated slime molds for 3 hours ignored the salt; all others were repulsed by it. This was proof of learning. When viewing the slime molds through a microscope, the scientists noticed that, after 3 hours, a vein formed at the point of fusion. This vein is undoubtedly the channel through which information is shared. The next challenges facing the researchers are to elucidate the form this information takes, and to test whether more than one behavior can be transmitted simultaneously. If Slime Mold A learns how to ignore quinine and Slime Mold B to ignore salt, the biologists wonder whether both behaviors can be transmitted and retained through fusion.
Here’s a link to and a citation for the paper published in 2016,
Le blob is a complex unicellular organism with no nervous system. It can store knowledge and pass it on to its fellows, but how it does so remained a mystery. Researchers from the Centre de recherches sur la cognition animale (CNRS/UT3 Paul Sabatier)* have now shown that the blob learns to tolerate a substance by absorbing it.
This discovery stems from an observation: blobs exchange information only when their vein networks fuse. Does the knowledge, then, circulate through these veins? And is the substance the blob becomes habituated to the medium of its “memory”?
First, the team of scientists trained blobs to cross salty environments for six days in order to habituate them to salt. They then measured the salt concentration inside these blobs: they contained ten times more salt than “naive” blobs. The researchers next placed them in a neutral environment and observed that they excreted the stored salt within two days, thereby losing “the memory”. This experiment thus seemed to indicate a link between the salt concentration inside the organism and the “memory” of the learning.
To go further and confirm this hypothesis, the scientists gave naive blobs the “memory” of salt habituation by injecting salt directly into their bodies. Two hours later, the blobs no longer behaved like naive ones but like blobs that had undergone six days of training.
When environmental conditions deteriorate, blobs can enter a dormant state. The researchers showed that one month after entering this state, the blobs still retained their habituation to salt. Blobs in fact store the absorbed salt during the dormant phase and so retain the knowledge over the long term.
The results of this study demonstrate that the aversive substance itself could be the medium of the blob’s “memory”. The researchers are now trying to find out whether the blob can memorise several aversive substances at once and to what extent it can become habituated to them.
* The Centre de recherches sur la cognition animale is part of the Centre de biologie intégrative (CNRS/UT3 Paul Sabatier)
Here’s the abstract for the paper (the link and citation follow afterward),
Learning and memory are indisputably key features of animal success. Using information about past experiences is critical for optimal decision-making in a fluctuating environment. Those abilities are usually believed to be limited to organisms with a nervous system, precluding their existence in non-neural organisms. However, recent studies showed that the slime mould Physarum polycephalum, despite being unicellular, displays habituation, a simple form of learning. In this paper, we studied the possible substrate of both short- and long-term habituation in slime moulds. We habituated slime moulds to sodium, a known repellent, using a 6 day training and turned them into a dormant state named sclerotia. Those slime moulds were then revived and tested for habituation. We showed that information acquired during the training was preserved through the dormant stage as slime moulds still showed habituation after a one-month dormancy period. Chemical analyses indicated a continuous uptake of sodium during the process of habituation and showed that sodium was retained throughout the dormant stage. Lastly, we showed that memory inception via constrained absorption of sodium for 2 h elicited habituation. Our results suggest that slime moulds absorbed the repellent and used it as a ‘circulating memory’.
This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.
Here’s the link and the citation for the 2019 paper,
Should you ever wish to find ‘le blob’, the Paris Zoological Park, known as the parc zoologique de Paris, is one of the four establishments that together make up the Muséum national d’histoire naturelle in Paris; there are others outside Paris. (You can find more in the Muséum’s Wikipedia entry, though it is in French.)
The last time I wrote about memcapacitors (June 30, 2014 posting: Memristors, memcapacitors, and meminductors for faster computers), the ideas were largely theoretical; I believe this work is the first research I’ve seen on the topic. From an October 17, 2019 news item on ScienceDaily,
Researchers at the Department of Energy’s Oak Ridge National Laboratory [ORNL], the University of Tennessee and Texas A&M University demonstrated bio-inspired devices that accelerate routes to neuromorphic, or brain-like, computing.
Results published in Nature Communications report the first example of a lipid-based “memcapacitor,” a charge storage component with memory that processes information much like synapses do in the brain. Their discovery could support the emergence of computing networks modeled on biology for a sensory approach to machine learning.
“Our goal is to develop materials and computing elements that work like biological synapses and neurons—with vast interconnectivity and flexibility—to enable autonomous systems that operate differently than current computing devices and offer new functionality and learning capabilities,” said Joseph Najem, a recent postdoctoral researcher at ORNL’s Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, and current assistant professor of mechanical engineering at Penn State.
The novel approach uses soft materials to mimic biomembranes and simulate the way nerve cells communicate with one another.
The team designed an artificial cell membrane, formed at the interface of two lipid-coated water droplets in oil, to explore the material’s dynamic, electrophysiological properties. At applied voltages, charges build up on both sides of the membrane as stored energy, analogous to the way capacitors work in traditional electric circuits.
But unlike regular capacitors, the memcapacitor can “remember” a previously applied voltage and—literally—shape how information is processed. The synthetic membranes change surface area and thickness depending on electrical activity. These shapeshifting membranes could be tuned as adaptive filters for specific biophysical and biochemical signals.
“The novel functionality opens avenues for nondigital signal processing and machine learning modeled on nature,” said ORNL’s Pat Collier, a CNMS staff research scientist.
A distinct feature of all digital computers is the separation of processing and memory. Information is transferred back and forth between the hard drive and the central processor, creating an inherent bottleneck in the architecture no matter how small or fast the hardware becomes.
Neuromorphic computing, modeled on the nervous system, employs architectures that are fundamentally different in that memory and signal processing are co-located in memory elements—memristors, memcapacitors and meminductors.
These “memelements” make up the synaptic hardware of systems that mimic natural information processing, learning and memory.
Systems designed with memelements offer advantages in scalability and low power consumption, but the real goal is to carve out an alternative path to artificial intelligence, said Collier.
Tapping into biology could enable new computing possibilities, especially in the area of “edge computing,” such as wearable and embedded technologies that are not connected to a cloud but instead make on-the-fly decisions based on sensory input and past experience.
Biological sensing has evolved over billions of years into a highly sensitive system with receptors in cell membranes that are able to pick out a single molecule of a specific odor or taste. “This is not something we can match digitally,” Collier said.
Digital computation is built around digital information, the binary language of ones and zeros coursing through electronic circuits. It can emulate the human brain, but its solid-state components do not compute sensory data the way a brain does.
“The brain computes sensory information pushed through synapses in a neural network that is reconfigurable and shaped by learning,” said Collier. “Incorporating biology—using biomembranes that sense bioelectrochemical information—is key to developing the functionality of neuromorphic computing.”
While numerous solid-state versions of memelements have been demonstrated, the team’s biomimetic elements represent new opportunities for potential “spiking” neural networks that can compute natural data in natural ways.
Spiking neural networks are intended to simulate the way neurons spike with electrical potential and, if the signal is strong enough, pass it on to their neighbors through synapses, carving out learning pathways that are pruned over time for efficiency.
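The spiking behaviour described above is commonly modelled as a leaky integrate-and-fire (LIF) neuron: the membrane potential integrates inputs, leaks over time, and fires when a threshold is crossed. This is the textbook model, not code from the ORNL study, and the parameters are illustrative.

```python
# Leaky integrate-and-fire (LIF) sketch of the "integrate and fire"
# behaviour: inputs accumulate with leak; a spike fires at threshold.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate inputs with leak; emit a spike when the threshold is crossed."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = v * leak + i          # leaky integration of the input current
        if v >= threshold:
            spikes.append(True)   # fire...
            v = 0.0               # ...and reset the membrane potential
        else:
            spikes.append(False)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.2]))  # only the accumulated third input fires
```

Sub-threshold inputs accumulate until the third one tips the neuron over; after the reset, the weak fourth input does nothing, which is the “strong enough to be broadcast” behaviour the press release describes.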
A bio-inspired version with analog data processing is a distant aim. Current early-stage research focuses on developing the components of bio-circuitry.
“We started with the basics, a memristor that can weigh information via conductance to determine if a spike is strong enough to be broadcast through a network of synapses connecting neurons,” said Collier. “Our memcapacitor goes further in that it can actually store energy as an electric charge in the membrane, enabling the complex ‘integrate and fire’ activity of neurons needed to achieve dense networks capable of brain-like computation.”
The team’s next steps are to explore new biomaterials and study simple networks to achieve more complex brain-like functionalities with memelements.
Here’s a link to and a citation for the paper,
Dynamical nonlinear memory capacitance in biomimetic membranes by Joseph S. Najem, Md Sakib Hasan, R. Stanley Williams, Ryan J. Weiss, Garrett S. Rose, Graham J. Taylor, Stephen A. Sarles & C. Patrick Collier. Nature Communications volume 10, Article number: 3239 (2019) DOI: https://doi.org/10.1038/s41467-019-11223-8 Published July 19, 2019
This paper is open access.
One final comment, you might recognize one of the authors (R. Stanley Williams) who in 2008 helped launch ‘memristor’ research.
I think this is my first encounter with a second-order memristor. An August 28, 2019 news item on Nanowerk announces the research (Note: A link has been removed),
Researchers from the Moscow Institute of Physics and Technology [MIPT] have created a device that acts like a synapse in the living brain, storing information and gradually forgetting it when not accessed for a long time. Known as a second-order memristor, the new device is based on hafnium oxide and offers prospects for designing analog neurocomputers imitating the way a biological brain learns.
An August 28, 2019 MIPT press release (also on EurekAlert), which originated the news item, provides an explanation for neuromorphic computing (analog neurocomputers; brainlike computing), the difference between a first-order and second-order memristor, and an in-depth view of the research,
Neurocomputers, which enable artificial intelligence, emulate the way the brain works. It stores data in the form of synapses, a network of connections between the nerve cells, or neurons. Most neurocomputers have a conventional digital architecture and use mathematical models to invoke virtual neurons and synapses.
Alternatively, an actual on-chip electronic component could stand for each neuron and synapse in the network. This so-called analog approach has the potential to drastically speed up computations and reduce energy costs.
The core component of a hypothetical analog neurocomputer is the memristor. The word is a portmanteau of “memory” and “resistor,” which pretty much sums up what it is: a memory cell acting as a resistor. Loosely speaking, a high resistance encodes a zero, and a low resistance encodes a one. This is analogous to how a synapse conducts a signal between two neurons (one), while the absence of a synapse results in no signal, a zero.
But there is a catch: In an actual brain, the active synapses tend to strengthen over time, while the opposite is true for inactive ones. This phenomenon known as synaptic plasticity is one of the foundations of natural learning and memory. It explains the biology of cramming for an exam and why our seldom accessed memories fade.
Proposed in 2015, the second-order memristor is an attempt to reproduce natural memory, complete with synaptic plasticity. The first mechanism for implementing this involves forming nanosized conductive bridges across the memristor. While initially decreasing resistance, they naturally decay with time, emulating forgetfulness.
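The “forgetting” dynamics described above can be sketched as a conductance that is raised by write pulses but relaxes back toward a baseline between accesses. The exponential relaxation and all the constants below are my own assumptions for illustration, not the MIPT device physics.

```python
import math

# Toy second-order memristor: writes strengthen the "synapse", idle time
# decays it back toward the baseline, emulating fading memory.

class ForgettingMemristor:
    def __init__(self, g_min=0.1, g_max=1.0, tau=5.0):
        self.g = g_min                               # current conductance
        self.g_min, self.g_max, self.tau = g_min, g_max, tau

    def write(self):
        # A pulse strengthens the synapse (bounded at g_max).
        self.g = min(self.g_max, self.g + 0.3)

    def idle(self, dt):
        # Without access, conductance relaxes exponentially toward g_min.
        self.g = self.g_min + (self.g - self.g_min) * math.exp(-dt / self.tau)

m = ForgettingMemristor()
m.write()
m.write()
g_fresh = m.g          # strengthened by two pulses
m.idle(dt=20.0)        # left alone for a long time
print(g_fresh, m.g)    # the conductance has decayed toward the baseline
```

In the MIPT device the decay comes from defects at the silicon/hafnium-oxide interface rather than an explicit time constant, but the observable behaviour is the same shape: unused “memories” fade.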
“The problem with this solution is that the device tends to change its behavior over time and breaks down after prolonged operation,” said the study’s lead author Anastasia Chouprik from MIPT’s Neurocomputing Systems Lab. “The mechanism we used to implement synaptic plasticity is more robust. In fact, after switching the state of the system 100 billion times, it was still operating normally, so my colleagues stopped the endurance test.”
Instead of nanobridges, the MIPT team relied on hafnium oxide to imitate natural memory. This material is ferroelectric: Its internal bound charge distribution — electric polarization — changes in response to an external electric field. If the field is then removed, the material retains its acquired polarization, the way a ferromagnet remains magnetized.
The physicists implemented their second-order memristor as a ferroelectric tunnel junction — two electrodes interlaid with a thin hafnium oxide film (fig. 1, right). The device can be switched between its low and high resistance states by means of electric pulses, which change the ferroelectric film’s polarization and thus its resistance.
“The main challenge that we faced was figuring out the right ferroelectric layer thickness,” Chouprik added. “Four nanometers proved to be ideal. Make it just one nanometer thinner, and the ferroelectric properties are gone, while a thicker film is too wide a barrier for the electrons to tunnel through. And it is only the tunneling current that we can modulate by switching polarization.”
What gives hafnium oxide an edge over other ferroelectric materials, such as barium titanate, is that it is already used by current silicon technology. For example, Intel has been manufacturing microchips based on a hafnium compound since 2007. This makes introducing hafnium-based devices like the memristor reported in this story far easier and cheaper than those using a brand-new material.
In a feat of ingenuity, the researchers implemented “forgetfulness” by leveraging the defects at the interface between silicon and hafnium oxide. Those very imperfections used to be seen as a detriment to hafnium-based microprocessors, and engineers had to find a way around them by incorporating other elements into the compound. Instead, the MIPT team exploited the defects, which make memristor conductivity die down with time, just like natural memories.
Vitalii Mikheev, the first author of the paper, shared the team’s future plans: “We are going to look into the interplay between the various mechanisms switching the resistance in our memristor. It turns out that the ferroelectric effect may not be the only one involved. To further improve the devices, we will need to distinguish between the mechanisms and learn to combine them.”
According to the physicists, they will move on with the fundamental research on the properties of hafnium oxide to make the nonvolatile random access memory cells more reliable. The team is also investigating the possibility of transferring their devices onto a flexible substrate, for use in flexible electronics.
Last year, the researchers offered a detailed description of how applying an electric field to hafnium oxide films affects their polarization. It is this very process that enables reducing ferroelectric memristor resistance, which emulates synapse strengthening in a biological brain. The team also works on neuromorphic computing systems with a digital architecture.
MIPT has provided this image illustrating the research,
Once you get past the technical language (there’s a lot of it), you’ll find that they make the link between biomimicry and memristors explicit. Admittedly I’m not an expert but if I understand the research correctly, the scientists are suggesting that the algorithms used in machine learning today cannot allow memristors to be properly integrated for use in true neuromorphic computing and this work from Russia and Greece points to a new paradigm. If you understand it differently, please do let me know in the comments.
Lobachevsky University scientists together with their colleagues from the National Research Center “Kurchatov Institute” (Moscow) and the National Research Center “Demokritos” (Athens) are working on the hardware implementation of a spiking neural network based on memristors.
The key elements of such a network, along with pulsed neurons, are artificial synaptic connections that can change the strength (weight) of the connection between neurons during learning (Microelectronic Engineering, “Yttria-stabilized zirconia cross-point memristive devices for neuromorphic applications”).
For this purpose, memristive devices based on metal-oxide-metal nanostructures developed at the UNN Physics and Technology Research Institute (PTRI) are suitable, but their use in specific spiking neural network architectures developed at the Kurchatov Institute requires demonstration of biologically plausible learning principles.
The biological mechanism of learning of neural systems is described by Hebb’s rule, according to which learning occurs as a result of an increase in the strength of connection (synaptic weight) between simultaneously active neurons, which indicates the presence of a causal relationship in their excitation. One of the clarifying forms of this fundamental rule is plasticity, which depends on the time of arrival of pulses (Spike-Timing Dependent Plasticity – STDP).
In accordance with STDP, synaptic weight increases if the postsynaptic neuron generates a pulse (spike) immediately after the presynaptic one, and vice versa, the synaptic weight decreases if the postsynaptic neuron generates a spike right before the presynaptic one. Moreover, the smaller the time difference Δt between the pre- and postsynaptic spikes, the more pronounced the weight change will be.
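The STDP rule described above can be sketched as an exponential weight update: potentiate when the postsynaptic spike follows the presynaptic one (Δt > 0), depress when it precedes it (Δt < 0), with smaller |Δt| giving a larger change. The constants are illustrative, not fitted to the study’s devices.

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # post after pre: strengthen
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # post before pre: weaken
    return 0.0

# Closer spike pairs produce larger weight changes, in either direction.
print(stdp_dw(5.0) > stdp_dw(50.0) > 0)    # potentiation, bigger for small dt
print(stdp_dw(-5.0) < stdp_dw(-50.0) < 0)  # depression, bigger for small |dt|
```

This is the “local” rule the researchers contrast with backpropagation later in the piece: the update depends only on the two spike times at that synapse, which is exactly what a physical memristor can implement.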
According to one of the researchers, Head of the UNN PTRI laboratory Alexei Mikhailov, in order to demonstrate the STDP principle, memristive nanostructures based on yttria-stabilized zirconia (YSZ) thin films were used. YSZ is a well-known solid-state electrolyte with high oxygen ion mobility.
“Due to a specified concentration of oxygen vacancies, which is determined by the controlled concentration of yttrium impurities, and the heterogeneous structure of the films obtained by magnetron sputtering, such memristive structures demonstrate controlled bipolar switching between different resistive states in a wide resistance range. The switching is associated with the formation and destruction of conductive channels along grain boundaries in the polycrystalline ZrO2 (Y) film,” notes Alexei Mikhailov.
An array of memristive devices for research was implemented as a microchip mounted in a standard cermet casing, which facilitates integrating the array into a neural network’s analog circuit. The full technological cycle for creating memristive microchips is currently in place at the UNN PTRI, and the Greek partners have established that the devices can in future be scaled down to a minimum size of about 50 nm. “Our studies of the dynamic plasticity of the memristive devices have shown that the form of the conductance change as a function of Δt is in good agreement with the STDP learning rules,” continues Alexei Mikhailov. “It should also be noted that if the initial value of the memristor conductance is close to the maximum, it is easy to reduce the corresponding weight but difficult to enhance it, while in the case of a memristor with minimum conductance in the initial state, it is difficult to reduce its weight but easy to enhance it.”
According to Vyacheslav Demin, director-coordinator in the area of nature-like technologies of the Kurchatov Institute, who is one of the ideologues of this work, the established pattern of change in the memristor conductance clearly demonstrates the possibility of hardware implementation of the so-called local learning rules. Such rules for changing the strength of synaptic connections depend only on the values of variables that are present locally at each time point (neuron activities and current weights).
“This essentially distinguishes such principle from the traditional learning algorithm, which is based on global rules for changing weights, using information on the error values at the current time point for each neuron of the output neural network layer (in a widely popular group of error back propagation methods). The traditional principle is not biosimilar, it requires “external” (expert) knowledge of the correct answers for each example presented to the network (that is, they do not have the property of self-learning). This principle is difficult to implement on the basis of memristors, since it requires controlled precise changes of memristor conductances, as opposed to local rules. Such precise control is not always possible due to the natural variability (a wide range of parameters) of memristors as analog elements,” says Vyacheslav Demin.
Local learning rules of the STDP type implemented in hardware on memristors provide the basis for autonomous (“unsupervised”) learning of a spiking neural network. In this case, the final state of the network does not depend on its initial state, but depends only on the learning conditions (a specific sequence of pulses). According to Vyacheslav Demin, this opens up prospects for the application of local learning rules based on memristors when solving artificial intelligence problems with the use of complex spiking neural network architectures.
There are three (or more?) possible applications, including neuromorphic computing, for this new optoelectronic technology, which is based on black phosphorus. A July 16, 2019 news item on Nanowerk announces the research,
Researchers from RMIT University [Australia] drew inspiration from an emerging tool in biotechnology – optogenetics – to develop a device that replicates the way the brain stores and loses information.
Optogenetics allows scientists to delve into the body’s electrical system with incredible precision, using light to manipulate neurons so that they can be turned on or off.
The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light, enabling it to mimic the way that neurons work to store and delete information in the brain.
Research team leader Dr Sumeet Walia said the technology moves us closer towards artificial intelligence (AI) that can harness the brain’s full sophisticated functionality.
“Our optogenetically-inspired chip imitates the fundamental biology of nature’s best computer – the human brain,” Walia said.
“Being able to store, delete and process information is critical for computing, and the brain does this extremely efficiently.
“We’re able to simulate the brain’s neural approach simply by shining different colours onto our chip.
“This technology takes us further on the path towards fast, efficient and secure light-based computing.
“It also brings us an important step closer to the realisation of a bionic brain – a brain-on-a-chip that can learn from its environment just like humans do.”
Dr Taimur Ahmed, lead author of the study published in Advanced Functional Materials, said being able to replicate neural behavior on an artificial chip offered exciting avenues for research across sectors.
“This technology creates tremendous opportunities for researchers to better understand the brain and how it’s affected by disorders that disrupt neural connections, like Alzheimer’s disease and dementia,” Ahmed said.
The researchers, from the Functional Materials and Microsystems Research Group at RMIT, have also demonstrated the chip can perform logic operations – information processing – ticking another box for brain-like functionality.
Developed at RMIT’s MicroNano Research Facility, the technology is compatible with existing electronics and has also been demonstrated on a flexible platform, for integration into wearable electronics.
How the chip works:
Neural connections happen in the brain through electrical impulses. When tiny energy spikes reach a certain threshold of voltage, the neurons bind together – and you’ve started creating a memory.
On the chip, light is used to generate a photocurrent. Switching between colors causes the current to reverse direction from positive to negative.
This direction switch, or polarity shift, is equivalent to the binding and breaking of neural connections, a mechanism that enables neurons to connect (and induce learning) or inhibit (and induce forgetting).
This is akin to optogenetics, where light-induced modification of neurons causes them to either turn on or off, enabling or inhibiting connections to the next neuron in the chain.
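As a toy illustration of the mechanism described above (the wavelength threshold and step size are invented for illustration, not the device’s actual figures), the colour-dependent polarity switch can be sketched like this:

```python
def update_weight(weight, wavelength_nm, step=0.1):
    """Nudge a synaptic weight up or down depending on illumination colour."""
    if wavelength_nm < 500:  # e.g. blue light: positive photocurrent
        weight += step       # "binding" -> learning
    else:                    # e.g. red light: current reverses
        weight -= step       # "breaking" -> forgetting
    return max(0.0, min(1.0, weight))  # clamp to a physical range

w = 0.5
w = update_weight(w, 450)  # potentiate
w = update_weight(w, 650)  # depress back
```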
To develop the technology, the researchers used a material called black phosphorus (BP) that is inherently prone to defects.
This is usually a problem for optoelectronics, but with precision engineering the researchers were able to harness the defects to create new functionality.
“Defects are usually looked on as something to be avoided, but here we’re using them to create something novel and useful,” Ahmed said.
“It’s a creative approach to finding solutions for the technical challenges we face.”
The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.
In a paper published online by the journal Science on April 25, the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.
Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.
“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”
The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than a more traditional computer system where the data is processed first and then later moved to storage.
Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.
In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.
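A back-of-envelope sketch of that trade-off, with invented numbers rather than measurements from the paper: total array power scales directly with per-device current, so cutting the current per synapse cuts the array’s power draw by the same factor.

```python
def array_power_watts(n_devices, current_amps, voltage_volts=0.5):
    """Total power if every device in the array conducts simultaneously."""
    return n_devices * current_amps * voltage_volts

n = 1024 * 1024                      # a large simulated array
before = array_power_watts(n, 1e-6)  # 1 microamp per device
after = array_power_watts(n, 1e-8)   # retooled: 100x less current

# Cutting per-device current by 100x cuts array power by 100x too.
```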
The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.
“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”
During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to their speed – without seeing any degradation in their behavior.
“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”
Room for creativity
The researchers haven’t yet submitted their array to tests that determine how well it learns but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.
“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”
Adding to the body of ‘memristor’ research I have here, there’s an April 17, 2019 news item on Nanowerk announcing the development of ‘memristor’ hardware by Japanese researchers (Note: A link has been removed),
A research group from Tohoku University has developed spintronics devices which are promising for future energy-efficient and adaptive computing systems, as they behave like neurons and synapses in the human brain (Advanced Materials, “Artificial Neuron and Synapse Realized in an Antiferromagnet/Ferromagnet Heterostructure Using Dynamics of Spin–Orbit Torque Switching”).
Today’s information society is built on digital computers that have evolved drastically for half a century and are capable of executing complicated tasks reliably. The human brain, by contrast, operates under very limited power and is capable of executing complex tasks efficiently using an architecture that is vastly different from that of digital computers.
So the development of computing schemes or hardware inspired by the processing of information in the brain is of broad interest to scientists in fields ranging from physics, chemistry, material science and mathematics, to electronics and computer science.
In computing, there are various ways to implement the brain’s processing of information. A spiking neural network is one implementation that closely mimics the brain’s architecture and temporal information processing. Successful implementation of a spiking neural network requires dedicated hardware with artificial neurons and synapses that are designed to exhibit the dynamics of biological neurons and synapses.
Here, the artificial neuron and synapse would ideally be made of the same material system and operated under the same working principle. However, this has been a challenging issue due to the fundamentally different nature of the neuron and synapse in biological neural networks.
The research group – which includes Professor Hideo Ohno (currently the university president), Associate Professor Shunsuke Fukami, Dr. Aleksandr Kurenkov and Professor Yoshihiko Horio – created an artificial neuron and synapse by using spintronics technology. Spintronics is an academic field that aims to simultaneously use an electron’s electric (charge) and magnetic (spin) properties.
The research group had previously developed a functional material system consisting of antiferromagnetic and ferromagnetic materials. This time, they prepared artificial neuronal and synaptic devices microfabricated from the material system, which demonstrated fundamental behavior of biological neuron and synapse – leaky integrate-and-fire and spike-timing-dependent plasticity, respectively – based on the same concept of spintronics.
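Leaky integrate-and-fire behaviour, which the spintronic neuron reproduces in hardware, is easy to sketch in software. This toy version (all parameter values are illustrative, not taken from the paper) integrates its input with a leak and fires whenever a threshold is crossed:

```python
def simulate_lif(input_current, v_thresh=1.0, leak=0.9, v_reset=0.0):
    """Integrate input with leak; emit a spike when the threshold is crossed."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i      # leaky integration
        if v >= v_thresh:     # fire...
            spikes.append(1)
            v = v_reset       # ...and reset
        else:
            spikes.append(0)
    return spikes

# A steady drive accumulates enough potential to fire every few steps,
# then the membrane resets and the cycle repeats.
out = simulate_lif([0.4] * 10)
```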
The spiking neural network is known to be advantageous over today’s artificial intelligence for the processing and prediction of temporal information. Expansion of the developed technology to unit-circuit, block and system levels is expected to lead to computers that can process time-varying information such as voice and video with a small amount of power, or to edge devices that can adapt to users and the environment through usage.
At a guess, this was originally a photograph which has been passed through some sort of programme to give it a paintinglike quality.
Moving on to the research, I don’t see any reference to memristors (another of the ‘devices’ that mimics the human brain) so perhaps this is an entirely different way to mimic human brains? A February 5, 2019 news item on ScienceDaily announces the work from Linköping University (Sweden),
A new transistor based on organic materials has been developed by scientists at Linköping University. It has the ability to learn, and is equipped with both short-term and long-term memory. The work is a major step on the way to creating technology that mimics the human brain.
Until now, brains have been unique in being able to create connections where there were none before. In a scientific article in Advanced Science, researchers from Linköping University describe a transistor that can create a new connection between an input and an output. They have incorporated the transistor into an electronic circuit that learns how to link a certain stimulus with an output signal, in the same way that a dog learns that the sound of a food bowl being prepared means that dinner is on the way.
A normal transistor acts as a valve that amplifies or dampens the output signal, depending on the characteristics of the input signal. In the organic electrochemical transistor that the researchers have developed, the channel in the transistor consists of an electropolymerised conducting polymer. The channel can be formed, grown or shrunk, or completely eliminated during operation. It can also be trained to react to a certain stimulus, a certain input signal, such that the transistor channel becomes more conductive and the output signal larger.
“It is the first time that real-time formation of new electronic components has been shown in neuromorphic devices”, says Simone Fabiano, principal investigator in organic nanoelectronics at the Laboratory of Organic Electronics, Campus Norrköping.
The channel is grown by increasing the degree of polymerisation of the material in the transistor channel, thereby increasing the number of polymer chains that conduct the signal. Alternatively, the material may be overoxidised (by applying a high voltage) and the channel becomes inactive. Temporary changes of the conductivity can also be achieved by doping or dedoping the material.
“We have shown that we can induce both short-term and permanent changes to how the transistor processes information, which is vital if one wants to mimic the ways that brain cells communicate with each other”, says Jennifer Gerasimov, postdoc in organic nanoelectronics and one of the authors of the article.
By changing the input signal, the strength of the transistor response can be modulated across a wide range, and connections can be created where none previously existed. This gives the transistor a behaviour that is comparable with that of the synapse, or the communication interface between two brain cells.
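As a rough software analogy of the channel behaviour described above (the class, method names and constants are invented for illustration, not the researchers’ model), one could model the growing and vanishing channel like this:

```python
class OrganicChannel:
    """Toy model: conductance grows with the number of polymer chains."""

    def __init__(self):
        self.chains = 0            # electropolymerised polymer chains

    def polymerise(self, pulses=1):
        """Long-term potentiation: grow the channel."""
        self.chains += pulses

    def overoxidise(self):
        """High voltage destroys conduction: the channel goes inactive."""
        self.chains = 0

    @property
    def conductance(self):
        return 0.01 * self.chains  # arbitrary units

ch = OrganicChannel()
for _ in range(5):
    ch.polymerise()                # training strengthens the connection
g_trained = ch.conductance
ch.overoxidise()                   # channel eliminated
```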
It is also a major step towards machine learning using organic electronics. Software-based artificial neural networks are currently used in machine learning to achieve what is known as “deep learning”. Software requires that the signals are transmitted between a huge number of nodes to simulate a single synapse, which takes considerable computing power and thus consumes considerable energy.
“We have developed hardware that does the same thing, using a single electronic component”, says Jennifer Gerasimov.
“Our organic electrochemical transistor can therefore carry out the work of thousands of normal transistors with an energy consumption that approaches the energy consumed when a human brain transmits signals between two cells”, confirms Simone Fabiano.
The transistor channel has not been constructed using the most common polymer used in organic electronics, PEDOT, but instead using a polymer of a newly-developed monomer, ETE-S, produced by Roger Gabrielsson, who also works at the Laboratory of Organic Electronics and is one of the authors of the article. ETE-S has several unique properties that make it perfectly suited for this application – it forms sufficiently long polymer chains, is water-soluble while the polymer form is not, and it produces polymers with an intermediate level of doping. The polymer PETE-S is produced in its doped form with an intrinsic negative charge to balance the positive charge carriers (it is p-doped).
The European Union’s Human Brain Project was announced in January 2013. It, along with the Graphene Flagship, had won a multi-year competition for the extraordinary sum of one billion euros each to be paid out over a 10-year period. (My January 28, 2013 posting gives the details available at the time.)
At a little more than half-way through the project period, Ed Yong, in his July 22, 2019 article for The Atlantic, offers an update (of sorts),
Ten years ago, a neuroscientist said that within a decade he could simulate a human brain. Spoiler: It didn’t happen.
On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.” …
It’s been exactly 10 years. He did not succeed.
One could argue that the nature of pioneers is to reach far and talk big, and that it’s churlish to single out any one failed prediction when science is so full of them. (Science writers joke that breakthrough medicines and technologies always seem five to 10 years away, on a rolling window.) But Markram’s claims are worth revisiting for two reasons. First, the stakes were huge: In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time). Second, the HBP’s efforts, and the intense backlash to them, exposed important divides in how neuroscientists think about the brain and how it should be studied.
Markram’s goal wasn’t to create a simplified version of the brain, but a gloriously complex facsimile, down to the constituent neurons, the electrical activity coursing along them, and even the genes turning on and off within them. From the outset, criticism of this approach was widespread, and to many other neuroscientists, its bottom-up strategy seemed implausible to the point of absurdity. The brain’s intricacies—how neurons connect and cooperate, how memories form, how decisions are made—are more unknown than known, and couldn’t possibly be deciphered in enough detail within a mere decade. It is hard enough to map and model the 302 neurons of the roundworm C. elegans, let alone the 86 billion neurons within our skulls. “People thought it was unrealistic and not even reasonable as a goal,” says the neuroscientist Grace Lindsay, who is writing a book about modeling the brain. And what was the point? The HBP wasn’t trying to address any particular research question, or test a specific hypothesis about how the brain works. The simulation seemed like an end in itself—an overengineered answer to a nonexistent question, a tool in search of a use. …
Markram seems undeterred. In a recent paper, he and his colleague Xue Fan firmly situated brain simulations within not just neuroscience as a field, but the entire arc of Western philosophy and human civilization. And in an email statement, he told me, “Political resistance (non-scientific) to the project has indeed slowed us down considerably, but it has by no means stopped us nor will it.” He noted the 140 people still working on the Blue Brain Project, a recent set of positive reviews from five external reviewers, and its “exponentially increasing” ability to “build biologically accurate models of larger and larger brain regions.”
No time frame, this time, but there’s no shortage of other people ready to make extravagant claims about the future of neuroscience. In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. …
I’m happy to see the update. As I recall, there was murmuring almost immediately about the Human Brain Project (HBP). I never got details but it seemed that people were quite actively unhappy about the disbursements. Of course, this kind of uproar is not unusual when great sums of money are involved and the Graphene Flagship also had its rocky moments.
As for Yong’s contribution, I’m glad he’s debunking some of the hype and glory associated with the current drive to colonize the human brain and other efforts (e.g. genetics) which they often claim are the ‘future of medicine’.
To be fair, Yong is focused on the brain simulation aspect of the HBP (and Markram’s efforts in the Blue Brain Project), but there are other HBP efforts as well, even if brain simulation seems to be the HBP’s main interest.
In 2013, the European Union funded the Human Brain Project, led by Markram, to the tune of $1.3 billion. Markram claimed that the project would create a simulation of the entire human brain on a supercomputer within a decade, revolutionising the treatment of Alzheimer’s disease and other brain disorders. Less than two years into it, the project was recognised to be mismanaged and its claims overblown, and Markram was asked to step down.
On 8 October 2015, the Blue Brain Project published the first digital reconstruction and simulation of the micro-circuitry of a neonatal rat somatosensory cortex.
I also looked up the Human Brain Project and, talking about their other efforts, was reminded that they have a neuromorphic computing platform, SpiNNaker (mentioned here in a January 24, 2019 posting; scroll down about 50% of the way). For anyone unfamiliar with the term, neuromorphic computing/engineering is what scientists call the effort to replicate the human brain’s ability to synthesize and process information in computing processors.
In fact, there was some discussion in 2013 that the Human Brain Project and the Graphene Flagship would have some crossover projects, e.g., trying to make computers more closely resemble human brains in terms of energy use and processing power.
The Human Brain Project’s (HBP) Silicon Brains webpage notes this about their neuromorphic computing platform,
Neuromorphic computing implements aspects of biological neural networks as analogue or digital copies on electronic circuits. The goal of this approach is twofold: Offering a tool for neuroscience to understand the dynamic processes of learning and development in the brain and applying brain inspiration to generic cognitive computing. Key advantages of neuromorphic computing compared to traditional approaches are energy efficiency, execution speed, robustness against local failures and the ability to learn.
Neuromorphic Computing in the HBP
In the HBP the neuromorphic computing Subproject carries out two major activities: Constructing two large-scale, unique neuromorphic machines and prototyping the next generation neuromorphic chips.
The large-scale neuromorphic machines are based on two complementary principles. The many-core SpiNNaker machine located in Manchester [emphasis mine] (UK) connects 1 million ARM processors with a packet-based network optimized for the exchange of neural action potentials (spikes). The BrainScaleS physical model machine located in Heidelberg (Germany) implements analogue electronic models of 4 million neurons and 1 billion synapses on 20 silicon wafers. Both machines are integrated into the HBP collaboratory and offer full software support for their configuration, operation and data analysis.
The most prominent feature of the neuromorphic machines is their execution speed. The SpiNNaker system runs in real time, while BrainScaleS is implemented as an accelerated system and operates at 10,000 times real time. Simulations on conventional supercomputers typically run factors of 1000 slower than biology and cannot access the vastly different timescales involved in learning and development, ranging from milliseconds to years.
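To put those speed factors in perspective, here is the wall-clock arithmetic for simulating one day of biological time (a rough illustration based on the quoted factors, not HBP benchmark data):

```python
def wall_clock_seconds(bio_seconds, speed_factor):
    """speed_factor > 1 means faster than biology; < 1 means slower."""
    return bio_seconds / speed_factor

one_day = 24 * 3600                                  # one day of biology
spinnaker = wall_clock_seconds(one_day, 1)           # real time: one day
brainscales = wall_clock_seconds(one_day, 10_000)    # under 9 seconds
supercomp = wall_clock_seconds(one_day, 1 / 1000)    # roughly 1000 days
```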
Recent research in neuroscience and computing has indicated that learning and development are a key aspect for neuroscience and real world applications of cognitive computing. HBP is the only project worldwide addressing this need with dedicated novel hardware architectures.
I’ve highlighted Manchester because that’s a very important city where graphene is concerned. The UK’s National Graphene Institute is housed at the University of Manchester where graphene was first isolated in 2004 by two scientists, Andre Geim and Konstantin (Kostya) Novoselov. (For their effort, they were awarded the Nobel Prize for physics in 2010.)
Getting back to the HBP (and the Graphene Flagship for that matter), the funding should be drying up sometime around 2023 and I wonder if it will be possible to assess the impact.