Tag Archives: artificial synapses

ChatGPT and a neuromorphic (brainlike) synapse

I was teaching an introductory course about nanotechnology back in 2014 and, at the end of a session, stated (more or less) that the full potential for artificial intelligence (software) wasn’t going to be realized until the hardware (memristors) was part of the package. (It’s interesting to revisit that in light of the recent uproar around AI, covered in my May 25, 2023 posting, which offered a survey of the situation.)

One of the major problems with artificial intelligence is its memory. The other is energy consumption. Both problems could be addressed by the integration of memristors into the hardware, giving rise to neuromorphic (brainlike) computing. (For those who don’t know, the human brain in addition to its capacity for memory is remarkably energy efficient.)

This is the first time I’ve seen research into memristors where software has been included. Disclaimer: There may be a lot more research of this type; I just haven’t seen it before. A March 24, 2023 news item on ScienceDaily announces research from Korea,

ChatGPT’s impact extends beyond the education sector and is causing significant changes in other areas. The AI language model is recognized for its ability to perform various tasks, including paper writing, translation, coding, and more, all through question-and-answer-based interactions. The AI system relies on deep learning, which requires extensive training to minimize errors, resulting in frequent data transfers between memory and processors. However, traditional digital computer systems’ von Neumann architecture separates the storage and computation of information, resulting in increased power consumption and significant delays in AI computations. Researchers have developed semiconductor technologies suitable for AI applications to address this challenge.

A March 24, 2023 Pohang University of Science & Technology (POSTECH) press release (also on EurekAlert), which originated the news item, provides more detail,

A research team at POSTECH, led by Professor Yoonyoung Chung (Department of Electrical Engineering, Department of Semiconductor Engineering), Professor Seyoung Kim (Department of Materials Science and Engineering, Department of Semiconductor Engineering), and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), has developed a high-performance AI semiconductor device [emphasis mine] using indium gallium zinc oxide (IGZO), an oxide semiconductor widely used in OLED [organic light-emitting diode] displays. The new device has proven to be excellent in terms of performance and power efficiency.

Efficient AI operations, such as those of ChatGPT, require computations to occur within the memory responsible for storing information. Unfortunately, previous AI semiconductor technologies were limited in meeting all the requirements, such as linear and symmetric programming and uniformity, to improve AI accuracy.

The research team chose IGZO as a key material for AI computations because it can be mass-produced and provides uniformity, durability, and computing accuracy. The compound comprises indium, gallium, zinc, and oxygen in a fixed ratio, and its excellent electron mobility and leakage-current properties have already made it a backplane material for OLED displays.

Using this material, the researchers developed a novel synapse device [emphasis mine] composed of two transistors interconnected through a storage node. The precise control of this node’s charging and discharging speed has enabled the AI semiconductor to meet the diverse performance metrics required for high-level performance. Furthermore, applying synaptic devices to a large-scale AI system requires the output current of synaptic devices to be minimized. The researchers confirmed the possibility of utilizing the ultra-thin film insulators inside the transistors to control the current, making them suitable for large-scale AI.

The researchers used the newly developed synaptic device to train and classify handwritten data, achieving a high accuracy of over 98%, [emphasis mine] which verifies its potential application in high-accuracy AI systems in the future.

Professor Chung explained, “The significance of my research team’s achievement is that we overcame the limitations of conventional AI semiconductor technologies that focused solely on material development. To do this, we utilized materials already in mass production. Furthermore, linear and symmetrical programming characteristics were obtained through a new structure using two transistors as one synaptic device. Thus, our successful development and application of this new AI semiconductor technology show great potential to improve the efficiency and accuracy of AI.”
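The “linear and symmetrical programming” Professor Chung mentions is easier to appreciate with a toy model. This is my own illustrative sketch (not the POSTECH team’s device physics, and all constants are invented): when pulse-driven conductance updates are nonlinear, equal numbers of up and down pulses no longer cancel, so stored weights drift away from their targets and training accuracy suffers.

```python
# Toy model of analog synapse programming: why linearity matters.
# A weight w (normalized conductance, 0..1) is adjusted by fixed pulses.

def pulse_train(w, n_up, n_down, step=0.02, nl=0.0):
    """Apply n_up potentiation pulses, then n_down depression pulses.
    nl = 0 gives ideal linear updates; nl > 0 makes the step size
    depend on the current weight (a common analog non-ideality)."""
    for _ in range(n_up):
        w = min(1.0, w + step * (1.0 - nl * w))          # potentiation
    for _ in range(n_down):
        w = max(0.0, w - step * (1.0 - nl * (1.0 - w)))  # depression
    return w

# Ten up-pulses followed by ten down-pulses should return the weight
# to where it started -- and with linear updates it does.
w_lin = pulse_train(0.4, 10, 10, nl=0.0)   # returns to 0.4 exactly
w_nl = pulse_train(0.4, 10, 10, nl=0.8)    # drifts away from 0.4
```

With the linear device the round trip is exact; with the nonlinear one the same pulse sequence leaves a residual error, which is the kind of inaccuracy the two-transistor structure is designed to avoid.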

This study was published last week [March 2023] on the inside back cover of Advanced Electronic Materials [paper edition] and was supported by the Next-Generation Intelligent Semiconductor Technology Development Program through the National Research Foundation, funded by the Ministry of Science and ICT [Information and Communication Technologies] of Korea.

Here’s a link to and a citation for the paper,

Highly Linear and Symmetric Analog Neuromorphic Synapse Based on Metal Oxide Semiconductor Transistors with Self-Assembled Monolayer for High-Precision Neural Network Computation by Seongmin Park, Suwon Seong, Gilsu Jeon, Wonjae Ji, Kyungmi Noh, Seyoung Kim, Yoonyoung Chung. Advanced Electronic Materials, Volume 9, Issue 3, March 2023, 2200554 DOI: https://doi.org/10.1002/aelm.202200554 First published online: 29 December 2022

This paper is open access.

There is also another approach to materials such as indium gallium zinc oxide (IGZO) for a memristor: using biological cells, as my June 6, 2023 posting, which features work on biological neural networks (BNNs), suggests in relation to creating robots that can perform brainlike computing.

A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices

A November 9, 2022 Science China Press press release on EurekAlert announces a new approach to developing neuromorphic (brainlike) devices,

Neuromorphic computing is an information processing model that simulates the efficiency of the human brain with multifunctionality and flexibility. Currently, artificial synaptic devices represented by memristors have been extensively used in neural morphological computing, and different types of neural networks have been developed. However, it is time-consuming and laborious to perform fixing and redeploying of weights stored by traditional artificial synaptic devices. Moreover, synaptic strength is primarily reconstructed via software programming and changing the pulse time, which can result in low efficiency and high energy consumption in neural morphology computing applications.

In a novel research article published in the Beijing-based National Science Review, Prof. Lili Wang from the Chinese Academy of Sciences and her colleagues present a novel hardware neural network based on a tunable flexible MXene energy storage (FMES) system. The system comprises flexible postsynaptic electrodes and MXene nanosheets, which are connected with the presynaptic electrodes using electrolytes. The potential changes in the ion migration process and adsorption in the supercapacitor can simulate information transmission in the synaptic gap. Additionally, the voltage of the FMES system represents the synaptic weight of the connection between two neurons.

Researchers explored the changes of paired-pulse facilitation under different resistance levels to investigate the effect of resistance on the advanced learning and memory behavior of the artificial synaptic system of FMES. The results revealed that the larger the standard deviation, the stronger the memory capacity of the system. In other words, with the continuous improvement of electrical resistance and stimulation time, the memory capacity of the artificial synaptic system of FMES is gradually improved. Therefore, the system can effectively control the accumulation and dissipation of ions by regulating the resistance value in the system without changing the external stimulus, which is expected to realize the coupling of sensing signals and storage weight.
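Paired-pulse facilitation (PPF) is the short-term plasticity effect the researchers probed: a second stimulus arriving shortly after the first evokes a stronger response, and the boost fades as the inter-pulse interval grows. A generic textbook-style model captures the idea; the constants here are my own illustrative choices, not the FMES paper’s fitted values.

```python
import math

def ppf_index(interval_ms, a=0.6, tau_ms=50.0):
    """Paired-pulse facilitation index: response to the 2nd pulse
    divided by the response to the 1st. Residual facilitation left by
    the first pulse decays exponentially with the inter-pulse interval.
    (a = facilitation magnitude, tau_ms = decay time constant; both
    illustrative, not measured device parameters.)"""
    return 1.0 + a * math.exp(-interval_ms / tau_ms)

short = ppf_index(10)    # pulses close together: strong facilitation
long_ = ppf_index(500)   # pulses far apart: facilitation has decayed
```

Plotting `ppf_index` against the interval reproduces the characteristic decaying PPF curve that synaptic devices are checked against.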

The FMES system can be used to develop neural networks and realize various neural morphological computing tasks, making the recognition accuracy of handwritten digit sets reach 95%. Additionally, the FMES system can simulate the adaptivity of the human brain to achieve adaptive recognition of similar target data sets. Following the training process, the adaptive recognition accuracy can reach approximately 80%, and avoid the time and energy loss caused by recalculation.

“In the future, based on this research, different types of sensors can be integrated on the chip to further realize a multimodal sensing computing integrated architecture,” Prof. Lili Wang stated. “The device can perform low-energy calculations, and is expected to solve the problems of high write noise, nonlinear difference, and diffusion under zero bias voltage in certain neural morphological systems.”

Here’s a link to and a citation for the paper,

Neuromorphic-computing-based adaptive learning using ion dynamics in flexible energy storage devices by Shufang Zhao, Wenhao Ran, Zheng Lou, Linlin Li, Swapnadeep Poddar, Lili Wang, Zhiyong Fan, Guozhen Shen. National Science Review, Volume 9, Issue 11, November 2022, nwac158, DOI: https://doi.org/10.1093/nsr/nwac158 Published: 13 August 2022

This paper is open access.

The future of (or roadmap for) Chinese research on neuromorphic engineering

While I was trying (unsuccessfully) to find a copy of the press release on the issuing agency’s website, I found this paper,

2022 roadmap on neuromorphic devices & applications research in China by Qing Wan, Changjin Wan, Huaqiang Wu, Yuchao Yang, Xiaohe Huang, Peng Zhou, Lin Chen, Tian-Yu Wang, Yi Li, Kanhao Xue, Yuhui He, Xiangshui Miao, Xi Li, Chenchen Xie, Houpeng Chen, Z. T. Song, Hong Wang, Yue Hao, Junyao Zhang, Jia Huang, Zheng Yu Ren, Li Qiang Zhu, Jianyu Du, Chen Ge, Yang Liu, Guanglong Ding, Ye Zhou, Su-Ting Han, Guosheng Wang, Xiao Yu, Bing Chen, Zhufei Chu, Lunyao Wang, Yinshui Xia, Chen Mu, Feng Lin, Chixiao Chen, Bojun Cheng, Yannan Xing, Weitao Zeng, Hong Chen, Lei Yu, Giacomo Indiveri and Ning Qiao. Neuromorphic Computing and Engineering DOI: 10.1088/2634-4386/ac7a5a *Accepted Manuscript online 20 June 2022 • © 2022 The Author(s). Published by IOP Publishing Ltd

The paper is open access.

*From the IOP’s Definitions of article versions: Accepted Manuscript is ‘the version of the article accepted for publication including all changes made as a result of the peer review process, and which may also include the addition to the article by IOP of a header, an article ID, a cover sheet and/or an ‘Accepted Manuscript’ watermark, but excluding any other editing, typesetting or other changes made by IOP and/or its licensors’.*

This is neither the published version nor the version of record.

Synaptic transistors for brainlike computers based on (more environmentally friendly) graphene

An August 9, 2022 news item on ScienceDaily describes research investigating materials other than silicon for neuromorphic (brainlike) computing purposes,

Computers that think more like human brains are inching closer to mainstream adoption. But many unanswered questions remain. Among the most pressing: what types of materials can serve as the best building blocks to unlock the potential of this new style of computing?

For most traditional computing devices, silicon remains the gold standard. However, there is a movement to use more flexible, efficient and environmentally friendly materials for these brain-like devices.

In a new paper, researchers from The University of Texas at Austin developed synaptic transistors for brain-like computers using the thin, flexible material graphene. These transistors are similar to synapses in the brain, which connect neurons to each other.

An August 8, 2022 University of Texas at Austin news release (also on EurekAlert but published August 9, 2022), which originated the news item, provides more detail about the research,

“Computers that think like brains can do so much more than today’s devices,” said Jean Anne Incorvia, an assistant professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the lead author on the paper published today in Nature Communications. “And by mimicking synapses, we can teach these devices to learn on the fly, without requiring huge training methods that take up so much power.”

The Research: A combination of graphene and nafion, a polymer membrane material, make up the backbone of the synaptic transistor. Together, these materials demonstrate key synaptic-like behaviors — most importantly, the ability for the pathways to strengthen over time as they are used more often, a type of neural muscle memory. In computing, this means that devices will be able to get better at tasks like recognizing and interpreting images over time and do it faster.

Another important finding is that these transistors are biocompatible, which means they can interact with living cells and tissue. That is key for potential applications in medical devices that come into contact with the human body. Most materials used for these early brain-like devices are toxic, so they would not be able to contact living cells in any way.

Why It Matters: With new high-tech concepts like self-driving cars, drones and robots, we are reaching the limits of what silicon chips can efficiently do in terms of data processing and storage. For these next-generation technologies, a new computing paradigm is needed. Neuromorphic devices mimic processing capabilities of the brain, a powerful computer for immersive tasks.

“Biocompatibility, flexibility, and softness of our artificial synapses is essential,” said Dmitry Kireev, a post-doctoral researcher who co-led the project. “In the future, we envision their direct integration with the human brain, paving the way for futuristic brain prosthesis.”

Will It Really Happen: Neuromorphic platforms are starting to become more common. Leading chipmakers such as Intel and Samsung have either produced neuromorphic chips already or are in the process of developing them. However, current chip materials place limitations on what neuromorphic devices can do, so academic researchers are working hard to find the perfect materials for soft brain-like computers.

“It’s still a big open space when it comes to materials; it hasn’t been narrowed down to the next big solution to try,” Incorvia said. “And it might not be narrowed down to just one solution, with different materials making more sense for different applications.”

The Team: The research was led by Incorvia and Deji Akinwande, professor in the Department of Electrical and Computer Engineering. The two have collaborated many times in the past, and Akinwande is a leading expert in graphene, using it in multiple research breakthroughs, most recently as part of a wearable electronic tattoo for blood pressure monitoring.

The idea for the project was conceived by Samuel Liu, a Ph.D. student and first author on the paper, in a class taught by Akinwande. Kireev then suggested the specific project. Harrison Jin, an undergraduate electrical and computer engineering student, measured the devices and analyzed data.

The team collaborated with T. Patrick Xiao and Christopher Bennett of Sandia National Laboratories, who ran neural network simulations and analyzed the resulting data.

Here’s a link to and a citation for the ‘graphene transistor’ paper,

Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing by Dmitry Kireev, Samuel Liu, Harrison Jin, T. Patrick Xiao, Christopher H. Bennett, Deji Akinwande & Jean Anne C. Incorvia. Nature Communications volume 13, Article number: 4386 (2022) DOI: https://doi.org/10.1038/s41467-022-32078-6 Published: 28 July 2022

This paper is open access.

Guide for memristive hardware design

An August 15, 2022 news item on ScienceDaily announces a type of guide for memristive hardware design,

They are many times faster than flash memory and require significantly less energy: memristive memory cells could revolutionize the energy efficiency of neuromorphic [brainlike] computers. In these computers, which are modeled on the way the human brain works, memristive cells function like artificial synapses. Numerous groups around the world are working on the use of corresponding neuromorphic circuits — but often with a lack of understanding of how they work and with faulty models. Jülich researchers have now summarized the physical principles and models in a comprehensive review article in the renowned journal Advances in Physics.

An August 15, 2022 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, describes two papers designed to help researchers better understand and design memristive hardware,

Certain tasks – such as recognizing patterns and language – are performed highly efficiently by a human brain, requiring only about one ten-thousandth of the energy of a conventional, so-called “von Neumann” computer. One of the reasons lies in the structural differences: In a von Neumann architecture, there is a clear separation between memory and processor, which requires constant moving of large amounts of data. This is time and energy consuming – the so-called von Neumann bottleneck. In the brain, the computational operation takes place directly in the data memory and the biological synapses perform the tasks of memory and processor at the same time.

In Jülich, scientists have been working for more than 15 years on special data storage devices and components that can have similar properties to the synapses in the human brain. So-called memristive memory devices, also known as memristors, are considered to be extremely fast, energy-saving and can be miniaturized very well down to the nanometer range. The functioning of memristive cells is based on a very special effect: Their electrical resistance is not constant, but can be changed and reset again by applying an external voltage, theoretically continuously. The change in resistance is controlled by the movement of oxygen ions. If these move out of the semiconducting metal oxide layer, the material becomes more conductive and the electrical resistance drops. This change in resistance can be used to store information.
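The resistance switching described above can be caricatured with the classic linear ion-drift memristor model (in the spirit of the original HP Labs formulation). This is my own sketch with invented constants, not the Jülich group’s measured device parameters: applied voltage drives a current that moves the ion-rich region boundary, which in turn changes the device resistance.

```python
# Linear ion-drift memristor sketch. The state x (0..1) is the fraction
# of the oxide occupied by the conductive (oxygen-deficient) region.
R_ON, R_OFF = 100.0, 16000.0  # fully conductive / insulating resistance (ohms)
K = 1e4                       # lumped drift constant (mobility * R_ON / D^2),
                              # an illustrative value, not a fitted one

def resistance(x):
    """Series combination of the doped and undoped regions."""
    return R_ON * x + R_OFF * (1.0 - x)

def step(x, v, dt=1e-3):
    """Advance state x under applied voltage v for dt seconds."""
    i = v / resistance(x)            # current through the cell
    x += K * i * dt                  # ion drift moves the boundary
    return min(1.0, max(0.0, x))     # boundary cannot leave the oxide

x = 0.1
r0 = resistance(x)
for _ in range(1000):
    x = step(x, 1.0)    # positive bias: ions drift in, resistance drops (SET)
r_set = resistance(x)
for _ in range(1000):
    x = step(x, -1.0)   # negative bias: ions drift back, resistance rises (RESET)
r_reset = resistance(x)
```

The model is deliberately idealized; the review article’s point is precisely that real valence-change cells add thermal feedback and abrupt jumps that such simple pictures miss.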

The processes that can occur in cells are very complex and vary depending on the material system. Three researchers from the Jülich Peter Grünberg Institute – Prof. Regina Dittmann, Dr. Stephan Menzel, and Prof. Rainer Waser – have therefore compiled their research results in a detailed review article, “Nanoionic memristive phenomena in metal oxides: the valence change mechanism”. They explain in detail the various physical and chemical effects in memristors and shed light on the influence of these effects on the switching properties of memristive cells and their reliability.

“If you look at current research activities in the field of neuromorphic memristor circuits, they are often based on empirical approaches to material optimization,” said Rainer Waser, director at the Peter Grünberg Institute. “Our goal with our review article is to give researchers something to work with in order to enable insight-driven material optimization.” The team of authors worked on the approximately 200-page article for ten years and naturally had to keep incorporating advances in knowledge.

“The analogous functioning of memristive cells required for their use as artificial synapses is not the normal case. Usually, there are sudden jumps in resistance, generated by the mutual amplification of ionic motion and Joule heat,” explains Regina Dittmann of the Peter Grünberg Institute. “In our review article, we provide researchers with the necessary understanding of how to change the dynamics of the cells to enable an analog operating mode.”

“You see time and again that groups simulate their memristor circuits with models that don’t take into account high dynamics of the cells at all. These circuits will never work.” said Stephan Menzel, who leads modeling activities at the Peter Grünberg Institute and has developed powerful compact models that are now in the public domain (www.emrl.de/jart.html). “In our review article, we provide the basics that are extremely helpful for a correct use of our compact models.”

Roadmap neuromorphic computing

The “Roadmap of Neuromorphic Computing and Engineering”, which was published in May 2022, shows how neuromorphic computing can help to reduce the enormous energy consumption of IT globally. In it, researchers from the Peter Grünberg Institute (PGI-7), together with leading experts in the field, have compiled the various technological possibilities, computational approaches, learning algorithms and fields of application. 

According to the study, applications in the field of artificial intelligence, such as pattern recognition or speech recognition, are likely to benefit in a special way from the use of neuromorphic hardware. This is because they are based – much more so than classical numerical computing operations – on the shifting of large amounts of data. Memristive cells make it possible to process these gigantic data sets directly in memory without transporting them back and forth between processor and memory. This could reduce the energy consumption of artificial neural networks by orders of magnitude.

Memristive cells can also be interconnected to form high-density matrices that enable neural networks to learn locally. This so-called edge computing thus shifts computations from the data center to the factory floor, the vehicle, or the home of people in need of care. Thus, monitoring and controlling processes or initiating rescue measures can be done without sending data via a cloud. “This achieves two things at the same time: you save energy, and at the same time, personal data and data relevant to security remain on site,” says Prof. Dittmann, who played a key role in creating the roadmap as editor.
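The “processing directly in memory” idea in the two paragraphs above boils down to Ohm’s and Kirchhoff’s laws: inputs are applied as row voltages, weights are stored as cell conductances, and each column current is a ready-made weighted sum. Here is a pure-Python sketch of the arithmetic a crossbar performs in a single analog step (the values are arbitrary illustrations):

```python
def crossbar_mvm(voltages, conductances):
    """Matrix-vector multiply as a memristive crossbar computes it:
    column current I_j = sum_i V_i * G_ij (Kirchhoff's current law,
    with each cell contributing I = V * G by Ohm's law)."""
    currents = [0.0] * len(conductances[0])
    for v, row in zip(voltages, conductances):
        for j, g in enumerate(row):
            currents[j] += v * g
    return currents

G = [[1e-3, 2e-3],
     [3e-3, 4e-3]]      # cell conductances in siemens: the stored "weights"
V = [0.5, 1.0]          # row voltages: the input vector
I = crossbar_mvm(V, G)  # column currents: the weighted sums
```

The loop above is what takes a digital processor many memory fetches; the physical array produces the same column currents in one step, which is where the energy saving comes from.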

Here’s a link to and a citation for the ‘roadmap’,

2022 roadmap on neuromorphic computing and engineering by Dennis V Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano, Carlo Ricciardi, Shi-Jun Liang, Feng Miao, Mario Lanza, Tyler J Quill, Scott T Keene, Alberto Salleo, Julie Grollier, Danijela Marković, Alice Mizrahi, Peng Yao, J Joshua Yang, Giacomo Indiveri, John Paul Strachan, Suman Datta, Elisa Vianello, Alexandre Valentian, Johannes Feldmann, Xuan Li, Wolfram H P Pernice, Harish Bhaskaran, Steve Furber, Emre Neftci, Franz Scherr, Wolfgang Maass, Srikanth Ramaswamy, Jonathan Tapson, Priyadarshini Panda, Youngeun Kim, Gouhei Tanaka, Simon Thorpe, Chiara Bartolozzi, Thomas A Cleland, Christoph Posch, Shih-Chii Liu, Gabriella Panuccio, Mufti Mahmud, Arnab Neelim Mazumder, Morteza Hosseini, Tinoosh Mohsenin, Elisa Donati, Silvia Tolu, Roberto Galeazzi, Martin Ejsing Christensen, Sune Holm, Daniele Ielmini and N Pryds. Neuromorphic Computing and Engineering, Volume 2, Number 2 DOI: 10.1088/2634-4386/ac4a83 20 May 2022 • © 2022 The Author(s)

This paper is open access.

Here’s the most recent paper,

Nanoionic memristive phenomena in metal oxides: the valence change mechanism by Regina Dittmann, Stephan Menzel & Rainer Waser. Advances in Physics, Volume 70, 2021, Issue 2, Pages 155-349 DOI: https://doi.org/10.1080/00018732.2022.2084006 Published online: 06 Aug 2022

This paper is behind a paywall.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020) DOI: https://doi.org/10.1038/s41928-020-0412-1 Published online: 25 May 2020 Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Activities in the field of artificial intelligence in particular, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
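Cuniberti’s description of the storage effect (slow ions lagging behind the stimulus) can be sketched as a leaky accumulator. The constants below are purely my own illustrative picks, not the sol-gel device’s measured behavior: frequent pulses pile up faster than the ionic state can relax, so the “channel” opens sooner, while sparse pulses decay away before anything accumulates.

```python
import math

def stimulate(n_pulses, interval, kick=0.2, tau=10.0, threshold=0.5):
    """Each pulse displaces slow ions by `kick`; between pulses the
    state relaxes exponentially with time constant `tau` (hysteresis).
    Returns the pulse count at which the accumulated state first
    crosses `threshold` (the channel 'opens'), or None if it never does.
    All parameters are illustrative, not fitted device values."""
    s, t_open = 0.0, None
    for k in range(n_pulses):
        s = s * math.exp(-interval / tau) + kick  # relax, then pulse
        if t_open is None and s >= threshold:
            t_open = k + 1
    return t_open

fast = stimulate(20, interval=1.0)    # frequent excitation: opens quickly
slow = stimulate(20, interval=50.0)   # sparse excitation: state decays away
```

This is the “the more an individual transistor is excited, the sooner it will open” behavior in miniature: the same stimulus strength, delivered more often, strengthens the connection.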

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

Neuromorphic computing with voltage usage comparable to human brains

Part of neuromorphic computing’s appeal is the promise of using less energy because, as it turns out, the human brain uses small amounts of energy very efficiently. A team of researchers at the University of Massachusetts at Amherst has developed a device that functions in the same voltage range as the human brain. From an April 20, 2020 news item on ScienceDaily,

Only 10 years ago, scientists working on what they hoped would open a new frontier of neuromorphic computing could only dream of a device using miniature tools called memristors that would function like real brain synapses.

But now a team at the University of Massachusetts Amherst has discovered, while on their way to better understanding protein nanowires, how to use these biological, electricity-conducting filaments to make a neuromorphic memristor, or “memory transistor,” device. It runs extremely efficiently on very low power, as brains do, to carry signals between neurons. Details are in Nature Communications.

An April 20, 2020 University of Massachusetts at Amherst news release (also on EurekAlert), which originated the news item, dives into detail about how these researchers were able to achieve bio-voltages,

As first author Tianda Fu, a Ph.D. candidate in electrical and computer engineering, explains, one of the biggest hurdles to neuromorphic computing, and one that made it seem unreachable, is that most conventional computers operate at over 1 volt, while the brain sends signals called action potentials between neurons at around 80 millivolts – many times lower. Today, a decade after early experiments, memristor voltage has been achieved in a range similar to that of conventional computers, but getting below that seemed improbable, he adds.

Fu reports that using protein nanowires developed at UMass Amherst from the bacterium Geobacter by microbiologist and co-author Derek Lovley, he has now conducted experiments where memristors have reached neurological voltages. Those tests were carried out in the lab of electrical and computer engineering researcher and co-author Jun Yao.

Yao says, “This is the first time that a device can function at the same voltage level as the brain. People probably didn’t even dare to hope that we could create a device that is as power-efficient as the biological counterparts in a brain, but now we have realistic evidence of ultra-low power computing capabilities. It’s a concept breakthrough and we think it’s going to cause a lot of exploration in electronics that work in the biological voltage regime.”

Lovley points out that Geobacter’s electrically conductive protein nanowires offer many advantages over expensive silicon nanowires, which require toxic chemicals and high-energy processes to produce. Protein nanowires also are more stable in water or bodily fluids, an important feature for biomedical applications. For this work, the researchers shear nanowires off the bacteria so only the conductive protein is used, he adds.

Fu says that he and Yao had set out to put the purified nanowires through their paces, to see what they are capable of at different voltages, for example. They experimented with a pulsing on-off pattern of positive-negative charge sent through a tiny metal thread in a memristor, which creates an electrical switch.

They used a metal thread because protein nanowires facilitate metal reduction, changing metal ion reactivity and electron transfer properties. Lovley says this microbial ability is not surprising, because wild bacterial nanowires breathe and chemically reduce metals to get their energy the way we breathe oxygen.

As the on-off pulses create changes in the metal filaments, new branching and connections are created in the tiny device, which is 100 times smaller than the diameter of a human hair, Yao explains. It creates an effect similar to learning – new connections – in a real brain. He adds, “You can modulate the conductivity, or the plasticity of the nanowire-memristor synapse so it can emulate biological components for brain-inspired computing. Compared to a conventional computer, this device has a learning capability that is not software-based.”
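The behaviour Yao describes – positive and negative pulses growing and shrinking conductive filaments so the device’s conductivity can be modulated like synaptic plasticity – can be sketched with a simple toy model. To be clear, the class name, parameters, and update rule below are illustrative assumptions, not the UMass team’s device physics:

```python
# Toy model of a filamentary memristive synapse: conductance (the "synaptic
# weight") strengthens with positive voltage pulses and weakens with negative
# ones, loosely mimicking filament growth and dissolution.
# Illustrative sketch only -- not the protein-nanowire device model.

class MemristorSynapse:
    def __init__(self, g_min=0.01, g_max=1.0, rate=0.1):
        self.g = g_min          # conductance state (arbitrary units)
        self.g_min, self.g_max = g_min, g_max
        self.rate = rate        # fraction of remaining range changed per pulse

    def pulse(self, polarity):
        """Apply one voltage pulse; +1 grows the filament, -1 shrinks it."""
        if polarity > 0:
            self.g += self.rate * (self.g_max - self.g)   # potentiation
        else:
            self.g -= self.rate * (self.g - self.g_min)   # depression
        return self.g

syn = MemristorSynapse()
after_potentiation = [syn.pulse(+1) for _ in range(20)][-1]  # repeated "learning"
after_depression = [syn.pulse(-1) for _ in range(20)][-1]    # reversed polarity
```

Because each pulse moves the conductance by a fraction of the remaining range, the state saturates gracefully at both ends, which is one common way analog synaptic weights are modelled.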

Fu recalls, “In the first experiments we did, the nanowire performance was not satisfying, but it was enough for us to keep going.” Over two years, he saw improvement until one fateful day when his and Yao’s eyes were riveted by voltage measurements appearing on a computer screen.

“I remember the day we saw this great performance. We watched the computer as the current-voltage sweep was being measured. It kept going down and down and we said to each other, ‘Wow, it’s working.’ It was very surprising and very encouraging.”

Fu, Yao, Lovley and colleagues plan to follow up this discovery with more research on mechanisms, and to “fully explore the chemistry, biology and electronics” of protein nanowires in memristors, Fu says, plus possible applications, which might include a device to monitor heart rate, for example. Yao adds, “This offers hope in the feasibility that one day this device can talk to actual neurons in biological systems.”

That last comment has me wondering about why you would want to have your device talk to actual neurons. For neuroprosthetics perhaps?

Here’s a link to and a citation for the paper,

Bioinspired bio-voltage memristors by Tianda Fu, Xiaomeng Liu, Hongyan Gao, Joy E. Ward, Xiaorong Liu, Bing Yin, Zhongrui Wang, Ye Zhuo, David J. F. Walker, J. Joshua Yang, Jianhan Chen, Derek R. Lovley & Jun Yao. Nature Communications volume 11, Article number: 1861 (2020) DOI: https://doi.org/10.1038/s41467-020-15759-y Published: 20 April 2020

This paper is open access.

There is an illustration of the work,

Caption: A graphic depiction of protein nanowires (green) harvested from microbe Geobacter (orange) facilitate the electronic memristor device (silver) to function with biological voltages, emulating the neuronal components (blue junctions) in a brain. Credit: UMass Amherst/Yao lab

New design directions to increase variety, efficiency, selectivity and reliability for memristive devices

A May 11, 2020 news item on ScienceDaily provides a description of the current ‘memristor scene’ along with an announcement about a piece of recent research,

Scientists around the world are working intensively on memristive devices, which are capable of extremely low-power operation and behave similarly to neurons in the brain. Researchers from the Jülich Aachen Research Alliance (JARA) and the German technology group Heraeus have now discovered how to systematically control the functional behaviour of these elements. The smallest differences in material composition turn out to be crucial: differences so small that until now experts had failed to notice them. The researchers’ design directions could help to increase variety, efficiency, selectivity and reliability for memristive technology-based applications, for example for energy-efficient, non-volatile storage devices or neuro-inspired computers.

Memristors are considered a highly promising alternative to conventional nanoelectronic elements in computer chips. Because of their advantageous functionalities, their development is being eagerly pursued by many companies and research institutions around the world. The Japanese corporation NEC already installed the first prototypes in space satellites back in 2017. Many other leading companies such as Hewlett Packard, Intel, IBM, and Samsung are working to bring innovative types of computer and storage devices based on memristive elements to market.

Fundamentally, memristors are simply “resistors with memory,” in which high resistance can be switched to low resistance and back again. This means in principle that the devices are adaptive, similar to a synapse in a biological nervous system. “Memristive elements are considered ideal candidates for neuro-inspired computers modelled on the brain, which are attracting a great deal of interest in connection with deep learning and artificial intelligence,” says Dr. Ilia Valov of the Peter Grünberg Institute (PGI-7) at Forschungszentrum Jülich.

In the latest issue of the open access journal Science Advances, he and his team describe how the switching and neuromorphic behaviour of memristive elements can be selectively controlled. According to their findings, the crucial factor is the purity of the switching oxide layer. “Depending on whether you use a material that is 99.999999 % pure, and whether you introduce one foreign atom into ten million atoms of pure material or into one hundred atoms, the properties of the memristive elements vary substantially,” says Valov.

A May 11, 2020 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, delves into the theme of increasing control over memristive systems,

This effect had so far been overlooked by experts. It can be used very specifically for designing memristive systems, in a similar way to doping semiconductors in information technology. “The introduction of foreign atoms allows us to control the solubility and transport properties of the thin oxide layers,” explains Dr. Christian Neumann of the technology group Heraeus. He has been contributing his materials expertise to the project ever since the initial idea was conceived in 2015.

“In recent years there has been remarkable progress in the development and use of memristive devices, however that progress has often been achieved on a purely empirical basis,” according to Valov. Using the insights that his team has gained, manufacturers could now methodically develop memristive elements selecting the functions they need. The higher the doping concentration, the slower the resistance of the elements changes as the number of incoming voltage pulses increases and decreases, and the more stable the resistance remains. “This means that we have found a way for designing types of artificial synapses with differing excitability,” explains Valov.

Design specification for artificial synapses

The brain’s ability to learn and retain information can largely be attributed to the fact that the connections between neurons are strengthened when they are frequently used. Memristive devices, of which there are different types such as electrochemical metallization cells (ECMs) or valence change memory cells (VCMs), behave similarly. When these components are used, the conductivity increases as the number of incoming voltage pulses increases. The changes can also be reversed by applying voltage pulses of the opposite polarity.
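The pulse-by-pulse behaviour just described, combined with the doping effect from the earlier excerpt (higher doping concentration, slower resistance change), can be sketched as a toy model. The 1/(1 + doping) rate scaling and all numbers below are my assumptions for illustration, not the JARA/Heraeus device physics:

```python
# Sketch of the doping effect described above: the conductance of an
# ECM-like cell rises with each incoming voltage pulse, but a higher doping
# level slows the change (slower filament growth, more gradual "synapse").
# Hypothetical toy model; the rate scaling with doping is an assumption.

def conductance_trace(n_pulses, doping_ppm, g0=0.05, g_max=1.0, k=0.5):
    """Return the conductance after each pulse; update rate falls with doping."""
    rate = k / (1.0 + doping_ppm / 1000.0)   # assumed inverse scaling with doping
    g, trace = g0, []
    for _ in range(n_pulses):
        g += rate * (g_max - g)              # each pulse strengthens the cell
        trace.append(g)
    return trace

pure = conductance_trace(10, doping_ppm=0)        # high-purity ("8N"-like): fast
doped = conductance_trace(10, doping_ppm=10_000)  # heavily doped: slow, gradual
```

The gradual curve of the doped cell is what the press release means by “artificial synapses with differing excitability”: a designer can pick the doping level to set how quickly the synapse strengthens.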

The JARA researchers conducted their systematic experiments on ECMs, which consist of a copper electrode, a platinum electrode, and a layer of silicon dioxide between them. Thanks to the cooperation with Heraeus researchers, the JARA scientists had access to different types of silicon dioxide: one with a purity of 99.999999 % – also called 8N silicon dioxide – and others containing 100 to 10,000 ppm (parts per million) of foreign atoms. The precisely doped glass used in their experiments was specially developed and manufactured by quartz glass specialist Heraeus Conamic, which also holds the patent for the procedure. Copper and protons acted as mobile doping agents, while aluminium and gallium were used as non-volatile doping.

Caption: Synapses, the connections between neurons, have the ability to transmit signals with varying degrees of strength when they are excited by a quick succession of electrical impulses. One effect of this repeated activity is to increase the concentration of calcium ions, with the result that more neurotransmitters are emitted. Depending on the activity, other effects cause long-term structural changes, which impact the strength of the transmission for several hours, or potentially even for the rest of the person’s life. Memristive elements allow the strength of the electrical transmission to be changed in a similar way to synaptic connections, by applying a voltage. In electrochemical metallization cells (ECMs), a metallic filament develops between the two metal electrodes, thus increasing conductivity. Applying voltage pulses with reversed polarity causes the filament to shrink again until the cell reaches its initial high resistance state. Copyright: Forschungszentrum Jülich / Tobias Schlößer

Record switching time confirms theory

Based on their series of experiments, the researchers were able to show that the ECMs’ switching times change as the number of doping atoms changes. If the switching layer is made of 8N silicon dioxide, the memristive component switches in only 1.4 nanoseconds. To date, the fastest value ever measured for ECMs had been around 10 nanoseconds. By doping the oxide layer of the components with up to 10,000 ppm of foreign atoms, the switching time was prolonged into the range of milliseconds. “We can also theoretically explain our results. This is helping us to understand the physico-chemical processes on the nanoscale and apply this knowledge in practice,” says Valov. Based on generally applicable theoretical considerations, supported by experimental results, some also documented in the literature, he is convinced that the doping/impurity effect occurs and can be employed in all types of memristive elements.

Caption: Top: In memristive elements (ECMs) with an undoped, high-purity switching layer of silicon dioxide (SiO2), copper ions can move very fast. A filament of copper atoms correspondingly forms quickly on the platinum electrode, increasing the conductivity, and hence the capacity, of the device. Due to the high mobility of the ions, however, this filament is unstable at low forming voltages. Center: Gallium ions (Ga3+), which are introduced into the cell (non-volatile doping), bind copper ions (Cu2+) in the switching layer. The movement of the ions slows down, leading to longer switching times, but the filament, once formed, remains stable for longer. Bottom: Doping with aluminium ions (Al3+) slows down the process even more, since aluminium ions bind copper ions even more strongly than gallium ions do. Filament growth is even slower, while at the same time the stability of the filament is further increased. Depending on the chemical properties of the doping elements introduced, memristive cells – the artificial synapses – can be created with tailor-made switching and neuromorphic properties. Copyright: Forschungszentrum Jülich / Tobias Schloesser

Here’s a link to and a citation for the paper,

Design of defect-chemical properties and device performance in memristive systems by M. Lübben, F. Cüppers, J. Mohr, M. von Witzleben, U. Breuer, R. Waser, C. Neumann, and I. Valov. Science Advances 08 May 2020: Vol. 6, no. 19, eaaz9079 DOI: 10.1126/sciadv.aaz9079

This paper is open access.

For anyone curious about the German technology group, Heraeus, there’s a fascinating history in its Wikipedia entry. The technology company was formally founded in 1851 but it can be traced back to the 17th century and the founding family’s apothecary.

Connecting biological and artificial neurons (in UK, Switzerland, & Italy) over the web

Caption: The virtual lab connecting Southampton, Zurich and Padova. Credit: University of Southampton

A February 26, 2020 University of Southampton press release (also on EurekAlert) describes this work,

Research on novel nanoelectronics devices led by the University of Southampton enabled brain neurons and artificial neurons to communicate with each other. This study has for the first time shown how three key emerging technologies can work together: brain-computer interfaces, artificial neural networks and advanced memory technologies (also known as memristors). The discovery opens the door to further significant developments in neural and artificial intelligence research.

Brain functions are made possible by circuits of spiking neurons, connected together by microscopic, but highly complex links called ‘synapses’. In this new study, published in the journal Scientific Reports, the scientists created a hybrid neural network where biological and artificial neurons in different parts of the world were able to communicate with each other over the internet through a hub of artificial synapses made using cutting-edge nanotechnology. This is the first time the three components have come together in a unified network.

During the study, researchers based at the University of Padova in Italy cultivated rat neurons in their laboratory, whilst partners from the University of Zurich and ETH Zurich created artificial neurons on silicon microchips. The virtual laboratory was brought together via an elaborate setup controlling nanoelectronic synapses developed at the University of Southampton. These synaptic devices are known as memristors.

The Southampton-based researchers captured spiking events being sent over the internet from the biological neurons in Italy and then distributed them to the memristive synapses. Responses were then sent onward to the artificial neurons in Zurich, also in the form of spiking activity. The process simultaneously works in reverse too, from Zurich to Padova. Thus, artificial and biological neurons were able to communicate bidirectionally and in real time.

Themis Prodromakis, Professor of Nanotechnology and Director of the Centre for Electronics Frontiers at the University of Southampton said “One of the biggest challenges in conducting research of this kind and at this level has been integrating such distinct cutting edge technologies and specialist expertise that are not typically found under one roof. By creating a virtual lab we have been able to achieve this.”

The researchers now anticipate that their approach will ignite interest from a range of scientific disciplines and accelerate the pace of innovation and scientific advancement in the field of neural interfaces research. In particular, the ability to seamlessly connect disparate technologies across the globe is a step towards the democratisation of these technologies, removing a significant barrier to collaboration.

Professor Prodromakis added “We are very excited with this new development. On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI [artificial intelligence] chips.”

I’m fascinated by this work and after taking a look at the paper, I have to say, the paper is surprisingly accessible. In other words, I think I get the general picture. For example (from the Introduction to the paper; citation and link follow further down),

… To emulate plasticity, the memristor MR1 is operated as a two-terminal device through a control system that receives pre- and post-synaptic depolarisations from one silicon neuron (ANpre) and one biological neuron (BN), respectively. …

If I understand this properly, they’ve integrated a biological neuron and an artificial neuron in a single system across three countries.
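The pre/post pairing in the excerpt – a memristor updated from one pre-synaptic (silicon) and one post-synaptic (biological) depolarisation – is in the spirit of spike-timing-dependent plasticity (STDP). Here is a minimal STDP-style sketch, with the weight standing in for the memristor’s conductance; the paper’s actual control system is more involved, and the function, constants, and rule below are my illustrative assumptions:

```python
# Minimal spike-timing sketch of pre/post pairing: a synaptic weight (standing
# in for the memristor's conductance) is potentiated when the pre-synaptic
# spike precedes the post-synaptic one, and depressed otherwise.
# Illustrative STDP-style rule, not the paper's control scheme.

import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Update weight w given one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post ("causal"): strengthen
        w += a_plus * math.exp(-dt / tau)
    else:         # post before pre ("anti-causal"): weaken
        w -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, w))   # clamp to physical conductance bounds

w = 0.5
w_pot = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pairing
w_dep = stdp_update(w, t_pre=15.0, t_post=10.0)   # anti-causal pairing
```

The exponential factor makes closely timed spike pairs change the weight more than widely separated ones, which is the usual STDP shape.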

For those who care to venture forth, here’s a link and a citation for the paper,

Memristive synapses connect brain and silicon spiking neurons by Alexantrou Serb, Andrea Corna, Richard George, Ali Khiat, Federico Rocchi, Marco Reato, Marta Maschietto, Christian Mayr, Giacomo Indiveri, Stefano Vassanelli & Themistoklis Prodromakis. Scientific Reports volume 10, Article number: 2590 (2020) DOI: https://doi.org/10.1038/s41598-020-58831-9 Published 25 February 2020

The paper is open access.

A lipid-based memcapacitor for neuromorphic computing

Caption: Researchers at ORNL’s Center for Nanophase Materials Sciences demonstrated the first example of capacitance in a lipid-based biomimetic membrane, opening nondigital routes to advanced, brain-like computation. Credit: Michelle Lehman/Oak Ridge National Laboratory, U.S. Dept. of Energy

The last time I wrote about memcapacitors (June 30, 2014 posting: Memristors, memcapacitors, and meminductors for faster computers), the ideas were largely theoretical; I believe this work is the first research I’ve seen on the topic. From an October 17, 2019 news item on ScienceDaily,

Researchers at the Department of Energy’s Oak Ridge National Laboratory [ORNL], the University of Tennessee and Texas A&M University demonstrated bio-inspired devices that accelerate routes to neuromorphic, or brain-like, computing.

Results published in Nature Communications report the first example of a lipid-based “memcapacitor,” a charge storage component with memory that processes information much like synapses do in the brain. Their discovery could support the emergence of computing networks modeled on biology for a sensory approach to machine learning.

An October 16, 2019 ORNL news release (also on EurekAlert but published Oct. 17, 2019), which originated the news item, provides more detail about the work,

“Our goal is to develop materials and computing elements that work like biological synapses and neurons—with vast interconnectivity and flexibility—to enable autonomous systems that operate differently than current computing devices and offer new functionality and learning capabilities,” said Joseph Najem, a recent postdoctoral researcher at ORNL’s Center for Nanophase Materials Sciences, a DOE Office of Science User Facility, and current assistant professor of mechanical engineering at Penn State.

The novel approach uses soft materials to mimic biomembranes and simulate the way nerve cells communicate with one another.

The team designed an artificial cell membrane, formed at the interface of two lipid-coated water droplets in oil, to explore the material’s dynamic, electrophysiological properties. At applied voltages, charges build up on both sides of the membrane as stored energy, analogous to the way capacitors work in traditional electric circuits.

But unlike regular capacitors, the memcapacitor can “remember” a previously applied voltage and—literally—shape how information is processed. The synthetic membranes change surface area and thickness depending on electrical activity. These shapeshifting membranes could be tuned as adaptive filters for specific biophysical and biochemical signals.
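The “remembering” described above – capacitance that depends on an internal state (membrane area and thickness) shaped by recent voltage history – can be sketched with a first-order toy model. The function, relaxation rule, and numbers are hypothetical illustrations, not the ORNL lipid-membrane equations:

```python
# Toy memcapacitor sketch: capacitance depends on an internal state variable
# that relaxes toward a voltage-dependent target, so the device "remembers"
# recently applied voltages for a while after they are removed.
# Hypothetical first-order model for illustration only.

def simulate_memcapacitor(voltages, c0=1.0, alpha=0.5, relax=0.2):
    """Return the capacitance after each applied voltage step."""
    state, caps = 0.0, []
    for v in voltages:
        # state drifts toward |v| (membrane-area-like change), but only slowly
        state += relax * (abs(v) - state)
        caps.append(c0 * (1.0 + alpha * state))   # larger "area" -> larger C
    return caps

# Three 1 V steps grow the capacitance; removing the voltage lets it relax,
# but it stays above its baseline for a while -- the memory effect.
caps = simulate_memcapacitor([1.0, 1.0, 1.0, 0.0, 0.0])
```

This history dependence, capacitance as a function of past voltage rather than only the present one, is what distinguishes a memcapacitor from an ordinary capacitor.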

“The novel functionality opens avenues for nondigital signal processing and machine learning modeled on nature,” said ORNL’s Pat Collier, a CNMS staff research scientist.

A distinct feature of all digital computers is the separation of processing and memory. Information is transferred back and forth from the hard drive and the central processor, creating an inherent bottleneck in the architecture no matter how small or fast the hardware can be.

Neuromorphic computing, modeled on the nervous system, employs architectures that are fundamentally different in that memory and signal processing are co-located in memory elements—memristors, memcapacitors and meminductors.

These “memelements” make up the synaptic hardware of systems that mimic natural information processing, learning and memory.

Systems designed with memelements offer advantages in scalability and low power consumption, but the real goal is to carve out an alternative path to artificial intelligence, said Collier.

Tapping into biology could enable new computing possibilities, especially in the area of “edge computing,” such as wearable and embedded technologies that are not connected to a cloud but instead make on-the-fly decisions based on sensory input and past experience.

Biological sensing has evolved over billions of years into a highly sensitive system with receptors in cell membranes that are able to pick out a single molecule of a specific odor or taste. “This is not something we can match digitally,” Collier said.

Digital computation is built around digital information, the binary language of ones and zeros coursing through electronic circuits. It can emulate the human brain, but its solid-state components do not compute sensory data the way a brain does.

“The brain computes sensory information pushed through synapses in a neural network that is reconfigurable and shaped by learning,” said Collier. “Incorporating biology—using biomembranes that sense bioelectrochemical information—is key to developing the functionality of neuromorphic computing.”

While numerous solid-state versions of memelements have been demonstrated, the team’s biomimetic elements represent new opportunities for potential “spiking” neural networks that can compute natural data in natural ways.

Spiking neural networks are intended to simulate the way neurons spike with electrical potential and, if the signal is strong enough, pass it on to their neighbors through synapses, carving out learning pathways that are pruned over time for efficiency.

A bio-inspired version with analog data processing is a distant aim. Current early-stage research focuses on developing the components of bio-circuitry.

“We started with the basics, a memristor that can weigh information via conductance to determine if a spike is strong enough to be broadcast through a network of synapses connecting neurons,” said Collier. “Our memcapacitor goes further in that it can actually store energy as an electric charge in the membrane, enabling the complex ‘integrate and fire’ activity of neurons needed to achieve dense networks capable of brain-like computation.”

The team’s next steps are to explore new biomaterials and study simple networks to achieve more complex brain-like functionalities with memelements.

Here’s a link to and a citation for the paper,

Dynamical nonlinear memory capacitance in biomimetic membranes by Joseph S. Najem, Md Sakib Hasan, R. Stanley Williams, Ryan J. Weiss, Garrett S. Rose, Graham J. Taylor, Stephen A. Sarles & C. Patrick Collier. Nature Communications volume 10, Article number: 3239 (2019) DOI: https://doi.org/10.1038/s41467-019-11223-8 Published: 19 July 2019

This paper is open access.

One final comment, you might recognize one of the authors (R. Stanley Williams) who in 2008 helped launch ‘memristor’ research.

Electronics begone! Enter: the light-based brainlike computing chip

At this point, it’s possible I’m wrong but I think this is the first ‘memristor’ type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured here on this blog. In other words, it’s not, technically speaking, a memristor but it does have the same properties so it is a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.
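The core operation described above – input light pulses on separate wavelengths, each attenuated by a phase-change cell whose transmission acts as the synaptic weight, summed by a neuron that fires above a threshold – can be sketched numerically. The function, weights, and threshold are illustrative assumptions, not the values from the Nature paper:

```python
# Sketch of a photonic synapse/neuron: each input spike is a light pulse on
# its own wavelength (wavelength-division multiplexing); a phase-change cell
# attenuates it by a transmission factor that acts as the synaptic weight,
# and the neuron sums the transmitted power. Illustrative numbers only.

def photonic_neuron(pulses, weights, threshold=1.0):
    """Weighted sum of optical pulse powers; returns (output_power, fired)."""
    power = sum(p * w for p, w in zip(pulses, weights))
    return power, power >= threshold

# Four inputs for illustration; PCM transmission levels range from 0 to 1.
weights = [0.9, 0.1, 0.8, 0.2]
out_match, fired_match = photonic_neuron([1, 0, 1, 0], weights)  # sought pattern
out_other, fired_other = photonic_neuron([0, 1, 0, 1], weights)  # other pattern
```

Training (supervised or unsupervised) then amounts to nudging the phase-change cells between their crystalline and amorphous states to adjust these transmission weights until only the sought pattern clears the threshold.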

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019) DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For information with details such as the total cost, contribution from the EC, the list of partnerships and more there is the Fun-COMP webpage on fabiodisconzi.com.