
Swiss researchers, memristors, perovskite crystals, and neuromorphic (brainlike) computing

A May 18, 2022 news item on Nanowerk highlights research into making memristors more ‘flexible’ (Note: There’s an almost identical May 18, 2022 news item on ScienceDaily, but the issuing agency is listed as ETH Zurich rather than Empa, as on Nanowerk),

Compared with computers, the human brain is incredibly energy-efficient. Scientists are therefore drawing on how the brain and its interconnected neurons function for inspiration in designing innovative computing technologies. They foresee that these brain-inspired computing systems will be more energy-efficient than conventional ones, as well as better at performing machine-learning tasks.

Much as neurons are responsible for both data storage and data processing in the brain, scientists want to combine storage and processing in a single type of electronic component, known as a memristor. Their hope is that this will help to achieve greater efficiency because moving data between the processor and the storage, as conventional computers do, is the main reason for the high energy consumption in machine-learning applications.

Researchers at ETH Zurich, Empa and the University of Zurich have now developed an innovative concept for a memristor that can be used in a far wider range of applications than existing memristors.

“There are different operation modes for memristors, and it is advantageous to be able to use all these modes depending on an artificial neural network’s architecture,” explains ETH Zurich postdoc Rohit John. “But previous conventional memristors had to be configured for one of these modes in advance.”

The new memristors can now easily switch between two operation modes while in use: a mode in which the signal grows weaker over time and dies (volatile mode), and one in which the signal remains constant (non-volatile mode).

Once you get past the first two paragraphs in the Nanowerk news item, you find that the ETH Zurich and Empa May 18, 2022 press releases by Fabio Begamin are identical; ETH is listed as the authoring agency on EurekAlert (Note: A link has been removed in the following),

Just like in the brain

“These two operation modes are also found in the human brain,” John says. On the one hand, stimuli at the synapses are transmitted from neuron to neuron with biochemical neurotransmitters. These stimuli start out strong and then gradually become weaker. On the other hand, new synaptic connections to other neurons form in the brain while we learn. These connections are longer-lasting.

John, who is a postdoc in the group headed by ETH Professor Maksym Kovalenko, was awarded an ETH fellowship for outstanding postdoctoral researchers in 2020. John conducted this research together with Yiğit Demirağ, a doctoral student in Professor Giacomo Indiveri’s group at the Institute for Neuroinformatics of the University of Zurich and ETH Zurich.

Semiconductor known from solar cells

The memristors the researchers have developed are made of halide perovskite nanocrystals, a semiconductor material known primarily from its use in photovoltaic cells. “The ‘nerve conduction’ in these new memristors is mediated by temporarily or permanently stringing together silver ions from an electrode to form a nanofilament penetrating the perovskite structure through which current can flow,” explains Kovalenko.

This process can be regulated to make the silver-ion filament either thin, so that it gradually breaks back down into individual silver ions (volatile mode), or thick and permanent (non-volatile mode). This is controlled by the intensity of the current applied to the memristor: a weak current activates the volatile mode, while a strong current activates the non-volatile mode.
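
To make the two modes concrete, here’s a minimal Python sketch of the idea. It’s a toy model of my own, with invented thresholds and time constants rather than the researchers’ device physics: a weak write current leaves a conductance that decays away, while a strong one leaves a conductance that persists.

```python
import numpy as np

# Toy model of a reconfigurable memristor. A weak write current forms a
# fragile filament whose conductance decays away (volatile mode); a strong
# write current forms a stable filament (non-volatile mode).
# All thresholds and time constants are invented for illustration.
STRONG_WRITE = 1.0   # write-current threshold for a permanent filament
TAU = 50.0           # decay time constant of a fragile filament

def simulate(write_current, t_max=200.0, dt=1.0):
    g = write_current                     # conductance set by the write pulse
    volatile = write_current < STRONG_WRITE
    for _ in range(int(t_max / dt)):
        if volatile:
            g *= np.exp(-dt / TAU)        # fragile filament dissolves over time
    return g

print(f"weak write (0.3):   conductance after 200 steps = {simulate(0.3):.4f}")
print(f"strong write (1.5): conductance after 200 steps = {simulate(1.5):.4f}")
```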

New toolkit for neuroinformaticians

“To our knowledge, this is the first memristor that can be reliably switched between volatile and non-volatile modes on demand,” Demirağ says. This means that in the future, computer chips could be manufactured with memristors that enable both modes. This is a significant advance because it is usually not possible to combine several different types of memristors on one chip.

Within the scope of the study, which they published in the journal Nature Communications, the researchers tested 25 of these new memristors and carried out 20,000 measurements with them. In this way, they were able to simulate a computational problem on a complex network. The problem involved classifying a number of different neuron spikes as one of four predefined patterns.

Before these memristors can be used in computer technology, they will need to undergo further optimisation. However, such components are also important for research in neuroinformatics, as Indiveri points out: “These components come closer to real neurons than previous ones. As a result, they help researchers to better test hypotheses in neuroinformatics and hopefully gain a better understanding of the computing principles of real neuronal circuits in humans and animals.”

Here’s a link to and a citation for the paper,

Reconfigurable halide perovskite nanocrystal memristors for neuromorphic computing by Rohit Abraham John, Yiğit Demirağ, Yevhen Shynkarenko, Yuliia Berezovska, Natacha Ohannessian, Melika Payvand, Peng Zeng, Maryna I. Bodnarchuk, Frank Krumeich, Gökhan Kara, Ivan Shorubalko, Manu V. Nair, Graham A. Cooke, Thomas Lippert, Giacomo Indiveri & Maksym V. Kovalenko. Nature Communications volume 13, Article number: 2074 (2022) DOI: https://doi.org/10.1038/s41467-022-29727-1 Published: 19 April 2022

This paper is open access.

Kempner Institute for the Study of Natural and Artificial Intelligence launched at Harvard University, and University of Manchester pushes the boundaries of smart robotics and AI

Before getting to the two news items, it might be a good idea to note that ‘artificial intelligence (AI)’ and ‘robot’ are not synonyms although they are often used that way, even by people who should know better. (sigh … I do it too)

A robot may or may not be animated with artificial intelligence while artificial intelligence algorithms may be installed on a variety of devices such as a phone or a computer or a thermostat or a … .

It’s something to bear in mind when reading about the two new institutions being launched. Now, on to Harvard University.

Kempner Institute for the Study of Natural and Artificial Intelligence

A September 23, 2022 Chan Zuckerberg Initiative (CZI) news release (also on EurekAlert) announces a symposium to launch a new institute close to Mark Zuckerberg’s heart,

On Thursday [September 22, 2022], leadership from the Chan Zuckerberg Initiative (CZI) and Harvard University celebrated the launch of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University with a symposium on Harvard’s campus. Speakers included CZI Head of Science Stephen Quake, President of Harvard University Lawrence Bacow, Provost of Harvard University Alan Garber, and Kempner Institute co-directors Bernardo Sabatini and Sham Kakade. The event also included remarks and panels from industry leaders in science, technology, and artificial intelligence, including Bill Gates, Eric Schmidt, Andy Jassy, Daniel Huttenlocher, Sam Altman, Joelle Pineau, Sangeeta Bhatia, and Yann LeCun, among many others.

The Kempner Institute will seek to better understand the basis of intelligence in natural and artificial systems. Its bold premise is that the two fields are intimately interconnected; the next generation of AI will require the same principles that our brains use for fast, flexible natural reasoning, and understanding how our brains compute and reason requires theories developed for AI. The Kempner Institute will study AI systems, including artificial neural networks, to develop both principled theories [emphasis mine] and a practical understanding of how these systems operate and learn. It will also focus on research topics such as learning and memory, perception and sensation, brain function, and metaplasticity. The Institute will recruit and train future generations of researchers from undergraduates and graduate students to post-docs and faculty — actively recruiting from underrepresented groups at every stage of the pipeline — to study intelligence from biological, cognitive, engineering, and computational perspectives.

CZI Co-Founder and Co-CEO Mark Zuckerberg [chairman and chief executive officer of Meta/Facebook] said: “The Kempner Institute will be a one-of-a-kind institute for studying intelligence and hopefully one that helps us discover what intelligent systems really are, how they work, how they break and how to repair them. There’s a lot of exciting implications because once you understand how something is supposed to work and how to repair it once it breaks, you can apply that to the broader mission the Chan Zuckerberg Initiative has to empower scientists to help cure, prevent or manage all diseases.”

CZI Co-Founder and Co-CEO Priscilla Chan said: “Just attending this school meant the world to me. But to stand on this stage and to be able to give something back is truly a dream come true … All of this progress starts with building one fundamental thing: a Kempner community that’s diverse, multi-disciplinary and multi-generational, because incredible ideas can come from anyone. If you bring together people from all different disciplines to look at a problem and give them permission to articulate their perspective, you might start seeing insights or solutions in a whole different light. And those new perspectives lead to new insights and discoveries and generate new questions that can lead an entire field to blossom. So often, that momentum is what breaks the dam and tears down old orthodoxies, unleashing new floods of new ideas that allow us to progress together as a society.”

CZI Head of Science Stephen Quake said: “It’s an honor to partner with Harvard in building this extraordinary new resource for students and science. This is a once-in-a-generation moment for life sciences and medicine. We are living in such an extraordinary and exciting time for science. Many breakthrough discoveries are going to happen not only broadly but right here on this campus and at this institute.”

CZI’s 10-year vision is to advance research and develop technologies to observe, measure, and analyze any biological process within the human body — across spatial scales and in real time. CZI’s goal is to accelerate scientific progress by funding scientific research to advance entire fields; working closely with scientists and engineers at partner institutions like the Chan Zuckerberg Biohub and Chan Zuckerberg Institute for Advanced Biological Imaging to do the research that can’t be done in conventional environments; and building and democratizing next-generation software and hardware tools to drive biological insights and generate more accurate and biologically important sources of data.

President of Harvard University Lawrence Bacow said: “Here we are with this incredible opportunity that Priscilla Chan and Mark Zuckerberg have given us to imagine taking what we know about the brain, neuroscience and how to model intelligence and putting them together in ways that can inform both, and can truly advance our understanding of intelligence from multiple perspectives.”

Kempner Institute Co-Director and Gordon McKay Professor of Computer Science and of Statistics at the Harvard John A. Paulson School of Engineering and Applied Sciences Sham Kakade said: “Now we begin assembling a world-leading research and educational program at Harvard that collectively tries to understand the fundamental mechanisms of intelligence and seeks to apply these new technologies for the benefit of humanity … We hope to create a vibrant environment for all of us to engage in broader research questions … We want to train the next generation of leaders because those leaders will go on to do the next set of great things.”

Kempner Institute Co-Director and the Alice and Rodman W. Moorhead III Professor of Neurobiology at Harvard Medical School Bernardo Sabatini said: “We’re blending research, education and computation to nurture, raise up and enable any scientist who is interested in unraveling the mysteries of the brain. This field is a nascent and interdisciplinary one, so we’re going to have to teach neuroscience to computational biologists, who are going to have to teach machine learning to cognitive scientists and math to biologists. We’re going to do whatever is necessary to help each individual thrive and push the field forward … Success means we develop mathematical theories that explain how our brains compute and learn, and these theories should be specific enough to be testable and useful enough to start to explain diseases like schizophrenia, dyslexia or autism.”

About the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative was founded in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education, to addressing the needs of our communities. Through collaboration, providing resources and building technology, our mission is to help build a more inclusive, just and healthy future for everyone. For more information, please visit chanzuckerberg.com.

Principled theories, eh. I don’t see a single mention of ethicists or anyone in the social sciences or the humanities or the arts. How are scientists and engineers who have no training in or education in or, even, an introduction to ethics or social impacts or psychology going to manage this?

Mark Zuckerberg’s approach to these issues was something along the lines of “it’s easier to ask for forgiveness than to ask for permission.” I understand there have been changes but it took far too long to recognize the damage let alone attempt to address it.

If you want to gain a little more insight into the Kempner Institute, there’s a December 7, 2021 article by Alvin Powell announcing the institute for the Harvard Gazette,

The institute will be funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg, which was announced Tuesday [December 7, 2021] by the Chan Zuckerberg Initiative. The gift will support 10 new faculty appointments, significant new computing infrastructure, and resources to allow students to flow between labs in pursuit of ideas and knowledge. The institute’s name honors Zuckerberg’s mother, Karen Kempner Zuckerberg, and her parents — Zuckerberg’s grandparents — Sidney and Gertrude Kempner. Chan and Zuckerberg have given generously to Harvard in the past, supporting students, faculty, and researchers in a range of areas, including around public service, literacy, and cures.

“The Kempner Institute at Harvard represents a remarkable opportunity to bring together approaches and expertise in biological and cognitive science with machine learning, statistics, and computer science to make real progress in understanding how the human brain works to improve how we address disease, create new therapies, and advance our understanding of the human body and the world more broadly,” said President Larry Bacow.

Q&A

Bernardo Sabatini and Sham Kakade [Institute co-directors]

GAZETTE: Tell me about the new institute. What is its main reason for being?

SABATINI: The institute is designed to take from two fields and bring them together, hopefully to create something that’s essentially new, though it’s been tried in a couple of places. Imagine that you have over here cognitive scientists and neurobiologists who study the human brain, including the basic biological mechanisms of intelligence and decision-making. And then over there, you have people from computer science, from mathematics and statistics, who study artificial intelligence systems. Those groups don’t talk to each other very much.

We want to recruit from both populations to fill in the middle and to create a new population, through education, through graduate programs, through funding programs — to grow from academic infancy — those equally versed in neuroscience and in AI systems, who can be leaders for the next generation.

Over the millions of years that vertebrates have been evolving, the human brain has developed specializations that are fundamental for learning and intelligence. We need to know what those are to understand their benefits and to ask whether they can make AI systems better. At the same time, as people who study AI and machine learning (ML) develop mathematical theories as to how those systems work and can say that a network of the following structure with the following properties learns by calculating the following function, then we can take those theories and ask, “Is that actually how the human brain works?”

KAKADE: There’s a question of why now? In the technological space, the advancements are remarkable even to me, as a researcher who knows how these things are being made. I think there’s a long way to go, but many of us feel that this is the right time to study intelligence more broadly. You might also ask: Why is this mission unique and why is this institute different from what’s being done in academia and in industry? Academia is good at putting out ideas. Industry is good at turning ideas into reality. We’re in a bit of a sweet spot. We have the scale to study approaches at a very different level: It’s not going to be just individual labs pursuing their own ideas. We may not be as big as the biggest companies, but we can work on the types of problems that they work on, such as having the compute resources to work on large language models. Industry has exciting research, but the spectrum of ideas produced is very different, because they have different objectives.

For the die-hards, there’s a September 23, 2022 article by Clea Simon in the Harvard Gazette, which updates the 2021 story.

Next, Manchester, England.

Manchester Centre for Robotics and AI

Robotots take a break at a lab at The University of Manchester – picture courtesy of Marketing Manchester [downloaded from https://www.manchester.ac.uk/discover/news/manchester-ai-summit-aims-to-attract-experts-in-advanced-engineering-and-robotics/]

A November 22, 2022 University of Manchester press release (also on EurekAlert) announces both a meeting and a new centre, Note: Links to the Centre have been retained; all others have been removed,

How humans and super smart robots will live and work together in the future will be among the key issues being scrutinised by experts at a new centre of excellence for AI and autonomous machines based at The University of Manchester.

The Manchester Centre for Robotics and AI will be a new specialist multi-disciplinary centre to explore developments in smart robotics through the lens of artificial intelligence (AI) and autonomous machinery.

The University of Manchester has built a modern reputation of excellence in AI and robotics, partly based on the legacy of pioneering thought leadership begun in this field in Manchester by legendary codebreaker Alan Turing.

Manchester’s new multi-disciplinary centre is home to world-leading research from across the academic disciplines – and this group will hold its first conference on Wednesday, Nov 23, at the University’s new engineering and materials facilities.

A highlight will be a joint talk by robotics expert Dr Andy Weightman and theologian Dr Scott Midson which is expected to put a spotlight on ‘posthumanism’, a future world where humans won’t be the only highly intelligent decision-makers.

Dr Weightman, who researches home-based rehabilitation robotics for people with neurological impairment, and Dr Midson, who researches theological and philosophical critiques of posthumanism, will discuss how interdisciplinary research can help with the special challenges of rehabilitation robotics – and, ultimately, what it means to be human “in the face of the promises and challenges of human enhancement through robotic and autonomous machines”.

Other topics that the centre will have a focus on will include applications of robotics in extreme environments.

For the past decade, a specialist Manchester team led by Professor Barry Lennox has designed robots to work safely in nuclear decommissioning sites in the UK. A ground-breaking robot called Lyra that has been developed by Professor Lennox’s team – and recently deployed at the Dounreay site in Scotland, the “world’s deepest nuclear clean up site” – has been listed in Time Magazine’s Top 200 innovations of 2022.

Angelo Cangelosi, Professor of Machine Learning and Robotics at Manchester, said the University offers a world-leading position in the field of autonomous systems – a technology that will be an integral part of our future world. 

Professor Cangelosi, co-Director of Manchester’s Centre for Robotics and AI, said: “We are delighted to host our inaugural conference which will provide a special showcase for our diverse academic expertise to design robotics for a variety of real world applications.

“Our research and innovation team are at the interface between robotics, autonomy and AI – and their knowledge is drawn from across the University’s disciplines, including biological and medical sciences – as well the humanities and even theology. [emphases mine]

“This rich diversity offers Manchester a distinctive approach to designing robots and autonomous systems for real world applications, especially when combined with our novel use of AI-based knowledge.”

Delegates will have a chance to observe a series of robots and autonomous machines being demoed at the new conference.

The University of Manchester’s Centre for Robotics and AI will aim to: 

  • design control systems with a focus on bio-inspired solutions to mechatronics, eg the use of biomimetic sensors, actuators and robot platforms; 
  • develop new software engineering and AI methodologies for verification in autonomous systems, with the aim to design trustworthy autonomous systems; 
  • research human-robot interaction, with a pioneering focus on the use of brain-inspired approaches [emphasis mine] to robot control, learning and interaction; and 
  • research the ethics and human-centred robotics issues, for the understanding of the impact of the use of robots and autonomous systems with individuals and society. 

In some ways, the Kempner Institute and the Manchester Centre for Robotics and AI have very similar interests, especially where the brain is concerned. What fascinates me is the Manchester Centre’s inclusion of theologian Dr Scott Midson and the discussion (at the meeting) of ‘posthumanism’. The difference is between actual engagement at the centre’s conference and a mere mention in the institute’s news release.

I wish the best for both institutions.

Tiny nanomagnets interact like neurons in the brain for low energy artificial intelligence (brainlike) computing

Saving energy is one of the main drivers for the current race to make neuromorphic (brainlike) computers as this May 5, 2022 news item on Nanowerk comments, Note: Links have been removed,

Researchers have shown it is possible to perform artificial intelligence using tiny nanomagnets that interact like neurons in the brain.

The new method, developed by a team led by Imperial College London researchers, could slash the energy cost of artificial intelligence (AI), which is currently doubling globally every 3.5 months. [emphasis mine]

In a paper published in Nature Nanotechnology (“Reconfigurable training and reservoir computing in an artificial spin-vortex ice via spin-wave fingerprinting”), the international team have produced the first proof that networks of nanomagnets can be used to perform AI-like processing. The researchers showed nanomagnets can be used for ‘time-series prediction’ tasks, such as predicting and regulating insulin levels in diabetic patients.

A May 5, 2022 Imperial College London (ICL) press release (also on EurekAlert) by Hayley Dunning, which originated the news item, delves further into the research,

Artificial intelligence that uses ‘neural networks’ aims to replicate the way parts of the brain work, where neurons talk to each other to process and retain information. A lot of the maths used to power neural networks was originally invented by physicists to describe the way magnets interact, but at the time it was too difficult to use magnets directly as researchers didn’t know how to put data in and get information out.

Instead, software run on traditional silicon-based computers was used to simulate the magnet interactions, in turn simulating the brain. Now, the team have been able to use the magnets themselves to process and store data – cutting out the middleman of the software simulation and potentially offering enormous energy savings.

Nanomagnetic states

Nanomagnets can come in various ‘states’, depending on their direction. Applying a magnetic field to a network of nanomagnets changes the state of the magnets based on the properties of the input field, but also on the states of surrounding magnets.

The team, led by Imperial Department of Physics researchers, were then able to design a technique to count the number of magnets in each state once the field has passed through, giving the ‘answer’.

Co-first author of the study Dr Jack Gartside said: “We’ve been trying to crack the problem of how to input data, ask a question, and get an answer out of magnetic computing for a long time. Now we’ve proven it can be done, it paves the way for getting rid of the computer software that does the energy-intensive simulation.”

Co-first author Kilian Stenning added: “How the magnets interact gives us all the information we need; the laws of physics themselves become the computer.”

Team leader Dr Will Branford said: “It has been a long-term goal to realise computer hardware inspired by the software algorithms of Sherrington and Kirkpatrick. It was not possible using the spins on atoms in conventional magnets, but by scaling up the spins into nanopatterned arrays we have been able to achieve the necessary control and readout.”
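
The paper’s title frames this approach as ‘reservoir computing’: input perturbs a fixed physical system with rich dynamics, and only a simple readout is trained. A minimal software analogue is the echo state network, sketched below with arbitrary sizes and constants; the random recurrent matrix stands in for the interacting nanomagnet array, and the trained linear readout plays the role of counting the magnet states after the field has passed through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random "reservoir" -- a software stand-in for the interacting
# nanomagnet array. Its internal couplings are never trained.
N = 200
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

def run_reservoir(u):
    """Drive the reservoir with the input series u; return the state history."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Time-series prediction task: learn to predict the next sample of a signal.
t = np.arange(3000)
u = np.sin(0.2 * t) * np.sin(0.0331 * t)   # toy signal
X = run_reservoir(u[:-1])                  # reservoir states
y = u[1:]                                  # targets: the next value

# Train only the linear readout (ridge regression); the reservoir is untouched.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
rmse = np.sqrt(np.mean((X @ W_out - y) ** 2))
print(f"one-step prediction error (RMSE): {rmse:.4f}")
```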

Slashing energy cost

AI is now used in a range of contexts, from voice recognition to self-driving cars. But training AI to do even relatively simple tasks can take huge amounts of energy. For example, training AI to solve a Rubik’s cube took the energy equivalent of two nuclear power stations running for an hour.

Much of the energy used to achieve this in conventional, silicon-chip computers is wasted in inefficient transport of electrons during processing and memory storage. Nanomagnets however don’t rely on the physical transport of particles like electrons, but instead process and transfer information in the form of a ‘magnon’ wave, where each magnet affects the state of neighbouring magnets.

This means much less energy is lost, and that the processing and storage of information can be done together, rather than being separate processes as in conventional computers. This innovation could make nanomagnetic computing up to 100,000 times more efficient than conventional computing.

AI at the edge

The team will next teach the system using real-world data, such as ECG signals, and hope to make it into a real computing device. Eventually, magnetic systems could be integrated into conventional computers to improve energy efficiency for intense processing tasks.

Their energy efficiency also means they could feasibly be powered by renewable energy, and used to do ‘AI at the edge’ – processing the data where it is being collected, such as weather stations in Antarctica, rather than sending it back to large data centres.

It also means they could be used on wearable devices to process biometric data on the body, such as predicting and regulating insulin levels for diabetic people or detecting abnormal heartbeats.

Here’s a link to and a citation for the paper,

Reconfigurable training and reservoir computing in an artificial spin-vortex ice via spin-wave fingerprinting by Jack C. Gartside, Kilian D. Stenning, Alex Vanstone, Holly H. Holder, Daan M. Arroo, Troy Dion, Francesco Caravelli, Hidekazu Kurebayashi & Will R. Branford. Nature Nanotechnology (2022) DOI: https://doi.org/10.1038/s41565-022-01091-7 Published 05 May 2022

This paper is behind a paywall.

Quantum memristors

This March 24, 2022 news item on Nanowerk announcing work on a quantum memristor seems to have had a rough translation from German to English,

In recent years, artificial intelligence has become ubiquitous, with applications such as speech interpretation, image recognition, medical diagnosis, and many more. At the same time, quantum technology has been proven capable of computational power well beyond the reach of even the world’s largest supercomputer.

Physicists at the University of Vienna have now demonstrated a new device, called a quantum memristor, which may allow these two worlds to be combined, thus unlocking unprecedented capabilities. The experiment, carried out in collaboration with the National Research Council (CNR) and the Politecnico di Milano in Italy, has been realized on an integrated quantum processor operating on single photons.

Caption: Abstract representation of a neural network which is made of photons and has memory capability potentially related to artificial intelligence. Credit: © Equinox Graphics, University of Vienna

A March 24, 2022 University of Vienna (Universität Wien) press release (also on EurekAlert), which originated the news item, explains why this work has an impact on artificial intelligence,

At the heart of all artificial intelligence applications are mathematical models called neural networks. These models are inspired by the biological structure of the human brain, made of interconnected nodes. Just like our brain learns by constantly rearranging the connections between neurons, neural networks can be mathematically trained by tuning their internal structure until they become capable of human-level tasks: recognizing our face, interpreting medical images for diagnosis, even driving our cars. Having integrated devices capable of performing the computations involved in neural networks quickly and efficiently has thus become a major research focus, both academic and industrial.

One of the major game changers in the field was the discovery of the memristor, made in 2008. This device changes its resistance depending on a memory of the past current, hence the name memory-resistor, or memristor. Immediately after its discovery, scientists realized that (among many other applications) the peculiar behavior of memristors was surprisingly similar to that of neural synapses. The memristor has thus become a fundamental building block of neuromorphic architectures.
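
That history dependence is easy to see in code. Below is a rough sketch in the spirit of the linear ion-drift model from HP Labs’ 2008 paper, with illustrative parameter values only: an internal state tracks how much charge has flowed through the device, and the resistance follows that state.

```python
import numpy as np

# Simplified linear ion-drift memristor (in the spirit of Strukov et al., 2008).
# An internal state w in [0, 1] records how much charge has flowed through;
# the resistance interpolates between R_ON and R_OFF accordingly.
# Parameter values are illustrative only.
R_ON, R_OFF = 100.0, 16_000.0   # ohms
K = 5_000.0                     # drift constant (arbitrary units)

w, dt = 0.5, 1e-3
w_trace = []
for t in np.arange(0.0, 2.0, dt):
    v = np.sin(2 * np.pi * t)              # sinusoidal drive
    r = R_ON * w + R_OFF * (1.0 - w)       # resistance depends on history
    i = v / r
    w = np.clip(w + K * i * dt, 0.0, 1.0)  # state drifts with charge flow
    w_trace.append(w)

# The state swings and returns as the drive cycles: the current "writes" memory.
print(f"state ranged from {min(w_trace):.3f} to {max(w_trace):.3f}")
```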

A group of experimental physicists from the University of Vienna, the National Research Council (CNR) and the Politecnico di Milano, led by Prof. Philip Walther and Dr. Roberto Osellame, has now demonstrated that it is possible to engineer a device that has the same behavior as a memristor, while acting on quantum states and being able to encode and transmit quantum information. In other words, a quantum memristor. Realizing such a device is challenging because the dynamics of a memristor tend to contradict typical quantum behavior.

By using single photons, i.e. single quantum particles of light, and exploiting their unique ability to propagate simultaneously in a superposition of two or more paths, the physicists have overcome the challenge. In their experiment, single photons propagate along waveguides laser-written on a glass substrate and are guided on a superposition of several paths. One of these paths is used to measure the flux of photons going through the device and this quantity, through a complex electronic feedback scheme, modulates the transmission on the other output, thus achieving the desired memristive behavior. Besides demonstrating the quantum memristor, the researchers have provided simulations showing that optical networks with quantum memristors can be used to learn on both classical and quantum tasks, hinting at the fact that the quantum memristor may be the missing link between artificial intelligence and quantum computing.
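
As a purely classical caricature of that feedback loop (and nothing more; none of the quantum behaviour survives), one can code a transmission coefficient that is continually reset by a running average of the measured photon flux. Everything here, constants included, is invented for illustration:

```python
import numpy as np

# Classical caricature of the photonic memristor's feedback loop: the measured
# photon flux feeds back, through a slow running average, to set the device's
# transmission. Form and constants are invented; this shows only the history
# dependence (memristive behaviour), not any quantum effect.
def run(input_flux, alpha=0.01):
    T, mem = 0.5, 0.0
    for u in input_flux:
        measured = u * T                      # flux reaching the detector
        mem = (1 - alpha) * mem + alpha * measured
        T = float(np.clip(1.0 - mem, 0.05, 0.95))
    return T

# Two histories ending in the same recent input leave different device states:
low_history = [0.1] * 500 + [0.5] * 10
high_history = [0.9] * 500 + [0.5] * 10
print(f"after low-flux history:  transmission = {run(low_history):.2f}")
print(f"after high-flux history: transmission = {run(high_history):.2f}")
```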

“Unlocking the full potential of quantum resources within artificial intelligence is one of the greatest challenges of the current research in quantum physics and computer science”, says Michele Spagnolo, who is first author of the publication in the journal “Nature Photonics”. The group of Philip Walther of the University of Vienna has also recently demonstrated that robots can learn faster when using quantum resources and borrowing schemes from quantum computation. This new achievement represents one more step towards a future where quantum artificial intelligence becomes reality.

Here’s a link to and a citation for the paper,

Experimental photonic quantum memristor by Michele Spagnolo, Joshua Morris, Simone Piacentini, Michael Antesberger, Francesco Massa, Andrea Crespi, Francesco Ceccarelli, Roberto Osellame & Philip Walther. Nature Photonics volume 16, pages 318–323 (2022) DOI: https://doi.org/10.1038/s41566-022-00973-5 Published 24 March 2022 Issue Date April 2022

This paper is open access.

Honey-based neuromorphic chips for brainlike computers?

Photo by Mariana Ibanez on Unsplash Courtesy Washington State University

An April 5, 2022 news item on Nanowerk explains the connection between honey and a neuromorphic (brainlike) computer chip, Note: Links have been removed,

Honey might be a sweet solution for developing environmentally friendly components for neuromorphic computers, systems designed to mimic the neurons and synapses found in the human brain.

Hailed by some as the future of computing, neuromorphic systems are much faster and use much less power than traditional computers. Washington State University engineers have demonstrated one way to make them more organic too.

In a study published in Journal of Physics D (“Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems”), the researchers show that honey can be used to make a memristor, a component similar to a transistor that can not only process but also store data in memory.

An April 5, 2022 Washington State University (WSU) news release (also on EurekAlert) by Sara Zaske, which originated the news item, describes the purpose for the work and details about making chips from honey,

“This is a very small device with a simple structure, but it has very similar functionalities to a human neuron,” said Feng Zhao, associate professor of WSU’s School of Engineering and Computer Science and corresponding author on the study. “This means if we can integrate millions or billions of these honey memristors together, then they can be made into a neuromorphic system that functions much like a human brain.”

For the study, Zhao and first author Brandon Sueoka, a WSU graduate student in Zhao’s lab, created memristors by processing honey into a solid form and sandwiching it between two metal electrodes, making a structure similar to a human synapse. They then tested the honey memristors’ ability to mimic the work of synapses with high switching on and off speeds of 100 and 500 nanoseconds respectively. The memristors also emulated the synapse functions known as spike-timing dependent plasticity and spike-rate dependent plasticity, which are responsible for learning processes in human brains and retaining new information in neurons.
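
Spike-timing-dependent plasticity has a standard textbook form that is worth seeing: the weight change depends exponentially on the gap between pre- and postsynaptic spike times, strengthening when the presynaptic spike comes first and weakening otherwise. The sketch below is that generic rule, not WSU’s measured device behaviour; amplitudes and time constants are generic choices.

```python
import numpy as np

# Textbook STDP rule: the weight change depends on the timing gap
# dt = t_post - t_pre. Pre-before-post (dt > 0) potentiates; the reverse
# depresses. Amplitudes and time constants are generic, not device data.
A_PLUS, A_MINUS = 0.05, 0.055
TAU = 20.0   # milliseconds

def stdp_dw(dt_ms):
    if dt_ms > 0:
        return A_PLUS * np.exp(-dt_ms / TAU)    # potentiation
    return -A_MINUS * np.exp(dt_ms / TAU)       # depression

for gap in (+5, +20, -5, -20):
    print(f"t_post - t_pre = {gap:+3d} ms -> dw = {stdp_dw(gap):+.4f}")
```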

The WSU engineers created the honey memristors on a micro-scale, so they are about the size of a human hair. The research team led by Zhao plans to develop them on a nanoscale, about 1/1000 of a human hair, and bundle many millions or even billions together to make a full neuromorphic computing system.

Currently, conventional computer systems are based on what’s called the von Neumann architecture. Named after its creator, this architecture involves an input, usually from a keyboard and mouse, and an output, such as the monitor. It also has a CPU, or central processing unit, and RAM, or memory storage. Transferring data through all these mechanisms from input to processing to memory to output takes a lot of power, at least compared to the human brain, Zhao said. For instance, the Fugaku supercomputer uses upwards of 28 megawatts, roughly equivalent to 28 million watts, to run while the brain uses only around 10 to 20 watts.

The human brain has more than 100 billion neurons with more than 1,000 trillion synapses, or connections, among them. Each neuron can both process and store data, which makes the brain much more efficient than a traditional computer, and developers of neuromorphic computing systems aim to mimic that structure.

Several companies, including Intel and IBM, have released neuromorphic chips which have the equivalent of more than 100 million “neurons” per chip, but this is not yet near the number in the brain. Many developers are also still using the same nonrenewable and toxic materials that are currently used in conventional computer chips.

Many researchers, including Zhao’s team, are searching for biodegradable and renewable solutions for use in this promising new type of computing. Zhao is also leading investigations into using proteins and other sugars such as those found in Aloe vera leaves in this capacity, but he sees strong potential in honey.

“Honey does not spoil,” he said. “It has a very low moisture concentration, so bacteria cannot survive in it. This means these computer chips will be very stable and reliable for a very long time.”

The honey memristor chips developed at WSU should tolerate the lower levels of heat generated by neuromorphic systems which do not get as hot as traditional computers. The honey memristors will also cut down on electronic waste.

“When we want to dispose of devices using computer chips made of honey, we can easily dissolve them in water,” he said. “Because of these special properties, honey is very useful for creating renewable and biodegradable neuromorphic systems.”

This also means, Zhao cautioned, that just like conventional computers, users will still have to avoid spilling their coffee on them.

Nice note of humour at the end. There are a few questions: I wonder if the variety of honey (clover, orange blossom, blackberry, etc.) has an impact on the chip’s speed and/or longevity. Also, if someone spilled coffee and the chip melted and a child decided to lap it up, what would happen?

Here’s a link to and a citation for the paper,

Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems by Brandon Sueoka and Feng Zhao. Journal of Physics D: Applied Physics, Volume 55, Number 22 (225105) Published 7 March 2022 • © 2022 IOP Publishing Ltd

This paper is behind a paywall.

Simulating neurons and synapses with memristive devices

I’ve been meaning to get to this research on ‘neuromorphic memory’ for a while. From a May 20, 2022 news item on Nanowerk,

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated.

However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge.

A May 20, 2022 Korea Advanced Institute of Science and Technology (KAIST) press release (also on EurekAlert), which originated the news item, delves further into the research,

To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

Similar to commercial graphics cards, the artificial synaptic devices previously studied were often used to accelerate parallel computations, which differs clearly from the operational mechanisms of the human brain. The research team implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
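
As a software analogy (mine, not KAIST’s; the real device works through threshold-switch and phase-change physics), you can picture each unit cell as pairing a fast-decaying short-term variable with a persistent long-term one, where repeated short-term activity consolidates into the long-term state:

```python
# Toy unit cell pairing a volatile variable (threshold-switch-like, short-term)
# with a non-volatile one (phase-change-like, long-term). Purely illustrative;
# not a model of the KAIST device physics. All constants are invented.
class UnitCell:
    def __init__(self):
        self.short = 0.0   # decays every step (volatile, "neuron-like")
        self.long = 0.0    # persists until rewritten (non-volatile, "synapse-like")

    def step(self, stimulus):
        self.short = 0.8 * self.short + stimulus   # leaky short-term trace
        if self.short > 1.0:                       # repeated stimulation...
            self.long += 0.1                       # ...consolidates long-term
            self.short = 0.0

cell = UnitCell()
for _ in range(30):
    cell.step(0.3)          # weak but repeated stimuli
print(f"short-term trace: {cell.short:.2f}, long-term weight: {cell.long:.2f}")
```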

Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”

Here’s a link to and a citation for the paper,

Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse by Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im & Keon Jae Lee. Nature Communications volume 13, Article number: 2811 (2022) DOI https://doi.org/10.1038/s41467-022-30432-2 Published 19 May 2022

This paper is open access.

Neuromorphic hardware could yield computational advantages for more than just artificial intelligence

Neuromorphic (brainlike) computing doesn’t have to be used for cognitive tasks only, according to a research team at the US Dept. of Energy’s Sandia National Laboratories, as per their March 11, 2022 news release by Neal Singer (also on EurekAlert but published March 10, 2022), Note: Links have been removed,

With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories. …

The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.

“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”

In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.

The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.

“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”

Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

Franke models photon and electron radiation to understand their effects on components.

The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”

The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.

Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor. Energy is the limiting factor — more chips can be inserted to run things in parallel, thus faster, but the same electric bill occurs whether it is one computer doing everything or 10,000 computers doing the work. Image courtesy of Sandia National Laboratories.

Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.

There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”

Severa wrote several of the experiment’s algorithms.

Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information. But because the process only depletes energies from sensors and neurons if they contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not.

The conceptually bio-based process has another advantage: Its computing and memory components exist in the same structure, while conventional computing uses up energy by distant transfer between these two functions. The slow reaction time of the artificial neurons initially may slow down its solutions, but this factor disappears as the number of neurons is increased so more information is available in the same time period to be totaled, said Aimone.
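
The charge-until-threshold behaviour described here is the classic leaky integrate-and-fire neuron model; a minimal sketch of that standard model follows, with generic textbook constants not tied to Loihi or any particular chip.

```python
import numpy as np

rng = np.random.default_rng(2)

# Leaky integrate-and-fire neuron: charge from incoming events accumulates,
# leaks away over time, and the neuron spikes when a threshold is crossed.
# Constants are generic textbook choices, not tied to any particular chip.
V_THRESH, LEAK, JUMP = 1.0, 0.1, 0.2
v, spike_times = 0.0, []

for t in range(200):
    event = rng.random() < 0.5      # a sensor event arrives (or not)
    v += JUMP * event - LEAK * v    # integrate the input, leak some charge
    if v >= V_THRESH:
        spike_times.append(t)       # irregular, event-driven spiking
        v = 0.0                     # reset after the burst

print(f"{len(spike_times)} spikes; first few at t = {spike_times[:5]}")
```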

The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital is at least partially determined by the preceding length of stay.

Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.
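
In conventional code, this kind of random-walk Monte Carlo looks roughly like the toy one-dimensional example below, which estimates the chance that a diffusing particle crosses a barrier. It’s my generic illustration of the method, not Sandia’s algorithm, which instead maps the walks onto spiking neurons.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy random-walk Monte Carlo: estimate the chance that a diffusing molecule,
# starting one step past the origin, crosses a barrier before wandering back.
# Each walk is a Markov chain: the next position depends only on the current
# one. A generic illustration of the method, not Sandia's algorithm.
N_WALKS, BARRIER, MAX_STEPS = 20_000, 10, 10_000
crossed = 0

for _ in range(N_WALKS):
    x = 1
    for _ in range(MAX_STEPS):
        x += rng.choice((-1, 1))   # one unbiased random step
        if x >= BARRIER:           # made it through the barrier
            crossed += 1
            break
        if x <= 0:                 # absorbed back at the origin
            break

print(f"estimated crossing probability: {crossed / N_WALKS:.3f}")
print("theory (gambler's ruin) predicts 1/10 = 0.100 for these parameters")
```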

“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”

The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.

The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger scale systems then combine multiple chips to a board.

“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”

Here’s a link to and a citation for the paper,

Neuromorphic scaling advantages for energy-efficient random walk computations by J. Darby Smith, Aaron J. Hill, Leah E. Reeder, Brian C. Franke, Richard B. Lehoucq, Ojas Parekh, William Severa & James B. Aimone. Nature Electronics volume 5, pages 102–112 (2022) DOI: https://doi.org/10.1038/s41928-021-00705-7 Issue Date February 2022 Published 14 February 2022

This paper is open access.

An ‘artificial brain’ and life-long learning

Talk of artificial brains (also known as brainlike computing or neuromorphic computing) usually turns to memory fairly quickly. This February 3, 2022 news item on ScienceDaily does too, although the focus is on how memory and forgetting affect the ability to learn,

When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned.

As companies use more and more data to improve how AI recognizes images, learns languages and carries out other complex tasks, a paper publishing in Science this week shows a way that computer chips could dynamically rewire themselves to take in new data like the brain does, helping AI to keep learning over time.

“The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan,” said Shriram Ramanathan, a professor in Purdue University’s [Indiana, US] School of Materials Engineering who specializes in discovering how materials could mimic the brain to improve computing.

Unlike the brain, which constantly forms new connections between neurons to enable learning, the circuits on a computer chip don’t change. A circuit that a machine has been using for years isn’t any different than the circuit that was originally built for the machine in a factory.

This is a problem for making AI more portable, such as for autonomous vehicles or robots in space that would have to make decisions on their own in isolated environments. If AI could be embedded directly into hardware rather than just running on software as AI typically does, these machines would be able to operate more efficiently.

A February 3, 2022 Purdue University news release (also on EurekAlert), which originated the news item, provides more technical detail about the work (Note: Links have been removed),

In this study, Ramanathan and his team built a new piece of hardware that can be reprogrammed on demand through electrical pulses. Ramanathan believes that this adaptability would allow the device to take on all of the functions that are necessary to build a brain-inspired computer.

“If we want to build a computer or a machine that is inspired by the brain, then correspondingly, we want to have the ability to continuously program, reprogram and change the chip,” Ramanathan said.

Toward building a brain in chip form

The hardware is a small, rectangular device made of a material called perovskite nickelate, which is very sensitive to hydrogen. Applying electrical pulses at different voltages allows the device to shuffle a concentration of hydrogen ions in a matter of nanoseconds, creating states that the researchers found could be mapped out to corresponding functions in the brain.

When the device has more hydrogen near its center, for example, it can act as a neuron, a single nerve cell. With less hydrogen at that location, the device serves as a synapse, a connection between neurons, which is what the brain uses to store memory in complex neural circuits.
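
Conceptually, the on-demand reprogramming can be caricatured in a few lines of code. This is an invented illustration of the idea rather than the nickelate physics: an internal ‘hydrogen concentration’ variable is nudged by voltage pulses, and the device’s role follows from its value.

```python
# Toy sketch of a reconfigurable element whose function depends on an internal
# "hydrogen concentration" set by voltage pulses. Invented illustration of the
# reprogramming concept, not a model of the perovskite nickelate device.
class ReconfigurableDevice:
    def __init__(self):
        self.h = 0.5                   # normalized hydrogen level at the center

    def pulse(self, voltage):
        # electrical pulses shuffle ions; the pulse nudges the internal state
        self.h = min(1.0, max(0.0, self.h + 0.1 * voltage))

    def role(self):
        return "neuron" if self.h > 0.6 else "synapse"

dev = ReconfigurableDevice()
print(dev.role())                      # synapse
dev.pulse(+2.0)                        # reprogram on demand with a pulse
print(dev.role())                      # neuron
```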

Through simulations of the experimental data, the Purdue team’s collaborators at Santa Clara University and Portland State University showed that the internal physics of this device creates a dynamic structure for an artificial neural network that is able to more efficiently recognize electrocardiogram patterns and digits compared to static networks. This neural network uses “reservoir computing,” which explains how different parts of a brain communicate and transfer information.

Researchers from The Pennsylvania State University also demonstrated in this study that as new problems are presented, a dynamic network can “pick and choose” which circuits are the best fit for addressing those problems.

Since the team was able to build the device using standard semiconductor-compatible fabrication techniques and operate the device at room temperature, Ramanathan believes that this technique can be readily adopted by the semiconductor industry.

“We demonstrated that this device is very robust,” said Michael Park, a Purdue Ph.D. student in materials engineering. “After programming the device over a million cycles, the reconfiguration of all functions is remarkably reproducible.”

The researchers are working to demonstrate these concepts on large-scale test chips that would be used to build a brain-inspired computer.

Experiments at Purdue were conducted at the FLEX Lab and Birck Nanotechnology Center of Purdue’s Discovery Park. The team’s collaborators at Argonne National Laboratory, the University of Illinois, Brookhaven National Laboratory and the University of Georgia conducted measurements of the device’s properties.

Here’s a link to and a citation for the paper,

Reconfigurable perovskite nickelate electronics for artificial intelligence by Hai-Tian Zhang, Tae Joon Park, A. N. M. Nafiul Islam, Dat S. J. Tran, Sukriti Manna, Qi Wang, Sandip Mondal, Haoming Yu, Suvo Banik, Shaobo Cheng, Hua Zhou, Sampath Gamage, Sayantan Mahapatra, Yimei Zhu, Yohannes Abate, Nan Jiang, Subramanian K. R. S. Sankaranarayanan, Abhronil Sengupta, Christof Teuscher, Shriram Ramanathan. Science • 3 Feb 2022 • Vol 375, Issue 6580 • pp. 533-539 • DOI: 10.1126/science.abj7943

This paper is behind a paywall.

2D materials for a computer’s artificial brain synapses

A January 28, 2022 news item on Nanowerk describes some of the latest work on hardware that could enable neuromorphic (brainlike) computing. Note: A link has been removed,

Researchers from KTH Royal Institute of Technology [Sweden] and Stanford University [US] have fabricated a material for computer components that enable the commercial viability of computers that mimic the human brain (Advanced Functional Materials, “High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene”).

A January 31, 2022 KTH Royal Institute of Technology press release (also on EurekAlert but published January 28, 2022), which originated the news item, delves further into the research,

Electrochemical random-access memory (ECRAM) components made with 2D titanium carbide showed outstanding potential for complementing classical transistor technology and contributing toward the commercialization of powerful computers modeled after the brain’s neural network. Such neuromorphic computers can be thousands of times more energy-efficient than today’s computers.

These advances in computing are possible because of fundamental differences from the classical computing architecture in use today, and because of the ECRAM, a component that acts as a sort of synaptic cell in an artificial neural network, says KTH Associate Professor Max Hamedi.

“Instead of transistors that are either on or off, and the need for information to be carried back and forth between the processor and memory—these new computers rely on components that can have multiple states, and perform in-memory computation,” Hamedi says.
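To see why multi-state cells and in-memory computation go together, consider a crossbar array: if each cell stores one of many conductance levels, an entire matrix-vector product can be read out as column currents in one step. The Python sketch below rests on assumed sizes and levels,

    import numpy as np

    # In-memory multiply on a crossbar of analog cells (such as ECRAMs):
    # each cell stores one of several conductance levels; row voltages
    # produce column currents I = G^T V (Ohm's and Kirchhoff's laws), i.e.
    # a matrix-vector product computed where the data lives. The array
    # size and number of levels are assumptions for illustration.

    levels = 32                                 # multi-state cells, not just on/off
    G = np.round(np.random.rand(4, 3) * (levels - 1)) / (levels - 1)  # conductances
    V = np.array([0.2, -0.1, 0.4, 0.3])         # input voltages on the rows

    I = G.T @ V                                 # column currents, read in one step
    print(I)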

The scientists at KTH and Stanford have focused on testing better materials for building an ECRAM, a component in which switching occurs by inserting ions into an oxidation channel, in a sense similar to our brain, which also works with ions. What has been needed to make these chips commercially viable are materials that overcome the slow kinetics of metal oxides and the poor temperature stability of plastics.

The key material in the ECRAM units that the researchers fabricated is referred to as MXene—a two-dimensional (2D) compound, barely a few atoms thick, consisting of titanium carbide (Ti3C2Tx). The MXene combines the high speed of organic chemistry with the integration compatibility of inorganic materials in a single device operating at the nexus of electrochemistry and electronics, Hamedi says.

Co-author Professor Alberto Salleo at Stanford University, says that MXene ECRAMs combine the speed, linearity, write noise, switching energy, and endurance metrics essential for parallel acceleration of artificial neural networks.

“MXenes are an exciting materials family for this particular application as they combine the temperature stability needed for integration with conventional electronics with the availability of a vast composition space to optimize performance,” Salleo says.

While there are many other barriers to overcome before consumers can buy their own neuromorphic computers, Hamedi says the 2D ECRAMs represent a breakthrough, at least in the area of neuromorphic materials, potentially leading to artificial intelligence that can adapt to confusing input and nuance the way the brain does, with thousands of times less energy consumption. This could also enable portable devices capable of much heavier computing tasks without having to rely on the cloud.

Here’s a link to and a citation for the paper,

High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene by Armantas Melianas, Min-A Kang, Armin VahidMohammadi, Tyler James Quill, Weiqian Tian, Yury Gogotsi, Alberto Salleo, Mahiar Max Hamedi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202109970 First published: 21 November 2021

This paper is open access.

Organic neuromorphic electronics

A December 13, 2021 news item on ScienceDaily describes some research from Germany’s Max Planck Institute for Polymer Research,

The human brain works differently from a computer: while the brain works with biological cells and electrical impulses, a computer uses silicon-based transistors. Scientists have equipped a toy robot with a smart, adaptive electrical circuit made of soft organic materials, similar to biological matter. With this bio-inspired approach, they were able to teach the robot to navigate independently through a maze, using visual signs for guidance.

A December 13, 2021 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, fills in a few details,

The processor is the brain of a computer – an often-quoted phrase. But processors work fundamentally differently than the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived – for example, via the eyes, ears or sense of touch – is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.

Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven [Eindhoven University of Technology; Netherlands], Stanford [University; California, US], Brescia [University; Italy], Oxford [UK] and KAUST [King Abdullah University of Science and Technology, Saudi Arabia].

“We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.

To achieve the navigation of the robot inside the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path through the maze toward the exit is indicated visually at each intersection. Initially, the robot often misinterprets the visual signs and so makes the wrong “turning” decisions at the intersections, losing its way out. When the robot makes these decisions and follows wrong, dead-end paths, it is discouraged from repeating them by corrective stimuli. The corrective stimuli, for example when the robot hits a wall, are applied directly to the organic circuit as electrical signals induced by a touch sensor attached to the robot. With each subsequent run of the experiment, the robot gradually learns to make the right “turning” decisions at the intersections, i.e., to avoid receiving corrective stimuli, and after a few trials it finds its way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
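The learning loop can be caricatured in a few lines of Python; the maze layout, the single weight per intersection, and the update rule are illustrative assumptions, not the organic circuit’s actual physics,

    import random

    # Caricature of the learning loop: each intersection has one adaptive,
    # synapse-like weight; a wrong turn ends in a dead end and a corrective
    # stimulus that nudges the weight toward the correct choice. The maze
    # layout and update rule are illustrative assumptions.

    correct = ["left", "right", "left"]    # the turns that lead to the exit
    w = [0.0, 0.0, 0.0]                    # one weight per intersection

    for trial in range(1, 21):
        failed = False
        for i, good in enumerate(correct):
            turn = "left" if w[i] >= 0 else "right"
            if random.random() < 0.1:      # occasional misread of the visual sign
                turn = "right" if turn == "left" else "left"
            if turn != good:
                w[i] += 0.5 if good == "left" else -0.5   # corrective stimulus
                failed = True
                break                      # dead end; start the next run
        if not failed:
            print(f"found the exit on run {trial}")
            break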

“We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge,” Gkoupidenis says.

Here’s a link to and a citation for the paper,

Organic neuromorphic electronics for sensorimotor integration and learning in robotics by Imke Krauhausen, Dimitrios A. Koutsouras, Armantas Melianas, Scott T. Keene, Katharina Lieberth, Hadrien Ledanseur, Rajendar Sheelamanthula, Alexander Giovannitti, Fabrizio Torricelli, Iain Mcculloch, Paul W. M. Blom, Alberto Salleo, Yoeri van de Burgt and Paschalis Gkoupidenis. Science Advances • 10 Dec 2021 • Vol 7, Issue 50 • DOI: 10.1126/sciadv.abl5068

This paper is open access.