Tag Archives: brain-inspired computing

NorthPole: a brain-inspired chip design for saving energy

One of the main attractions of brain-inspired computing is that it requires less energy than is used in conventional computing. The latest entry into the brain-inspired computing stakes was announced in an October 19, 2023 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Researchers present NorthPole – a brain-inspired chip architecture that blends computation with memory to process data efficiently at low-energy costs. Since its inception, computing has been processor-centric, with memory separated from compute. However, shuttling large amounts of data between memory and compute comes at a high price in terms of both energy consumption and processing bandwidth and speed. This is particularly evident in the case of emerging and advanced real-time artificial intelligence (AI) applications like facial recognition, object detection, and behavior monitoring, which require fast access to vast amounts of data. As a result, most contemporary computer architectures are rapidly reaching physical and processing bottlenecks and risk becoming economically, technically, and environmentally unsustainable, given the growing energy costs involved.

Inspired by the neural architecture of the organic brain, Dharmendra Modha and colleagues developed NorthPole – a neural inference architecture that intertwines compute with memory on a single chip. According to the authors, NorthPole “reimagines the interaction between compute and memory” by blending brain-inspired computing and semiconductor technology. It achieves higher performance, energy-efficiency, and area-efficiency compared to other comparable architectures, including those that use more advanced technology processes. And, because NorthPole is a digital system, it is not subject to the device noise and systemic biases and drifts that afflict analog systems.

Modha et al. demonstrate NorthPole’s capabilities by testing it on the ResNet50 benchmark image classification network, where it achieved a 25 times higher energy metric of frames per second (FPS) per watt, a 5 times higher space metric of FPS per transistor, and a 22 times lower time metric of latency relative to comparable technology. In a related Perspective, Subramanian Iyer and Vwani Roychowdhury discuss NorthPole’s advancements and limitations in greater detail.
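To make those three figures of merit a little more concrete, here’s a minimal sketch of how an energy metric (frames per second per watt), a space metric (frames per second per transistor), and a latency figure are derived from a benchmark run. The numbers are placeholders of my own, not IBM’s measurements,

```python
# Toy illustration of the three benchmark metrics cited for NorthPole.
# All numbers below are placeholders, not measurements from the paper.

frames_processed = 10_000         # images classified during the benchmark run
run_time_s = 12.5                 # wall-clock time for the run, in seconds
average_power_w = 40.0            # average power draw during the run, in watts
transistor_count = 22e9           # transistors on the chip (placeholder)

fps = frames_processed / run_time_s                  # throughput
energy_metric = fps / average_power_w                # frames per second per watt
space_metric = fps / transistor_count                # frames per second per transistor
latency_ms = 1000.0 * run_time_s / frames_processed  # time per frame, assuming
                                                     # frames are processed one at a time

print(f"throughput:    {fps:.1f} FPS")
print(f"energy metric: {energy_metric:.2f} FPS/W")
print(f"space metric:  {space_metric:.2e} FPS/transistor")
print(f"latency:       {latency_ms:.2f} ms/frame")
```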

By the way, the NorthPole chip is a result of IBM research as noted in Charles Q. Choi’s October 23, 2023 article for IEEE Spectrum magazine (IEEE is the Institute of Electrical and Electronics Engineers), Note: Links have been removed,

A brain-inspired chip from IBM, dubbed NorthPole, is more than 20 times as fast as—and roughly 25 times as energy efficient as—any microchip currently on the market when it comes to artificial intelligence tasks. According to a study from IBM, applications for the new silicon chip may include autonomous vehicles and robotics.

Brain-inspired computer hardware aims to mimic a human brain’s exceptional ability to rapidly perform computations in an extraordinarily energy-efficient manner. These machines are often used to implement neural networks, which similarly imitate the way a brain learns and operates.

“The brain is vastly more energy-efficient than modern computers, in part because it stores memory with compute in every neuron,” says study lead author Dharmendra Modha, IBM’s chief scientist for brain-inspired computing.

“NorthPole merges the boundaries between brain-inspired computing and silicon-optimized computing, between compute and memory, between hardware and software,” Modha says.

The scientists note that IBM fabricated NorthPole with a 12-nm node process. The current state of the art for CPUs is 3 nm, and IBM has spent years researching 2-nm nodes. This suggests further gains with this brain-inspired strategy may prove readily available, the company says.

The NorthPole chip is preceded by another IBM brain-inspired chip, TrueNorth. (Use the term “TrueNorth” in the blog search engine, if you want to see more about that and other brain-inspired chips.)

Choi’s October 23, 2023 article features technical information, but a surprising amount of it is accessible to an interested reader who’s not an engineer.

There’s a video, which seems to have been produced by IBM,

Here’s a link to and a citation for the paper,

Neural inference at the frontier of energy, space, and time by Dharmendra S. Modha, Filipp Akopyan, Alexander Andreopoulos, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Michael V. DeBole, Steven K. Esser, Carlos Ortega Otero, Jun Sawada, Brian Taba, Arnon Amir, Deepika Bablani, Peter J. Carlson, Myron D. Flickner, Rajamohan Gandhasri, Guillaume J. Garreau, Megumi Ito, Jennifer L. Klamo, Jeffrey A. Kusnitz, Nathaniel J. McClatchey, Jeffrey L. McKinstry, Yutaka Nakamura, Tapan K. Nayak, William P. Risk, Kai Schleupen, Ben Shaw, Jay Sivagnaname, Daniel F. Smith, Ignacio Terrizzano, and Takanori Ueda. Science 19 Oct 2023 Vol 382, Issue 6668 pp. 329-335 DOI: 10.1126/science.adh1174

This paper is behind a paywall.

Brain-inspired computer with optimized neural networks

Caption: Left to right: The experiment was performed on a prototype of the BrainScaleS-2 chip; Schematic representation of a neural network; Results for simple and complex tasks. Credit: Heidelberg University

I don’t often stumble across research from the European Union’s flagship Human Brain Project. So, this is a delightful occurrence, especially given my interest in neuromorphic computing. From a July 22, 2020 Human Brain Project press release (also on EurekAlert),

Many computational properties are maximized when the dynamics of a network are at a “critical point”, a state where systems can quickly change their overall characteristics in fundamental ways, transitioning e.g. between order and chaos or stability and instability. Therefore, the critical state is widely assumed to be optimal for any computation in recurrent neural networks, which are used in many AI [artificial intelligence] applications.

Researchers from the HBP [Human Brain Project] partner Heidelberg University and the Max-Planck-Institute for Dynamics and Self-Organization challenged this assumption by testing the performance of a spiking recurrent neural network on a set of tasks with varying complexity at – and away from – critical dynamics. They instantiated the network on a prototype of the analog neuromorphic BrainScaleS-2 system. BrainScaleS is a state-of-the-art brain-inspired computing system with synaptic plasticity implemented directly on the chip. It is one of two neuromorphic systems currently under development within the European Human Brain Project.

First, the researchers showed that the distance to criticality can be easily adjusted in the chip by changing the input strength, and then demonstrated a clear relation between criticality and task-performance. The assumption that criticality is beneficial for every task was not confirmed: whereas the information-theoretic measures all showed that network capacity was maximal at criticality, only the complex, memory intensive tasks profited from it, while simple tasks actually suffered. The study thus provides a more precise understanding of how the collective network state should be tuned to different task requirements for optimal performance.

Mechanistically, the optimal working point for each task can be set very easily under homeostatic plasticity by adapting the mean input strength. The theory behind this mechanism was developed very recently at the Max Planck Institute. “Putting it to work on neuromorphic hardware shows that these plasticity rules are very capable in tuning network dynamics to varying distances from criticality”, says senior author Viola Priesemann, group leader at MPIDS. Thereby tasks of varying complexity can be solved optimally within that space.

The finding may also explain why biological neural networks operate not necessarily at criticality, but in the dynamically rich vicinity of a critical point, where they can tune their computation properties to task requirements. Furthermore, it establishes neuromorphic hardware as a fast and scalable avenue to explore the impact of biological plasticity rules on neural computation and network dynamics.

“As a next step, we now study and characterize the impact of the spiking network’s working point on classifying artificial and real-world spoken words”, says first author Benjamin Cramer of Heidelberg University.
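The tuning mechanism described in the press release can be illustrated with a toy model. The sketch below is my own simplification (a driven branching process in plain Python, not the BrainScaleS-2 hardware or the actual experiment): a homeostatic rule adjusts the recurrent coupling m so that activity stays at a target rate, and the external input strength h then determines how close the network settles to the critical point m = 1,

```python
import numpy as np

rng = np.random.default_rng(0)

def homeostatic_branching(h, target_rate=100.0, eta=1e-4, steps=100_000):
    """Toy driven branching process with a homeostatic rule on the
    recurrent coupling m. The stationary rate is h / (1 - m), so the rule
    settles near m = 1 - h / target_rate: weak input -> close to the
    critical point m = 1, strong input -> far from it."""
    m, a = 0.0, 0.0
    for _ in range(steps):
        a = rng.poisson(m * a + h)                    # recurrent + external drive
        m += eta * (target_rate - a) / target_rate    # homeostatic adjustment
        m = min(max(m, 0.0), 0.999)                   # keep the process stable
    return m

for h in (5.0, 20.0, 80.0):   # external input strength (placeholder values)
    m = homeostatic_branching(h)
    print(f"input h = {h:5.1f}  ->  branching parameter m ~ {m:.3f} "
          f"(distance to criticality ~ {1 - m:.3f})")
```

With weak input the coupling converges close to the critical value of 1; with strong input it settles well below it, which is the “distance to criticality can be easily adjusted … by changing the input strength” effect described above.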

Here’s a link to and a citation for the paper,

Control of criticality and computation in spiking neuromorphic networks with plasticity by Benjamin Cramer, David Stöckel, Markus Kreft, Michael Wibral, Johannes Schemmel, Karlheinz Meier & Viola Priesemann. Nature Communications volume 11, Article number: 2853 (2020) DOI: https://doi.org/10.1038/s41467-020-16548-3 Published: 05 June 2020

This paper is open access.

Artificial intelligence (AI) consumes a lot of energy but tree-like memory may help conserve it

A simulation of a quantum material’s properties reveals its ability to learn numbers, a test of artificial intelligence. (Purdue University image/Shakti Wadekar)

A May 7, 2020 Purdue University news release (also on EurekAlert) describes a new approach for energy-efficient hardware in support of artificial intelligence (AI) systems,

To just solve a puzzle or play a game, artificial intelligence can require software running on thousands of computers. That could be the energy that three nuclear plants produce in one hour.

A team of engineers has created hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed for using AI in more advanced applications such as self-driving cars or discovering drugs.

“Software is taking on most of the challenges in AI. If you could incorporate intelligence into the circuit components in addition to what is happening in software, you could do things that simply cannot be done today,” said Shriram Ramanathan, a professor of materials engineering at Purdue University.

AI hardware development is still in early research stages. Researchers have demonstrated AI in pieces of potential hardware, but haven’t yet addressed AI’s large energy demand.

As AI penetrates more of daily life, a heavy reliance on software with massive energy needs is not sustainable, Ramanathan said. If hardware and software could share intelligence features, an area of silicon might be able to achieve more with a given input of energy.

Ramanathan’s team is the first to demonstrate artificial “tree-like” memory in a piece of potential hardware at room temperature. Researchers in the past have only been able to observe this kind of memory in hardware at temperatures that are too low for electronic devices.

The results of this study are published in the journal Nature Communications.

The hardware that Ramanathan’s team developed is made of a so-called quantum material. These materials are known for having properties that cannot be explained by classical physics. Ramanathan’s lab has been working to better understand these materials and how they might be used to solve problems in electronics.

Software uses tree-like memory to organize information into various “branches,” making that information easier to retrieve when learning new skills or tasks.

The strategy is inspired by how the human brain categorizes information and makes decisions.

“Humans memorize things in a tree structure of categories. We memorize ‘apple’ under the category of ‘fruit’ and ‘elephant’ under the category of ‘animal,’ for example,” said Hai-Tian Zhang, a Lillian Gilbreth postdoctoral fellow in Purdue’s College of Engineering. “Mimicking these features in hardware is potentially interesting for brain-inspired computing.”

The team introduced a proton to a quantum material called neodymium nickel oxide. They discovered that applying an electric pulse to the material moves around the proton. Each new position of the proton creates a different resistance state, which creates an information storage site called a memory state. Multiple electric pulses create a branch made up of memory states.

“We can build up many thousands of memory states in the material by taking advantage of quantum mechanical effects. The material stays the same. We are simply shuffling around protons,” Ramanathan said.

Through simulations of the properties discovered in this material, the team showed that the material is capable of learning the numbers 0 through 9. The ability to learn numbers is a baseline test of artificial intelligence.
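Here’s a rough way to picture that “tree-like” organization as a data structure rather than as device physics. This is an abstraction of my own with made-up resistance values: each pulse label moves the system to a new memory state with its own resistance, and different pulse sequences trace out different branches from a shared root,

```python
# Toy sketch of "tree-like" memory states (my own abstraction, not the
# device physics in the paper): each electrical pulse moves the proton to a
# new position, each position corresponds to a distinct resistance state,
# and a sequence of pulses therefore traces out a branch of stored states.

class MemoryTree:
    def __init__(self, resistance):
        self.resistance = resistance   # resistance of this state (ohms, made up)
        self.children = {}             # pulse label -> next memory state

    def pulse(self, label, new_resistance):
        """Apply a pulse; create the next state on this branch if unseen."""
        if label not in self.children:
            self.children[label] = MemoryTree(new_resistance)
        return self.children[label]

root = MemoryTree(resistance=1_000)
branch_a = root.pulse("+1 V", 1_400).pulse("+1 V", 2_100)   # one branch of states
branch_b = root.pulse("-1 V", 700).pulse("+1 V", 900)       # a second branch
print(branch_a.resistance, branch_b.resistance)             # 2100 900
```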

The demonstration of these trees at room temperature in a material is a step toward showing that hardware could offload tasks from software.

“This discovery opens up new frontiers for AI that have been largely ignored because implementing this kind of intelligence into electronic hardware didn’t exist,” Ramanathan said.

The material might also help create a way for humans to more naturally communicate with AI.

“Protons also are natural information transporters in human beings. A device enabled by proton transport may be a key component for eventually achieving direct communication with organisms, such as through a brain implant,” Zhang said.

Here’s a link to and a citation for the published study,

Perovskite neural trees by Hai-Tian Zhang, Tae Joon Park, Shriram Ramanathan. Nature Communications volume 11, Article number: 2245 (2020) DOI: https://doi.org/10.1038/s41467-020-16105-y Published: 07 May 2020

This paper is open access.

Brain-inspired electronics with organic memristors for wearable computing

I went down a rabbit hole while trying to figure out the difference between ‘organic’ memristors and standard memristors. I have put the results of my investigation at the end of this post. First, there’s the news.

An April 21, 2020 news item on ScienceDaily explains why researchers are so focused on memristors and brainlike computing,

The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics and bring forth the fourth Industrial Revolution. The pressing question for many researchers is how to handle this technological revolution.

“It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets,” said Thirumalai Venkatesan, one of the authors of a paper published in Applied Physics Reviews, from AIP Publishing.

“Today’s computing is way too energy-intensive to handle big data. We need to rethink our approaches to computation on all levels: materials, devices and architecture that can enable ultralow energy computing.”

An April 21, 2020 American Institute of Physics (AIP) news release (also on EurekAlert), which originated the news item, describes the authors’ approach to the problems with organic memristors,

Brain-inspired electronics with organic memristors could offer a functionally promising and cost-effective platform, according to Venkatesan. Memristive devices are electronic devices with an inherent memory that are capable of both storing data and performing computation. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.
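A generic illustration of why devices that both store and compute are attractive (this is the standard memristor-crossbar picture, not a description of the organic devices in the paper): if synaptic weights are stored as conductances G, then applying input voltages V to the rows of a crossbar produces output currents I = G^T V in a single physical step, via Ohm’s and Kirchhoff’s laws,

```python
import numpy as np

# Minimal sketch of in-memory computing with a memristor crossbar: the stored
# conductances are the weights, and the readout currents are the dot products.
G = np.array([[1.0, 0.2],      # conductances in arbitrary units;
              [0.5, 0.8],      # each column is one output "neuron"
              [0.1, 0.9]])
V = np.array([0.3, 0.0, 0.7])  # input voltages applied to the three rows

I = G.T @ V                    # column currents: the stored weights do the math
print(I)                       # [0.37 0.69]
```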

Until now, oxides have been the leading candidate as the optimum material for memristors. Different material systems have been proposed but none have been successful so far.

“Over the last 20 years, there have been several attempts to come up with organic memristors, but none of those have shown any promise,” said Sreetosh Goswami, lead author on the paper. “The primary reason behind this failure is their lack of stability, reproducibility and ambiguity in mechanistic understanding. At a device level, we are now able to solve most of these problems.”

This new generation of organic memristors is developed based on metal azo complex devices, which are the brainchild of Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science in Kolkata and another author on the paper.

“In thin films, the molecules are so robust and stable that these devices can eventually be the right choice for many wearable and implantable technologies or a body net, because these could be bendable and stretchable,” said Sreebrata Goswami. A body net is a series of wireless sensors that stick to the skin and track health.

The next challenge will be to produce these organic memristors at scale, said Venkatesan.

“Now we are making individual devices in the laboratory. We need to make circuits for large-scale functional implementation of these devices.”

Caption: The device structure at a molecular level. The gold nanoparticles on the bottom electrode enhance the field enabling an ultra-low energy operation of the molecular device. Credit: Sreetosh Goswami, Sreebrata Goswami and Thirumalai Venky Venkatesan

Here’s a link to and a citation for the paper,

An organic approach to low energy memory and brain inspired electronics by Sreetosh Goswami, Sreebrata Goswami, and T. Venkatesan. Applied Physics Reviews 7, 021303 (2020) DOI: https://doi.org/10.1063/1.5124155

This paper is open access.

Basics about memristors and organic memristors

This undated article on Nanowerk provides a relatively complete and technical description of memristors in general (Note: A link has been removed),

A memristor (named as a portmanteau of memory and resistor) is a non-volatile electronic memory device that was first theorized by Leon Ong Chua in 1971 as the fourth fundamental two-terminal circuit element following the resistor, the capacitor, and the inductor (IEEE Transactions on Circuit Theory, “Memristor-The missing circuit element”).

Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function). Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if the device loses power.

However, it was only almost 40 years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behavior. …
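For readers who want a feel for what “resistance that can be programmed and then remembered” looks like, here’s a minimal sketch of the linear ion-drift model commonly used to describe the 2008 HP device. The parameter values are illustrative, not taken from any specific device,

```python
import numpy as np

# Minimal sketch of the linear ion-drift model HP Labs used to describe its
# 2008 TiO2 memristor (textbook form; parameter values are illustrative).
R_ON, R_OFF = 100.0, 16_000.0   # resistance when fully doped / undoped (ohm)
D = 10e-9                       # device thickness (m)
MU_V = 1e-14                    # dopant mobility (m^2 V^-1 s^-1)

def simulate(voltages, dt=1e-4, x0=0.1):
    """Drive the memristor with a voltage waveform and return the current trace.
    x is the normalized width of the doped region (0..1)."""
    x, currents = x0, []
    for v in voltages:
        resistance = R_ON * x + R_OFF * (1.0 - x)   # state-dependent resistance
        i = v / resistance
        x += (MU_V * R_ON / D**2) * i * dt          # state drifts with the charge
        x = min(max(x, 0.0), 1.0)                   # hard device boundaries
        currents.append(i)
    return np.array(currents)

t = np.arange(0.0, 2.0, 1e-4)
v = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz, 1 V sine drive
i = simulate(v)
# Plotting i against v gives the pinched hysteresis loop that is the
# signature of memristive behaviour; the state (and hence the resistance)
# is retained when the drive is removed.
```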

The article on Nanowerk includes an embedded video presentation on memristors given by Stanley Williams (also known as R. Stanley Williams).

Mention of an ‘organic’ memristor can be found in an October 31, 2017 article by Ryan Whitwam,

The memristor is composed of the transition metal ruthenium complexed with “azo-aromatic ligands.” [emphasis mine] The theoretical work enabling this material was performed at Yale, and the organic molecules were synthesized at the Indian Association for the Cultivation of Sciences. …

I highlighted ‘ligands’ because that appears to be the difference. However, there is more than one definition of ‘ligand’ on Wikipedia.

First, there’s the Ligand (biochemistry) entry (Note: Links have been removed),

In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. …

Then, there’s the Ligand entry,

In coordination chemistry, a ligand is an ion or molecule (functional group) that binds to a central metal atom to form a coordination complex …

Finally, there’s the Ligand (disambiguation) entry (Note: Links have been removed),

  • Ligand, an atom, ion, or functional group that donates one or more of its electrons through a coordinate covalent bond to one or more central atoms or ions
  • Ligand (biochemistry), a substance that binds to a protein
  • a ‘guest’ in host–guest chemistry

I did take a look at the paper and did not see any references to proteins or other biomolecules that I could recognize as such. My best guess is that the researchers describe their device as an ‘organic’ memristor in the coordination chemistry sense: the azo-aromatic ligands are carbon-based (organic) molecules bound to a metal centre, rather than biomolecules. Any remaining confusion may reflect shortcomings in the definitions I have found, or in my reading of the paper, rather than an error on their parts.

Hopefully, more research will be forthcoming and it will be possible to better understand the terminology.

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only chose one of them to be updated at each step based on the neuronal activity.”
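Here’s my reading of that scheme as a short sketch, with a simple round-robin counter standing in for the arbitration the team actually used: the synaptic weight is the sum of several device conductances, each programming event touches only one of them, and the noisy, bounded updates mimic the non-ideal behaviour of real nanoscale devices,

```python
import numpy as np

# Minimal sketch of a multi-memristive synapse (my interpretation of the idea,
# not the paper's exact implementation): N devices jointly represent one
# synaptic weight, but each update programs only a single device, which helps
# average out device-level noise and non-linearity.

class MultiMemristiveSynapse:
    def __init__(self, n_devices=4, seed=0):
        self.g = np.zeros(n_devices)            # conductances of the N devices
        self.counter = 0                        # selects which device to program
        self.rng = np.random.default_rng(seed)

    @property
    def weight(self):
        return self.g.sum()                     # effective synaptic weight

    def update(self, delta):
        """Apply a weight update to a single device; the applied change is
        noisy and bounded, mimicking a real nanoscale device."""
        i = self.counter % len(self.g)
        applied = delta * (1.0 + 0.3 * self.rng.standard_normal())   # device noise
        self.g[i] = np.clip(self.g[i] + applied, 0.0, 1.0)           # bounded conductance
        self.counter += 1                       # round-robin device selection

syn = MultiMemristiveSynapse()
for _ in range(20):
    syn.update(+0.05)                           # repeated potentiation events
print(f"effective weight after 20 updates: {syn.weight:.2f}")
```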

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also they’ve got a couple of very nice introductory paragraphs which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games [1]. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms [2,3,4,5]. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history [6,7,8,9]. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time are currently out of reach.” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million of simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”
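For a sense of what simulators such as NEST and SpiNNaker are integrating under the hood, here’s a minimal leaky integrate-and-fire neuron in plain Python. It’s a generic textbook model, not the full-scale cortical microcircuit used in the study,

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the kind of unit that
# spiking-network simulators integrate by the million; a textbook sketch,
# not the cortical microcircuit model benchmarked in the paper.

def lif(input_current, dt=0.1, tau_m=20.0, v_rest=-65.0, v_thresh=-50.0,
        v_reset=-65.0, r_m=10.0):
    """Integrate dV/dt = (-(V - v_rest) + R*I) / tau_m and return spike times.
    Units: time in ms, voltage in mV, resistance in megohm, current in nA."""
    v, spikes = v_rest, []
    for step, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m   # leaky integration
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset the membrane after the spike
    return spikes

current = np.full(10_000, 2.0)     # 1 s of constant 2 nA drive (0.1 ms steps)
print(lif(current)[:5])            # first few spike times, in ms
```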

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry, Note: Links have been removed,

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. doi: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

Organismic learning—learning to forget

This approach to mimicking the human brain differs from the memristor. (You can find several pieces about memristors here, including this August 24, 2017 post about a derivative, a neuristor.) The approach comes from scientists at Purdue University and employs a quantum material. From an Aug. 15, 2017 news item on phys.org,

A new computing technology called “organismoids” mimics some aspects of human thought by learning how to forget unimportant memories while retaining more vital ones.

“The human brain is capable of continuous lifelong learning,” said Kaushik Roy, Purdue University’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “And it does this partially by forgetting some information that is not critical. I learn slowly, but I keep forgetting other things along the way, so there is a graceful degradation in my accuracy of detecting things that are old. What we are trying to do is mimic that behavior of the brain to a certain extent, to create computers that not only learn new information but that also learn what to forget.”

The work was performed by researchers at Purdue, Rutgers University, the Massachusetts Institute of Technology, Brookhaven National Laboratory and Argonne National Laboratory.

Central to the research is a ceramic “quantum material” called samarium nickelate, which was used to create devices called organismoids, said Shriram Ramanathan, a Purdue professor of materials engineering.

A video describing the work has been produced,

An August 14, 2017 Purdue University news release by Emil Venere, which originated the news item, details the work,

“These devices possess certain characteristics of living beings and enable us to advance new learning algorithms that mimic some aspects of the human brain,” Roy said. “The results have far reaching implications for the fields of quantum materials as well as brain-inspired computing.”

When exposed to hydrogen gas, the material undergoes a massive resistance change, as its crystal lattice is “doped” by hydrogen atoms. The material is said to breathe, expanding when hydrogen is added and contracting when the hydrogen is removed.

“The main thing about the material is that when this breathes in hydrogen there is a spectacular quantum mechanical effect that allows the resistance to change by orders of magnitude,” Ramanathan said. “This is very unusual, and the effect is reversible because this dopant can be weakly attached to the lattice, so if you remove the hydrogen from the environment you can change the electrical resistance.”

When hydrogen is exposed to the material, it splits into a proton and an electron, and the electron attaches to the nickel, temporarily causing the material to become an insulator.

“Then, when the hydrogen comes out, this material becomes conducting again,” Ramanathan said. “What we show in this paper is the extent of conduction and insulation can be very carefully tuned.”

This changing conductance and the “decay of that conductance over time” is similar to a key animal behavior called habituation.

“Many animals, even organisms that don’t have a brain, possess this fundamental survival skill,” Roy said. “And that’s why we call this organismic behavior. If I see certain information on a regular basis, I get habituated, retaining memory of it. But if I haven’t seen such information over a long time, then it slowly starts decaying. So, the behavior of conductance going up and down in exponential fashion can be used to create a new computing model that will incrementally learn and at same time forget things in a proper way.”
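Here’s a toy version of that habituation behaviour, as an abstraction of my own rather than a model of the samarium nickelate device: each exposure to a stimulus nudges its stored trace upward, and all traces decay exponentially between exposures, so frequently seen items are retained while rarely seen ones are gradually forgotten,

```python
import numpy as np

# Toy sketch of habituation-style learning and forgetting (my own abstraction
# of the behaviour described above, not the device physics): traces decay
# exponentially over time and are reinforced whenever their item is seen.

def update_traces(traces, seen, dt=1.0, tau=50.0, gain=0.3):
    traces *= np.exp(-dt / tau)                       # everything slowly forgotten
    for item in seen:
        traces[item] += gain * (1.0 - traces[item])   # reinforce what was just seen
    return traces

traces = np.zeros(3)                    # memory traces for three items
for t in range(200):
    seen = {0} if t % 2 == 0 else set() # item 0 keeps appearing throughout
    if t < 50:
        seen.add(1)                     # item 1 only appears early on
    traces = update_traces(traces, seen)

# Item 0 stays strong, item 1 has largely faded, item 2 was never learned.
print(traces.round(3))
```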

The researchers have developed a “neural learning model” they have termed adaptive synaptic plasticity.

“This could be really important because it’s one of the first examples of using quantum materials directly for solving a major problem in neural learning,” Ramanathan said.

The researchers used the organismoids to implement the new model for synaptic plasticity.

“Using this effect we are able to model something that is a real problem in neuromorphic computing,” Roy said. “For example, if I have learned your facial features I can still go out and learn someone else’s features without really forgetting yours. However, this is difficult for computing models to do. When learning your features, they can forget the features of the original person, a problem called catastrophic forgetting.”

Neuromorphic computing is not intended to replace conventional general-purpose computer hardware, based on complementary metal-oxide-semiconductor transistors, or CMOS. Instead, it is expected to work in conjunction with CMOS-based computing. Whereas CMOS technology is especially adept at performing complex mathematical computations, neuromorphic computing might be able to perform roles such as facial recognition, reasoning and human-like decision making.

Roy’s team performed the research work on the plasticity model, and other collaborators concentrated on the physics of how to explain the process of doping-driven change in conductance central to the paper. The multidisciplinary team includes experts in materials, electrical engineering, physics, and algorithms.

“It’s not often that a materials science person can talk to a circuits person like professor Roy and come up with something meaningful,” Ramanathan said.

Organismoids might have applications in the emerging field of spintronics. Conventional computers use the presence and absence of an electric charge to represent ones and zeroes in a binary code needed to carry out computations. Spintronics, however, uses the “spin state” of electrons to represent ones and zeros.

It could bring circuits that resemble biological neurons and synapses in a compact design not possible with CMOS circuits. Whereas it would take many CMOS devices to mimic a neuron or synapse, it might take only a single spintronic device.

In future work, the researchers may demonstrate how to achieve habituation in an integrated circuit instead of exposing the material to hydrogen gas.

Here’s a link to and a citation for the paper,

Habituation based synaptic plasticity and organismic learning in a quantum perovskite by Fan Zuo, Priyadarshini Panda, Michele Kotiuga, Jiarui Li, Mingu Kang, Claudio Mazzoli, Hua Zhou, Andi Barbour, Stuart Wilkins, Badri Narayanan, Mathew Cherukara, Zhen Zhang, Subramanian K. R. S. Sankaranarayanan, Riccardo Comin, Karin M. Rabe, Kaushik Roy, & Shriram Ramanathan. Nature Communications 8, Article number: 240 (2017) doi:10.1038/s41467-017-00248-6 Published online: 14 August 2017

This paper is open access.