Save energy with neuromorphic (brainlike) hardware

It seems the appetite for computing power is bottomless, which presents a problem in a world where energy resources are increasingly constrained. A May 24, 2022 news item on ScienceDaily announces research into neuromorphic computing which hints that the energy efficiency long promised by the technology may be realized in the foreseeable future,

For the first time, TU Graz’s [Graz University of Technology; Austria] Institute of Theoretical Computer Science and Intel Labs have demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy on neuromorphic hardware than on non-neuromorphic hardware. The new research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to create chips that function much like the biological brain.

Rich Uhlig, managing director of Intel Labs, holds one of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

A May 24, 2022 Graz University of Technology (TU Graz) press release (also on EurekAlert), which originated the news item, delves further into the research (Note: Links have been removed),

The research was funded by The Human Brain Project (HBP), one of the largest research projects in the world, with more than 500 scientists and engineers across Europe studying the human brain. The results of the research are published in the paper “Memory for AI Applications in Spike-based Neuromorphic Hardware” [sic] (DOI: 10.1038/s42256-022-00480-w) in Nature Machine Intelligence.

Human brain as a role model

Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more energy than an average energy-saving light bulb.

In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.

Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware

“Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.

“Intel’s Loihi research chips promise to bring gains in AI, especially by lowering their high energy cost,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “Our work with TU Graz provides more evidence that neuromorphic technology can improve the energy efficiency of today’s deep learning workloads by re-thinking their implementation from the perspective of biology.”

Mimicking human short-term memory

In their neuromorphic network, the group reproduced a presumed memory mechanism of the brain, as Wolfgang Maass, Philipp Plank’s doctoral supervisor at the Institute of Theoretical Computer Science, explains: “Experimental studies have shown that the human brain can store information for a short period of time even without neural activity, namely in so-called ‘internal variables’ of neurons. Simulations suggest that a fatigue mechanism of a subset of neurons is essential for this short-term memory.”

Direct proof is lacking because these internal variables cannot yet be measured, but the implication is that the network only needs to test which neurons are currently fatigued in order to reconstruct what information it has previously processed. In other words, previous information is stored in the non-activity of neurons, and non-activity consumes the least energy.
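To make the idea concrete, here is a minimal sketch (in Python) of a leaky integrate-and-fire neuron with such a “fatigue” variable. This is an illustration of the general mechanism only; the neuron model, constants, and variable names are assumptions chosen for clarity, not the ones used in the paper.

```python
# Minimal sketch (not the paper's actual model or constants) of a leaky
# integrate-and-fire neuron with an adaptive "fatigue" threshold: the
# slow variable `a` plays the role of the internal variable described above.

dt        = 1.0    # time step (ms)
tau_v     = 20.0   # membrane time constant (ms) -- fast
tau_a     = 800.0  # fatigue time constant (ms) -- slow, so the threshold
                   # "remembers" spikes long after the membrane potential
                   # has decayed back to rest
v_thresh0 = 1.0    # baseline firing threshold
beta      = 0.5    # how strongly fatigue raises the threshold

v, a = 0.0, 0.0    # membrane potential and fatigue variable
for input_current in [1.5] * 50 + [0.0] * 200:   # stimulus, then silence
    v += (dt / tau_v) * (-v + input_current)     # leaky integration
    a -= (dt / tau_a) * a                        # fatigue decays slowly
    if v >= v_thresh0 + beta * a:                # effective threshold
        v = 0.0                                  # reset after a spike
        a += 1.0                                 # each spike adds fatigue

# After 200 ms of silence the neuron is inactive (v is near zero), yet `a`
# still encodes that it fired recently -- information held without spikes.
print(f"fatigue remaining after silence: {a:.2f}")
```

Because `a` decays much more slowly than the membrane potential, reading out which neurons are still fatigued recovers recent history even though no energy-hungry spiking is taking place.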

Symbiosis of recurrent and feed-forward networks

The researchers link two types of deep learning networks for this purpose. Feedback neural networks are responsible for “short-term memory.” Many such so-called recurrent modules filter out possibly relevant information from the input signal and store it. A feed-forward network then determines which of the relationships found are important for solving the task at hand. Meaningless relationships are screened out, and neurons fire only in those modules where relevant information has been found. This process ultimately leads to energy savings.
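As a rough illustration of that two-stage data flow, here is a conventional (non-spiking) sketch with random, untrained weights. All sizes, weight values, and the thresholding step are assumptions made for the example; the actual system uses spiking recurrent modules running on Loihi chips.

```python
import numpy as np

# Illustrative sketch only: a recurrent module that accumulates a sequence
# into a state ("short-term memory"), followed by a feed-forward readout.
rng = np.random.default_rng(0)
n_in, n_rec, n_out = 16, 64, 4                 # toy sizes, not the paper's

W_in  = rng.normal(0.0, 0.3, (n_rec, n_in))    # input -> recurrent module
W_rec = rng.normal(0.0, 0.1, (n_rec, n_rec))   # recurrent connections
W_out = rng.normal(0.0, 0.3, (n_out, n_rec))   # feed-forward readout

def answer(sequence):
    h = np.zeros(n_rec)
    for x in sequence:                     # tokens arrive one at a time
        h = np.tanh(W_in @ x + W_rec @ h)  # state stores the story so far
    # Keep only strongly expressed features; the rest are "screened out"
    # (stay at zero), loosely mirroring modules that remain silent.
    relevant = np.maximum(h - 0.5, 0.0)
    return W_out @ relevant                # feed-forward answer

story = [rng.normal(size=n_in) for _ in range(10)]
print(answer(story))
```

The thresholding step is where the energy argument enters: features that never cross the threshold correspond to modules whose neurons never fire, and silent neurons cost almost nothing on neuromorphic hardware.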

“Recurrent neural structures are expected to provide the greatest gains for applications running on neuromorphic hardware in the future,” said Davies. “Neuromorphic hardware like Loihi is uniquely suited to facilitate the fast, sparse and unpredictable patterns of network activity that we observe in the brain and need for the most energy efficient AI applications.”

This research was financially supported by Intel and the European Human Brain Project, which connects neuroscience, medicine, and brain-inspired technologies in the EU. For this purpose, the project is creating a permanent digital research infrastructure, EBRAINS. This research work is anchored in the Fields of Expertise “Human & Biotechnology” and “Information, Communication & Computing,” two of the five Fields of Expertise of TU Graz.

Here’s a link to and a citation for the paper,

A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware by Arjun Rao, Philipp Plank, Andreas Wild & Wolfgang Maass. Nature Machine Intelligence (2022) DOI: https://doi.org/10.1038/s42256-022-00480-w Published: 19 May 2022

This paper is behind a paywall.

For anyone interested in the EBRAINS project, here’s a description from their About page,

EBRAINS provides digital tools and services which can be used to address challenges in brain research and brain-inspired technology development. Its components are designed with, by, and for researchers. The tools assist scientists to collect, analyse, share, and integrate brain data, and to perform modelling and simulation of brain function.

EBRAINS’ goal is to accelerate the effort to understand human brain function and disease.

This EBRAINS research infrastructure is the entry point for researchers to discover EBRAINS services. The services are being developed and powered by the EU-funded Human Brain Project.

You can register to use the EBRAINS research infrastructure HERE.

One last note: the Human Brain Project is a major European Union (EU)-funded science initiative (1B Euros) announced in 2013, with funding to be paid out over 10 years.