NorthPole: a brain-inspired chip design for saving energy

One of the main attractions of brain-inspired computing is that it requires less energy than is used in conventional computing. The latest entry into the brain-inspired computing stakes was announced in an October 19, 2023 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Researchers present NorthPole – a brain-inspired chip architecture that blends computation with memory to process data efficiently at low-energy costs. Since its inception, computing has been processor-centric, with memory separated from compute. However, shuttling large amounts of data between memory and compute comes at a high price in terms of both energy consumption and processing bandwidth and speed. This is particularly evident in the case of emerging and advanced real-time artificial intelligence (AI) applications like facial recognition, object detection, and behavior monitoring, which require fast access to vast amounts of data. As a result, most contemporary computer architectures are rapidly reaching physical and processing bottlenecks and risk becoming economically, technically, and environmentally unsustainable, given the growing energy costs involved. Inspired by the neural architecture of the organic brain, Dharmendra Modha and colleagues developed NorthPole – a neural inference architecture that intertwines compute with memory on a single chip. According to the authors, NorthPole “reimagines the interaction between compute and memory” by blending brain-inspired computing and semiconductor technology. It achieves higher performance, energy-efficiency, and area-efficiency compared to other comparable architectures, including those that use more advanced technology processes. And, because NorthPole is a digital system, it is not subject to the device noise and systemic biases and drifts that afflict analog systems. Modha et al. demonstrate NorthPole’s capabilities by testing it on the ResNet50 benchmark image classification network, where it achieved 25 times higher energy metric of frames per second (FPS) per watt, a 5 times higher space metric of FPS per transistor, and a 22 times lower time metric of latency relative to comparable technology. In a related Perspective, Subramanian Iyer and Vwani Roychowdhury discuss NorthPole’s advancements and limitations in greater detail.
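The energy and space metrics quoted above are simple ratios: frames per second (FPS) divided by watts, and FPS divided by transistor count. As a rough illustration only, here is a minimal Python sketch, assuming PyTorch and torchvision are installed, of how one might time ResNet-50 inference on ordinary hardware to get an FPS figure and convert it to FPS per watt; the 50-watt power number is a made-up placeholder, and this is not the benchmark harness the paper used.

# Minimal sketch (not the paper's benchmark setup) of measuring ResNet-50
# throughput in frames per second and deriving an FPS-per-watt metric.
import time
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()     # untrained weights are fine for timing
batch = torch.randn(16, 3, 224, 224)      # 16 RGB frames at ResNet-50's input size

with torch.no_grad():
    model(batch)                          # warm-up pass so setup costs don't skew timing

    n_batches = 20
    start = time.perf_counter()
    for _ in range(n_batches):
        model(batch)
    elapsed = time.perf_counter() - start

fps = (n_batches * batch.shape[0]) / elapsed
assumed_power_watts = 50.0                # hypothetical board power, for illustration only
print(f"throughput: {fps:.1f} FPS")
print(f"energy metric: {fps / assumed_power_watts:.2f} FPS per watt (with assumed power)")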

By the way, the NorthPole chip is the result of IBM research, as noted in Charles Q. Choi’s October 23, 2023 article for IEEE Spectrum magazine (IEEE is the Institute of Electrical and Electronics Engineers). Note: Links have been removed,

A brain-inspired chip from IBM, dubbed NorthPole, is more than 20 times as fast as—and roughly 25 times as energy efficient as—any microchip currently on the market when it comes to artificial intelligence tasks. According to a study from IBM, applications for the new silicon chip may include autonomous vehicles and robotics.

Brain-inspired computer hardware aims to mimic a human brain’s exceptional ability to rapidly perform computations in an extraordinarily energy-efficient manner. These machines are often used to implement neural networks, which similarly imitate the way a brain learns and operates.

“The brain is vastly more energy-efficient than modern computers, in part because it stores memory with compute in every neuron,” says study lead author Dharmendra Modha, IBM’s chief scientist for brain-inspired computing.

“NorthPole merges the boundaries between brain-inspired computing and silicon-optimized computing, between compute and memory, between hardware and software,” Modha says.

The scientists note that IBM fabricated NorthPole with a 12-nm node process. The current state of the art for CPUs is 3 nm, and IBM has spent years researching 2-nm nodes. This suggests further gains with this brain-inspired strategy may prove readily available, the company says.

The NorthPole chip was preceded by another IBM brain-inspired chip, TrueNorth. (Search for “TrueNorth” in the blog search engine if you want to see more about that and other brain-inspired chips.)

Choi’s October 23, 2023 article features technical information, but a surprising amount of it is accessible to an interested reader who’s not an engineer.

There’s a video, which seems to have been produced by IBM.

Here’s a link to and a citation for the paper,

Neural inference at the frontier of energy, space, and time by Dharmendra S. Modha, Filipp Akopyan, Alexander Andreopoulos, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, Michael V. DeBole, Steven K. Esser, Carlos Ortega Otero, Jun Sawada, Brian Taba, Arnon Amir, Deepika Bablani, Peter J. Carlson, Myron D. Flickner, Rajamohan Gandhasri, Guillaume J. Garreau, Megumi Ito, Jennifer L. Klamo, Jeffrey A. Kusnitz, Nathaniel J. McClatchey, Jeffrey L. McKinstry, Yutaka Nakamura, Tapan K. Nayak, William P. Risk, Kai Schleupen, Ben Shaw, Jay Sivagnaname, Daniel F. Smith, Ignacio Terrizzano, and Takanori Ueda. Science, 19 October 2023, Vol. 382, Issue 6668, pp. 329-335. DOI: 10.1126/science.adh1174

This paper is behind a paywall.
