Tag Archives: U.S. Army Research Laboratory

IBM to build brain-inspired AI supercomputing system equal to 64 million neurons for US Air Force

This is the second IBM computer announcement I’ve stumbled onto within the last four weeks or so, which seems like a veritable deluge given that the last time I wrote about IBM’s computing efforts was in an Oct. 8, 2015 posting about carbon nanotubes. I believe that was, until now, my most recent posting about IBM and computers.

Moving on to the news, here’s more from a June 23, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today [June 23, 2017] announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.

A June 23, 2017 IBM news release, which originated the news item, describes the proposed collaboration, which is based on IBM’s TrueNorth brain-inspired chip architecture (see my Aug. 8, 2014 posting for more about TrueNorth),

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism” where multiple data sources can be run in parallel against the same neural network and “model parallelism” where independent neural networks form an ensemble that can be run in parallel on the same data.
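To make the “data parallelism” versus “model parallelism” distinction concrete, here’s a minimal Python sketch; the tiny classifiers and the majority vote are my own stand-ins for illustration, not anything from IBM’s software ecosystem.

```python
# Toy illustration of "data parallelism" vs. "model parallelism" as described
# in the news release. The networks here are hypothetical stand-ins, not TrueNorth code.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def classify(network, sample):
    """Placeholder for running one neural network on one piece of sensor data."""
    return network(sample)

# Data parallelism: many data sources, one shared network, run side by side.
def data_parallel(network, samples):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda s: classify(network, s), samples))

# Model parallelism: an ensemble of independent networks all scoring the same data,
# combined here by simple majority vote.
def model_parallel(networks, sample):
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda net: classify(net, sample), networks))
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    # Trivial "networks" that label a number as small or large.
    net_a = lambda x: "small" if x < 5 else "large"
    net_b = lambda x: "small" if x < 7 else "large"
    print(data_parallel(net_a, [1, 6, 9]))     # one model, three data sources
    print(model_parallel([net_a, net_b], 6))   # two models, one data source
```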

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

The system fits in a 4U-high (7”) space in a standard server rack, and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores, creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per Watt) – orders of magnitude lower speed and energy than a conventional computer running inference on the same neural network.
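The headline numbers check out with a little arithmetic: eight systems of 64 million neurons each do give 512 million neurons per rack, and 1,500 frames per second at 200 mW works out to 7,500 frames per second per watt, consistent with the “>7,000” figure.

```python
# Back-of-the-envelope check of the figures quoted above.
neurons_per_system = 64_000_000          # 64 chips x 1 million neurons per chip
systems_per_rack = 8
print(neurons_per_system * systems_per_rack)    # 512,000,000 neurons per rack

frames_per_second = 1500                 # CIFAR-100 throughput quoted for TrueNorth
power_watts = 0.2                        # 200 mW
print(frames_per_second / power_watts)          # 7,500 frames/s per watt (">7,000")
```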

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth Team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum.  Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.

There is an IBM video accompanying this news release, which seems more promotional than informational,

The IBM scientist featured in the video has a Dec. 19, 2016 posting on an IBM research blog which provides context for this collaboration with AFRL,

2016 was a big year for brain-inspired computing. My team and I proved in our paper “Convolutional networks for fast, energy-efficient neuromorphic computing” that the value of this breakthrough is that it can perform neural network inference at unprecedented ultra-low energy consumption. Simply stated, our TrueNorth chip’s non-von Neumann architecture mimics the brain’s neural architecture — giving it unprecedented efficiency and scalability over today’s computers.

The brain-inspired TrueNorth processor [is] a 70 mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4×4 configuration by exploiting TrueNorth’s native tiling.

For the scale-up systems we summarize our approach to the physical placement of neural networks to reduce intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government / corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.
TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications

TrueNorth, once loaded with a neural network model, can be used in real-time as a sensory streaming inference engine, performing rapid and accurate classifications while using minimal energy. TrueNorth’s 1 million neurons consume only 70 mW, which is like having a neurosynaptic supercomputer the size of a postage stamp that can run on a smartphone battery for a week.
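The “smartphone battery for a week” claim is also easy to sanity-check, if you assume a typical battery capacity of roughly 10 watt-hours (my assumption, not a figure from IBM): 10 Wh divided by 70 mW comes to about 143 hours, or roughly six days.

```python
# Rough check of the battery claim. The 10 Wh capacity is my assumption for a
# typical smartphone battery, not a number from IBM's blog post.
chip_power_watts = 0.07          # 70 mW for 1 million neurons
battery_capacity_wh = 10.0       # assumed smartphone battery capacity
hours = battery_capacity_wh / chip_power_watts
print(hours, hours / 24)         # ~143 hours, i.e. roughly six days
```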

Recently, in collaboration with Lawrence Livermore National Laboratory, U.S. Air Force Research Laboratory, and U.S. Army Research Laboratory, we published our fifth paper at IEEE’s prestigious Supercomputing 2016 conference that summarizes the results of the team’s 12.5-year journey (see the associated graphic) to unlock this value proposition. [keep scrolling for the graphic]

Applying the mind of a chip

Three of our partners, U.S. Army Research Lab, U.S. Air Force Research Lab and Lawrence Livermore National Lab, contributed sections to the Supercomputing paper, each showcasing a different TrueNorth system, as summarized by my colleagues Jun Sawada, Brian Taba, Pallab Datta, and Ben Shaw:

U.S. Army Research Lab (ARL) prototyped a computational offloading scheme to illustrate how TrueNorth’s low power profile enables computation at the point of data collection. Using the single-chip NS1e board and an Android tablet, ARL researchers created a demonstration system that allows visitors to their lab to hand write arithmetic expressions on the tablet, with handwriting streamed to the NS1e for character recognition, and recognized characters sent back to the tablet for arithmetic calculation.
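The division of labour ARL describes is worth spelling out: the low-power chip handles the pattern recognition at the point of data collection, while the tablet keeps the conventional “left-brain” work of parsing and evaluating the expression. Here’s a rough sketch of that split, with recognize_on_chip standing in for whatever interface the NS1e board actually exposes (the post doesn’t describe it).

```python
# Sketch of ARL's offloading split: perception on the low-power chip,
# symbolic arithmetic back on the tablet. recognize_on_chip is a hypothetical
# stand-in for the NS1e interface, which is not described in the post.

def recognize_on_chip(stroke_image):
    """Pretend character classifier: one recognized character per handwritten glyph."""
    lookup = {"glyph_3": "3", "glyph_plus": "+", "glyph_4": "4"}
    return lookup[stroke_image]

def tablet_side(glyphs):
    # Stream each glyph to the chip and collect the recognized characters...
    expression = "".join(recognize_on_chip(g) for g in glyphs)
    # ...then do the conventional symbol processing locally on the tablet.
    return expression, eval(expression, {"__builtins__": {}})

print(tablet_side(["glyph_3", "glyph_plus", "glyph_4"]))   # ('3+4', 7)
```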

Of course, the point here is not to make a handwriting calculator; it is to show how TrueNorth’s low power and real-time pattern recognition might be deployed at the point of data collection to reduce latency, complexity and transmission bandwidth, as well as back-end data storage requirements in distributed systems.

U.S. Air Force Research Lab (AFRL) contributed another prototype application utilizing a TrueNorth scale-out system to perform a data-parallel text extraction and recognition task. In this application, an image of a document is segmented into individual characters that are streamed to AFRL’s NS1e16 TrueNorth system for parallel character recognition. Classification results are then sent to an inference-based natural language model to reconstruct words and sentences. This system can process 16,000 characters per second! AFRL plans to implement the word and sentence inference algorithms on TrueNorth, as well.
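The AFRL prototype is the “data parallelism” case from the news release: the same character classifier replicated across chips, each handling a slice of the segmented document, with a language model cleaning up the output afterwards. A sketch of that flow, where the segmentation, classifier, and language model are all placeholders of my own:

```python
# Sketch of the AFRL flow: segment a document image into characters, classify
# them in parallel (data parallelism), then hand the stream to a language model.
# All three stages below are placeholders, not AFRL's actual components.
from concurrent.futures import ThreadPoolExecutor

def segment(document):
    """Placeholder segmentation: one 'character image' per character."""
    return list(document)

def classify_character(char_image):
    """Placeholder for per-chip character recognition."""
    return char_image.upper()

def language_model(chars):
    """Placeholder for the inference-based word/sentence reconstruction."""
    return "".join(chars)

def process_document(document):
    char_images = segment(document)
    with ThreadPoolExecutor() as pool:          # chips classifying in parallel
        recognized = list(pool.map(classify_character, char_images))
    return language_model(recognized)

print(process_document("air force research lab"))   # AIR FORCE RESEARCH LAB
```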

Lawrence Livermore National Lab (LLNL) has a 16-chip NS16e scale-up system to explore the potential of post-von Neumann computation through larger neural models and more complex algorithms, enabled by the native tiling characteristics of the TrueNorth chip. For the Supercomputing paper, they contributed a single-chip application performing in-situ process monitoring in an additive manufacturing process. LLNL trained a TrueNorth network to recognize seven classes related to track weld quality in welds produced by a selective laser melting machine. Real-time weld quality determination allows for closed-loop process improvement and immediate rejection of defective parts. This is one of several applications LLNL is developing to showcase TrueNorth as a scalable platform for low-power, real-time inference.

[downloaded from https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/] Courtesy: IBM

I gather this 2017 announcement is the latest milestone on the TrueNorth journey.

Photo-acoustic alarms for poison gas

Alexander Graham Bell discovered the photoacoustic effect which researchers at the US Army Research Laboratory are attempting to exploit for the purpose of sensing poison gases. From the Aug. 14, 2012 news item on ScienceDaily,

To warn of chemical attacks and help save lives, it’s vital to quickly determine if even trace levels of potentially deadly chemicals — such as the nerve gas sarin and other odorless, colorless agents — are present. U.S. Army researchers have developed a new chemical sensor that can simultaneously identify a potentially limitless number of agents, in real time.

The new system is based on a phenomenon known as the photoacoustic effect, which was discovered by Alexander Graham Bell, in which the absorption of light by materials generates characteristic acoustic waves. By using a laser and very sensitive microphones — in a technique called laser photoacoustic spectroscopy (LPAS) — vanishingly low concentrations of gases, at parts per billion or even parts per trillion levels, can be detected. The drawback of traditional LPAS systems, however, is that they can identify only one chemical at a time.

Here’s how the researchers dealt with the limitation of being able to identify only one chemical at a time (from the news item),

[Kristan Gurton, an experimental physicist at the U.S. Army Research Laboratory (ARL) in Adelphi, Md] “As I started looking into the chemical/biological detection problem, it became apparent that multiple LPAS absorption measurements — representing an ‘absorption spectrum’ — might provide the added information required in any detection and identification scheme.”

To create such a multi-wavelength LPAS system, Gurton, along with co-authors Melvin Felton and Richard Tober of the ARL, designed a sensor known as a photoacoustic cell. This hollow, cylindrical device holds the gas being sampled and contains microphones that can listen for the characteristic signal when light is applied to the sample.

In this experiment, the researchers used a specialized cell that allows different gases to flow through the device for testing. As the vapor of five nerve agent mimics was flowed in, three laser beams, each modulated at a different frequency in the acoustic range, were propagated through the cell.

“A portion of the laser power is absorbed, usually via molecular transitions, and this absorption results in localized heating of the gas,” Gurton explains. Molecular transitions occur when the electrons in a molecule are excited from one energy level to a higher energy level. “Since gas dissipates thermal energy fairly quickly, the modulated laser results in a rapid heat/cooling cycle that produces a faint acoustic wave,” which is picked up by the microphone. Each laser in the system will produce a single tone, so, for example, six laser sources have six possible tones. “Different agents will affect the relative ‘loudness’ of each tone,” he says, “so for one gas, some tones will be louder than others, and it is these differences that allow for species identification.”

The signals produced by each laser were separated using multiple “lock-in” amplifiers — which can extract signals from noisy environments — each tuned for a specific laser frequency. Then, by comparing the results to a database of absorption information for a range of chemical species, the system identified each of the five gases.
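Putting those two paragraphs together: each laser is modulated at its own acoustic frequency, a lock-in-style demodulation recovers how loudly the gas “sings” at each of those frequencies, and the resulting pattern of relative amplitudes is matched against a library of known spectra. Here’s a simplified numerical sketch of that idea; the modulation frequencies, amplitudes, and reference spectra are all invented for illustration.

```python
# Simplified sketch of multi-wavelength LPAS identification: recover the tone
# amplitude at each laser's modulation frequency (a bare-bones lock-in), then
# match the normalized amplitude pattern against a small reference database.
# All frequencies, amplitudes, and reference spectra are invented for illustration.
import numpy as np

fs = 10_000                                  # sample rate of the microphone signal, Hz
t = np.arange(0, 1.0, 1 / fs)
mod_freqs = [300.0, 450.0, 620.0]            # one modulation frequency per laser

def lock_in_amplitude(signal, freq):
    """Mix with sine/cosine references and average: amplitude of that one tone."""
    i = np.mean(signal * np.cos(2 * np.pi * freq * t))
    q = np.mean(signal * np.sin(2 * np.pi * freq * t))
    return 2 * np.hypot(i, q)

def spectrum(signal):
    amps = np.array([lock_in_amplitude(signal, f) for f in mod_freqs])
    return amps / np.linalg.norm(amps)       # identify by relative loudness only

# Reference "absorption spectra" (relative tone loudness) for two made-up agents.
database = {"agent_A": np.array([0.8, 0.5, 0.33]),
            "agent_B": np.array([0.2, 0.9, 0.4])}
database = {k: v / np.linalg.norm(v) for k, v in database.items()}

# Synthetic microphone signal: agent_A's tone pattern plus a little noise.
rng = np.random.default_rng(0)
mic = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip([0.8, 0.5, 0.33], mod_freqs))
mic += 0.05 * rng.standard_normal(t.size)

measured = spectrum(mic)
best = min(database, key=lambda k: np.linalg.norm(database[k] - measured))
print(best)                                  # -> agent_A
```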

Because it is optically based, the method allows for instant identification of agents, as long as the signal-to-noise ratio, which depends on both laser power and the concentration of the compound being measured, is sufficiently high, and the material in question is in the database.

But they still need to invent a device before they can take this process out of the laboratory,

Before a device based on the technique could be used in the field, Gurton says, a quantum cascade (QC) laser array with at least six “well-chosen” mid-infrared (MidIR) laser wavelengths would need to be available.

Here’s the citation for the article, which is behind a paywall,

Kristan P. Gurton, Melvin Felton, and Richard Tober. Selective real-time detection of gaseous nerve agent simulants using multiwavelength photoacoustics. Opt. Lett., 37, 3474-3476 (2012) [link]

There are more details in the ScienceDaily news item or you can check out the Aug. 14, 2012 (?) news release from the Optical Society of America.

I wonder what this research sounds like; unfortunately, they didn’t include any audio files with the news release from the Optical Society of America or the news item on ScienceDaily.