Tag Archives: neuromorphic engineering

Paving the way for hardware neural networks?

I’m glad Imperial College London (ICL; UK) translated this research into something I can, more or less, understand, because the research team’s title for their paper would have left me ‘confuzzled’. Thank you for this November 20, 2017 ICL press release (also on EurekAlert) by Hayley Dunning,

Researchers have shown how to write any magnetic pattern desired onto nanowires, which could help computers mimic how the brain processes information.

Much current computer hardware, such as hard drives, uses magnetic memory devices. These rely on magnetic states – the direction microscopic magnets are pointing – to encode and read information.

Exotic magnetic states – such as a point where three south poles meet – represent complex systems. These may act in a similar way to many complex systems found in nature, such as the way our brains process information.

Computing systems that are designed to process information in similar ways to our brains are known as ‘neural networks’. There are already powerful software-based neural networks – for example one recently beat the human champion at the game ‘Go’ – but their efficiency is limited as they run on conventional computer hardware.

Now, researchers from Imperial College London have devised a method for writing magnetic information in any pattern desired, using a very small magnetic probe called a magnetic force microscope.

With this new writing method, arrays of magnetic nanowires may be able to function as hardware neural networks – potentially more powerful and efficient than software-based approaches.

The team, from the Departments of Physics and Materials at Imperial, demonstrated their system by writing patterns that have never been seen before. They published their results today [November 20, 2017] in Nature Nanotechnology.

Interlocking hexagon patterns with complex magnetisation

‘Hexagonal artificial spin ice ground state’ – a pattern never demonstrated before. Coloured arrows show north or south polarisation

Dr Jack Gartside, first author from the Department of Physics, said: “With this new writing method, we open up research into ‘training’ these magnetic nanowires to solve useful problems. If successful, this will bring hardware neural networks a step closer to reality.”

As well as applications in computing, the method could be used to study fundamental aspects of complex systems, by creating magnetic states that are far from optimal (such as three south poles together) and seeing how the system responds.

Here’s a link to and a citation for the paper,

Realization of ground state in artificial kagome spin ice via topological defect-driven magnetic writing by Jack C. Gartside, Daan M. Arroo, David M. Burn, Victoria L. Bemmer, Andy Moskalenko, Lesley F. Cohen & Will R. Branford. Nature Nanotechnology (2017) doi:10.1038/s41565-017-0002-1 Published online: 20 November 2017

This paper is behind a paywall.

*Odd spacing eliminated and a properly embedded video added on February 6, 2018 at 18:16 hours PT.

Nano-neurons from a French-Japanese-US research team

This news about nano-neurons comes from a Nov. 8, 2017 news item on defenceweb.co.za,

Researchers from the Joint Physics Unit CNRS/Thales, the Nanosciences and Nanotechnologies Centre (CNRS/Université Paris Sud), in collaboration with American and Japanese researchers, have developed the world’s first artificial nano-neuron with the ability to recognise numbers spoken by different individuals. Just like the recent development of electronic synapses described in a Nature article, this electronic nano-neuron is a breakthrough in artificial intelligence and its potential applications.

A Sept. 19, 2017 Thales press release, which originated the news item, expands on the theme,

The latest artificial intelligence algorithms are able to recognise visual and vocal cues with high levels of performance. But running these programs on conventional computers uses 10,000 times more energy than the human brain. To reduce electricity consumption, a new type of computer is needed. It is inspired by the human brain and comprises vast numbers of miniaturised neurons and synapses. Until now, however, it had not been possible to produce a stable enough artificial nano-neuron that would process information reliably.

Today [Sept. 19, 2017, or July 27, 2017, when the paper was published in Nature?], for the first time, researchers have developed a nano-neuron with the ability to recognise numbers spoken by different individuals with 99.6% accuracy. This breakthrough relied on the use of an exceptionally stable magnetic oscillator. Each gyration of this nano-compass generates an electrical output, which effectively imitates the electrical impulses produced by biological neurons. In the next few years, these magnetic nano-neurons could be interconnected via artificial synapses, such as those recently developed, for real-time big data analytics and classification.
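The press release doesn’t explain the mechanics, but the Nature paper uses the oscillator in a ‘reservoir computing’ scheme: a single nonlinear dynamical node does the heavy lifting and only a simple linear readout is trained. Here’s a toy sketch of that idea; the noisy tones, random mask, and tanh nonlinearity are my stand-ins for spoken digits and the oscillator’s response, not the actual device physics,

```python
import numpy as np

rng = np.random.default_rng(0)

N_VIRTUAL = 24                        # "virtual neurons" from time-multiplexing one node
MASK = rng.uniform(-1, 1, N_VIRTUAL)  # fixed random input mask

def reservoir_features(waveform, feedback=0.6):
    """Drive one nonlinear node with a masked waveform and collect its states.

    tanh() stands in for the oscillator's saturating amplitude response;
    the feedback term gives the node the short-term memory a reservoir needs.
    """
    x = np.zeros(N_VIRTUAL)
    states = []
    for sample in waveform:
        x = np.tanh(sample * MASK + feedback * x)
        states.append(x)
    return np.mean(np.square(states), axis=0)  # average energy per virtual neuron

def make_waveform(pitch):
    """Toy stand-in for a spoken digit: a noisy tone at a class-specific pitch."""
    t = np.linspace(0, 1, 40)
    return np.sin(2 * np.pi * pitch * t) + 0.1 * rng.standard_normal(t.size)

# Two toy classes ("digits"), 20 noisy utterances each
X = np.array([reservoir_features(make_waveform(p))
              for p in [2.0] * 20 + [5.0] * 20])
y = np.array([-1.0] * 20 + [1.0] * 20)

# Only the linear readout is trained -- the hallmark of reservoir computing
w, *_ = np.linalg.lstsq(X, y, rcond=None)
accuracy = np.mean(np.sign(X @ w) == y)
```

The point of the design is in the last two lines: the ‘neuron’ itself is never trained, which is what makes an exotic physical device usable as the computing element.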

The project is a collaborative initiative between fundamental research laboratories and applied research partners. The long-term goal is to produce extremely energy-efficient miniaturised chips with the intelligence needed to learn from and adapt to the constantly changing and ambiguous situations of the real world. These electronic chips will have many practical applications, such as providing smart guidance to robots or autonomous vehicles, helping doctors with their diagnoses and improving medical prostheses. This project included researchers from the Joint Physics Unit CNRS/Thales, the AIST, the CNS-NIST, and the Nanosciences and Nanotechnologies Centre (CNRS/Université Paris-Sud).

About the CNRS
The French National Centre for Scientific Research is Europe’s largest public research institution. It produces knowledge for the benefit of society. With nearly 32,000 employees, a budget exceeding 3.2 billion euros in 2016, and offices throughout France, the CNRS is present in all scientific fields through its 1100 laboratories. With 21 Nobel laureates and 12 Fields Medal winners, the organization has a long tradition of excellence. It carries out research in mathematics, physics, information sciences and technologies, nuclear and particle physics, Earth sciences and astronomy, chemistry, biological sciences, the humanities and social sciences, engineering and the environment.

About the Université Paris-Saclay (France)
To meet global demand for higher education, research and innovation, 19 of France’s most renowned establishments have joined together to form the Université Paris-Saclay. The new university provides world-class teaching and research opportunities, from undergraduate courses to graduate schools and doctoral programmes, across most disciplines including life and natural sciences as well as social sciences. With 9,000 master’s students, 5,500 doctoral candidates, an equivalent number of engineering students and an extensive undergraduate population, some 65,000 people now study at member establishments.

About the Center for Nanoscale Science & Technology (Maryland, USA)
The CNST is a national user facility purposely designed to accelerate innovation in nanotechnology-based commerce. Its mission is to operate a national, shared resource for nanoscale fabrication and measurement and develop innovative nanoscale measurement and fabrication capabilities to support researchers from industry, academia, NIST and other government agencies in advancing nanoscale technology from discovery to production. The Center, located in the Advanced Measurement Laboratory Complex on NIST’s Gaithersburg, MD campus, disseminates new nanoscale measurement methods by incorporating them into facility operations, collaborating and partnering with others and providing international leadership in nanotechnology.

About the National Institute of Advanced Industrial Science and Technology (Japan)
The National Institute of Advanced Industrial Science and Technology (AIST), one of the largest public research institutes in Japan, focuses on the creation and practical realization of technologies useful to Japanese industry and society, and on bridging the gap between innovative technological seeds and commercialization. For this, AIST is organized into 7 domains (Energy and Environment, Life Science and Biotechnology, Information Technology and Human Factors, Materials and Chemistry, Electronics and Manufacturing, Geological Survey of Japan, and Metrology).

About the Centre for Nanoscience and Nanotechnology (France)
Established on 1 June 2016, the Centre for Nanosciences and Nanotechnologies (C2N) was launched in the wake of the joint CNRS and Université Paris-Sud decision to merge and gather on the same campus site the Laboratory for Photonics and Nanostructures (LPN) and the Institut d’Electronique Fondamentale (IEF). Its relocation to the École Polytechnique district of the Paris-Saclay campus will be completed in 2017, once the new C2N buildings now under construction are finished. The centre conducts research in material science, nanophotonics, nanoelectronics, nanobiotechnologies and microsystems, as well as in nanotechnologies.

There is a video featuring researcher Julie Grollier discussing the work, but you will need your French language skills,

(If you’re interested, there is an English-language video published on YouTube on Feb. 19, 2017 with Julie Grollier speaking more generally about the field of neuromorphic computing at the World Economic Forum: https://www.youtube.com/watch?v=Sm2BGkTYFeQ )

Here’s a link to and a citation for the team’s July 2017 paper,

Neuromorphic computing with nanoscale spintronic oscillators by Jacob Torrejon, Mathieu Riou, Flavio Abreu Araujo, Sumito Tsunegi, Guru Khalsa, Damien Querlioz, Paolo Bortolotti, Vincent Cros, Kay Yakushiji, Akio Fukushima, Hitoshi Kubota, Shinji Yuasa, Mark D. Stiles, & Julie Grollier. Nature 547, 428–431 (27 July 2017) doi:10.1038/nature23011 Published online 26 July 2017

This paper is behind a paywall.

Mott memristor

Mott memristors (mentioned in my Aug. 24, 2017 posting about neuristors and brainlike computing) get fuller treatment in an Oct. 9, 2017 posting by Samuel K. Moore on the Nanoclast blog (found on the IEEE [Institute of Electrical and Electronics Engineers] website). Note 1: Links have been removed; Note 2: I quite like Moore’s writing style but he’s not for the impatient reader,

When you’re really harried, you probably feel like your head is brimful of chaos. You’re pretty close. Neuroscientists say your brain operates in a regime termed the “edge of chaos,” and it’s actually a good thing. It’s a state that allows for fast, efficient analog computation of the kind that can solve problems that grow vastly more difficult as they become bigger in size.

The trouble is, if you’re trying to replicate that kind of chaotic computation with electronics, you need an element that both acts chaotically—how and when you want it to—and could scale up to form a big system.

“No one had been able to show chaotic dynamics in a single scalable electronic device,” says Suhas Kumar, a researcher at Hewlett Packard Labs, in Palo Alto, Calif. Until now, that is.

He, John Paul Strachan, and R. Stanley Williams recently reported in the journal Nature that a particular configuration of a certain type of memristor contains that seed of controlled chaos. What’s more, when they simulated wiring these up into a type of circuit called a Hopfield neural network, the circuit was capable of solving a ridiculously difficult problem—1,000 instances of the traveling salesman problem—at a rate of 10 trillion operations per second per watt.

(It’s not an apples-to-apples comparison, but the world’s most powerful supercomputer as of June 2017 managed 93,015 trillion floating point operations per second but consumed 15 megawatts doing it. So about 6 billion operations per second per watt.)
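Moore’s parenthetical checks out. Here’s the back-of-the-envelope arithmetic, using only the figures quoted above (the June 2017 TOP500 leader he alludes to would be China’s Sunway TaihuLight),

```python
# Figures from the quote above
supercomputer_flops = 93_015e12   # 93,015 trillion floating-point ops per second
supercomputer_power = 15e6        # 15 megawatts, expressed in watts

flops_per_watt = supercomputer_flops / supercomputer_power
print(f"{flops_per_watt:.2e}")    # ~6.2e9, i.e. "about 6 billion ops/s per watt"

# The simulated memristor Hopfield network claimed 10 trillion ops/s per watt
memristor_ops_per_watt = 10e12
print(memristor_ops_per_watt / flops_per_watt)  # a roughly 1,600-fold gap
```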

The device in question is called a Mott memristor. Memristors generally are devices that hold a memory, in the form of resistance, of the current that has flowed through them. The most familiar type is called resistive RAM (or ReRAM or RRAM, depending on who’s asking). Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance.

The HP Labs team made their memristor from an 8-nanometer-thick layer of niobium dioxide (NbO2) sandwiched between two layers of titanium nitride. The bottom titanium nitride layer was in the form of a 70-nanometer wide pillar. “We showed that this type of memristor can generate chaotic and nonchaotic signals,” says Williams, who invented the memristor based on theory by Leon Chua.

(The traveling salesman problem is one of these. In it, the salesman must find the shortest route that lets him visit all of his customers’ cities, without going through any of them twice. It’s a difficult problem because it becomes exponentially more difficult to solve with each city you add.)
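To make ‘exponentially more difficult’ concrete: the number of distinct round trips for n cities is (n−1)!/2, so trying every tour is hopeless beyond toy sizes. A minimal brute-force sketch,

```python
from itertools import permutations
from math import dist, factorial

def brute_force_tsp(cities):
    """Try every tour starting and ending at city 0 -- feasible only for tiny inputs."""
    start, *rest = range(len(cities))
    best_tour, best_length = None, float("inf")
    for perm in permutations(rest):
        tour = (start, *perm, start)
        length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
        if length < best_length:
            best_tour, best_length = tour, length
    return best_tour, best_length

# Four cities on the corners of a unit square: the optimal tour is the perimeter
cities = [(0, 0), (1, 0), (1, 1), (0, 1)]
tour, length = brute_force_tsp(cities)   # length comes out as 4.0

# The tour count for n cities is (n-1)!/2; at 10 cities that is already 181,440
tours_for_10 = factorial(10 - 1) // 2
```

That factorial blow-up is why an analog circuit that settles into good solutions directly, rather than enumerating them, is such an attractive prospect.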

Here’s what the niobium dioxide-based Mott memristor looks like,

Photo: Suhas Kumar/Hewlett Packard Labs
A micrograph shows the construction of a Mott memristor composed of an 8-nanometer-thick layer of niobium dioxide between two layers of titanium nitride.

Here’s a link to and a citation for the paper,

Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing by Suhas Kumar, John Paul Strachan & R. Stanley Williams. Nature 548, 318–321 (17 August 2017) doi:10.1038/nature23307 Published online: 09 August 2017

This paper is behind a paywall.

Organismic learning—learning to forget

This approach to mimicking the human brain differs from the memristor. (You can find several pieces about memristors here, including this August 24, 2017 posting about a derivative, a neuristor.) This approach comes from scientists at Purdue University and employs a quantum material. From an Aug. 15, 2017 news item on phys.org,

A new computing technology called “organismoids” mimics some aspects of human thought by learning how to forget unimportant memories while retaining more vital ones.

“The human brain is capable of continuous lifelong learning,” said Kaushik Roy, Purdue University’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “And it does this partially by forgetting some information that is not critical. I learn slowly, but I keep forgetting other things along the way, so there is a graceful degradation in my accuracy of detecting things that are old. What we are trying to do is mimic that behavior of the brain to a certain extent, to create computers that not only learn new information but that also learn what to forget.”

The work was performed by researchers at Purdue, Rutgers University, the Massachusetts Institute of Technology, Brookhaven National Laboratory and Argonne National Laboratory.

Central to the research is a ceramic “quantum material” called samarium nickelate, which was used to create devices called organismoids, said Shriram Ramanathan, a Purdue professor of materials engineering.

A video describing the work has been produced,

An August 14, 2017 Purdue University news release by Emil Venere, which originated the news item,  details the work,

“These devices possess certain characteristics of living beings and enable us to advance new learning algorithms that mimic some aspects of the human brain,” Roy said. “The results have far reaching implications for the fields of quantum materials as well as brain-inspired computing.”

When exposed to hydrogen gas, the material undergoes a massive resistance change, as its crystal lattice is “doped” by hydrogen atoms. The material is said to breathe, expanding when hydrogen is added and contracting when the hydrogen is removed.

“The main thing about the material is that when this breathes in hydrogen there is a spectacular quantum mechanical effect that allows the resistance to change by orders of magnitude,” Ramanathan said. “This is very unusual, and the effect is reversible because this dopant can be weakly attached to the lattice, so if you remove the hydrogen from the environment you can change the electrical resistance.”

When the material is exposed to hydrogen, the hydrogen splits into a proton and an electron, and the electron attaches to the nickel, temporarily causing the material to become an insulator.

“Then, when the hydrogen comes out, this material becomes conducting again,” Ramanathan said. “What we show in this paper is the extent of conduction and insulation can be very carefully tuned.”

This changing conductance and the “decay of that conductance over time” is similar to a key animal behavior called habituation.

“Many animals, even organisms that don’t have a brain, possess this fundamental survival skill,” Roy said. “And that’s why we call this organismic behavior. If I see certain information on a regular basis, I get habituated, retaining memory of it. But if I haven’t seen such information over a long time, then it slowly starts decaying. So, the behavior of conductance going up and down in exponential fashion can be used to create a new computing model that will incrementally learn and at the same time forget things in a proper way.”
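The behaviour Roy describes, conductance climbing with repeated exposure and decaying exponentially during quiet periods, is easy to caricature in a few lines. This is a toy model of my own, not the samarium nickelate device physics, and the rise and decay rates are arbitrary,

```python
import math

def habituate(stimulus, g=0.0, rise=0.3, decay=0.1):
    """Conductance-like state g rises toward 1 while a stimulus repeats
    and decays exponentially when the stimulus is absent.
    (A caricature of the behaviour described above, not the actual
    samarium nickelate device physics; the rates are made up.)
    """
    trace = []
    for present in stimulus:
        if present:
            g += rise * (1.0 - g)      # saturating rise on each exposure
        else:
            g *= math.exp(-decay)      # gradual, 'graceful' forgetting
        trace.append(g)
    return trace

# Ten repeated exposures, then thirty quiet steps
trace = habituate([1] * 10 + [0] * 30)
```

After the ten exposures the state is close to 1 (habituated); thirty quiet steps later it has decayed toward, but not instantly to, zero, which is the ‘graceful degradation’ the quote is after.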

The researchers have developed a “neural learning model” they have termed adaptive synaptic plasticity.

“This could be really important because it’s one of the first examples of using quantum materials directly for solving a major problem in neural learning,” Ramanathan said.

The researchers used the organismoids to implement the new model for synaptic plasticity.

“Using this effect we are able to model something that is a real problem in neuromorphic computing,” Roy said. “For example, if I have learned your facial features I can still go out and learn someone else’s features without really forgetting yours. However, this is difficult for computing models to do. When learning your features, they can forget the features of the original person, a problem called catastrophic forgetting.”

Neuromorphic computing is not intended to replace conventional general-purpose computer hardware, based on complementary metal-oxide-semiconductor transistors, or CMOS. Instead, it is expected to work in conjunction with CMOS-based computing. Whereas CMOS technology is especially adept at performing complex mathematical computations, neuromorphic computing might be able to perform roles such as facial recognition, reasoning and human-like decision making.

Roy’s team performed the research work on the plasticity model, and other collaborators concentrated on the physics of how to explain the process of doping-driven change in conductance central to the paper. The multidisciplinary team includes experts in materials, electrical engineering, physics, and algorithms.

“It’s not often that a materials science person can talk to a circuits person like professor Roy and come up with something meaningful,” Ramanathan said.

Organismoids might have applications in the emerging field of spintronics. Conventional computers use the presence and absence of an electric charge to represent ones and zeroes in a binary code needed to carry out computations. Spintronics, however, uses the “spin state” of electrons to represent ones and zeros.

It could bring circuits that resemble biological neurons and synapses in a compact design not possible with CMOS circuits. Whereas it would take many CMOS devices to mimic a neuron or synapse, it might take only a single spintronic device.

In future work, the researchers may demonstrate how to achieve habituation in an integrated circuit instead of exposing the material to hydrogen gas.

Here’s a link to and a citation for the paper,

Habituation based synaptic plasticity and organismic learning in a quantum perovskite by Fan Zuo, Priyadarshini Panda, Michele Kotiuga, Jiarui Li, Mingu Kang, Claudio Mazzoli, Hua Zhou, Andi Barbour, Stuart Wilkins, Badri Narayanan, Mathew Cherukara, Zhen Zhang, Subramanian K. R. S. Sankaranarayanan, Riccardo Comin, Karin M. Rabe, Kaushik Roy, & Shriram Ramanathan. Nature Communications 8, Article number: 240 (2017) doi:10.1038/s41467-017-00248-6 Published online: 14 August 2017

This paper is open access.

IBM to build brain-inspired AI supercomputing system equal to 64 million neurons for US Air Force

This is the second IBM computer announcement I’ve stumbled onto within the last four weeks or so, which seems like a veritable deluge given that the last time I wrote about IBM’s computing efforts was in an Oct. 8, 2015 posting about carbon nanotubes. I believe that was, until now, my most recent posting about IBM and computers.

Moving on to the news, here’s more from a June 23, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today [June 23, 2017] announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts.

A June 23, 2017 IBM news release, which originated the news item, describes the proposed collaboration, which is based on IBM’s TrueNorth brain-inspired chip architecture (see my Aug. 8, 2014 posting for more about TrueNorth),

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism” where multiple data sources can be run in parallel against the same neural network and “model parallelism” where independent neural networks form an ensemble that can be run in parallel on the same data.
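The ‘data parallelism’ versus ‘model parallelism’ distinction is easy to mix up, so here’s a minimal sketch. The tiny one-weight ‘networks’ are obviously my stand-ins for neural networks loaded onto TrueNorth chips,

```python
from concurrent.futures import ThreadPoolExecutor

def tiny_net(weight):
    """Stand-in for one neural network loaded onto a chip: a 1-weight classifier."""
    return lambda x: 1 if x * weight > 0 else 0

net = tiny_net(weight=1.0)                           # one trained network
ensemble = [tiny_net(w) for w in (0.5, 1.0, -2.0)]   # independently trained networks

sensor_feeds = [3.0, -1.5, 0.25, -4.0]               # multiple distributed sensors

with ThreadPoolExecutor() as pool:
    # Data parallelism: the SAME network run against many data sources at once
    data_parallel = list(pool.map(net, sensor_feeds))

    # Model parallelism: an ensemble of networks run against the SAME datum at once
    datum = 3.0
    model_parallel = list(pool.map(lambda f: f(datum), ensemble))
```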

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

The system fits in a 4U-high (7”) space in a standard server rack, and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy, while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per watt) – orders of magnitude lower speed and energy than a conventional computer running inference on the same neural network.
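That efficiency figure is just the two quoted numbers divided,

```python
frames_per_second = 1500
power_watts = 0.200                     # the quoted 200 mW, in watts

print(frames_per_second / power_watts)  # 7500.0 frames/s per watt, hence ">7,000"
```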

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth Team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum.  Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.

There is an IBM video accompanying this news release, which seems more promotional than informational,

The IBM scientist featured in the video has a Dec. 19, 2016 posting on an IBM research blog which provides context for this collaboration with AFRL,

2016 was a big year for brain-inspired computing. My team and I proved in our paper “Convolutional networks for fast, energy-efficient neuromorphic computing” that the value of this breakthrough is that it can perform neural network inference at unprecedented ultra-low energy consumption. Simply stated, our TrueNorth chip’s non-von Neumann architecture mimics the brain’s neural architecture — giving it unprecedented efficiency and scalability over today’s computers.

The brain-inspired TrueNorth processor [is] a 70 mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4,096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4×4 configuration by exploiting TrueNorth’s native tiling.

For the scale-up systems we summarize our approach to physical placement of neural networks, to reduce intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government / corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.
TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications

TrueNorth, once loaded with a neural network model, can be used in real-time as a sensory streaming inference engine, performing rapid and accurate classifications while using minimal energy. TrueNorth’s 1 million neurons consume only 70 mW, which is like having a neurosynaptic supercomputer the size of a postage stamp that can run on a smartphone battery for a week.
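The ‘smartphone battery for a week’ claim is consistent with the 70 mW figure. Note that the ~12 Wh battery capacity below is my assumption for a typical phone; the blog quote doesn’t give one,

```python
chip_power_watts = 0.070   # 70 mW for TrueNorth's 1 million neurons
battery_wh = 12.0          # assumed smartphone battery capacity (not in the source)

hours = battery_wh / chip_power_watts
print(hours / 24)          # roughly seven days on one charge
```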

Recently, in collaboration with Lawrence Livermore National Laboratory, U.S. Air Force Research Laboratory, and U.S. Army Research Laboratory, we published our fifth paper at IEEE’s prestigious Supercomputing 2016 conference that summarizes the results of the team’s 12.5-year journey (see the associated graphic) to unlock this value proposition. [keep scrolling for the graphic]

Applying the mind of a chip

Three of our partners, U.S. Army Research Lab, U.S. Air Force Research Lab and Lawrence Livermore National Lab, contributed sections to the Supercomputing paper, each showcasing a different TrueNorth system, as summarized by my colleagues Jun Sawada, Brian Taba, Pallab Datta, and Ben Shaw:

U.S. Army Research Lab (ARL) prototyped a computational offloading scheme to illustrate how TrueNorth’s low power profile enables computation at the point of data collection. Using the single-chip NS1e board and an Android tablet, ARL researchers created a demonstration system that allows visitors to their lab to hand write arithmetic expressions on the tablet, with handwriting streamed to the NS1e for character recognition, and recognized characters sent back to the tablet for arithmetic calculation.

Of course, the point here is not to make a handwriting calculator, it is to show how TrueNorth’s low power and real time pattern recognition might be deployed at the point of data collection to reduce latency, complexity and transmission bandwidth, as well as back-end data storage requirements in distributed systems.
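ARL’s demonstration splits the work exactly as described: pattern recognition offloaded to the low-power chip at the point of collection, symbolic arithmetic done conventionally. A toy version of that pipeline, where the stroke-to-character table stands in for the NS1e’s trained classifier,

```python
# Hypothetical stand-in for the NS1e's neural character classifier:
# it simply maps pre-segmented stroke IDs to recognized characters.
STROKE_TO_CHAR = {0: "1", 1: "2", 2: "+", 3: "7", 4: "*"}

def ns1e_recognize(strokes):
    """'Right-brain' step: pattern recognition at the point of data collection."""
    return "".join(STROKE_TO_CHAR[s] for s in strokes)

def tablet_calculate(expression):
    """'Left-brain' step: conventional symbolic arithmetic back on the tablet."""
    return eval(expression)  # fine for this toy; never eval untrusted input

strokes = [0, 1, 2, 3]                    # the visitor wrote "12+7" on the tablet
expression = ns1e_recognize(strokes)      # streamed to the NS1e, characters back
result = tablet_calculate(expression)     # tablet does the arithmetic
```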

U.S. Air Force Research Lab (AFRL) contributed another prototype application utilizing a TrueNorth scale-out system to perform a data-parallel text extraction and recognition task. In this application, an image of a document is segmented into individual characters that are streamed to AFRL’s NS1e16 TrueNorth system for parallel character recognition. Classification results are then sent to an inference-based natural language model to reconstruct words and sentences. This system can process 16,000 characters per second! AFRL plans to implement the word and sentence inference algorithms on TrueNorth, as well.

Lawrence Livermore National Lab (LLNL) has a 16-chip NS16e scale-up system to explore the potential of post-von Neumann computation through larger neural models and more complex algorithms, enabled by the native tiling characteristics of the TrueNorth chip. For the Supercomputing paper, they contributed a single-chip application performing in-situ process monitoring in an additive manufacturing process. LLNL trained a TrueNorth network to recognize seven classes related to track weld quality in welds produced by a selective laser melting machine. Real-time weld quality determination allows for closed-loop process improvement and immediate rejection of defective parts. This is one of several applications LLNL is developing to showcase TrueNorth as a scalable platform for low-power, real-time inference.

[downloaded from https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/] Courtesy: IBM

I gather this 2017 announcement is the latest milestone on the TrueNorth journey.

Self-learning neuromorphic chip

There aren’t many details about this chip and, so far as I can tell, this technology is not based on a memristor. From a May 16, 2017 news item on phys.org,

Today [May 16, 2017], at the imec technology forum (ITF2017), imec demonstrated the world’s first self-learning neuromorphic chip. The brain-inspired chip, based on OxRAM technology, has the capability of self-learning and has been demonstrated to have the ability to compose music.

Here’s a sample,

A May 16, 2017 imec press release, which originated the news item, expands on the theme,

The human brain is a dream for computer scientists: it has huge computing power while consuming only a few tens of watts. Imec researchers are combining state-of-the-art hardware and software to design chips that feature these desirable characteristics of a self-learning system. Imec’s ultimate goal is to design the process technology and building blocks to make artificial intelligence energy efficient so that it can be integrated into sensors. Such intelligent sensors will drive the internet of things forward. This would not only allow machine learning to be present in all sensors but also enable on-field learning to further improve performance.

By co-optimizing the hardware and the software, the chip features machine learning and intelligence characteristics on a small area, while consuming very little power. The chip is self-learning, meaning that it makes associations between what it has experienced and what it experiences. The more it experiences, the stronger the connections will be. The chip presented today has learned to compose new music, and the rules for the composition are learnt on the fly.

It is imec’s ultimate goal to further advance both hardware and software to achieve very low-power, high-performance, low-cost and highly miniaturized neuromorphic chips that can be applied in many domains ranging from personal health and energy to traffic management. For example, neuromorphic chips integrated into sensors for health monitoring could identify a particular heart-rate change that points to a heart abnormality, and would learn to recognize the slightly different ECG patterns that vary between individuals. Such neuromorphic chips would thus enable more customized and patient-centric monitoring.

“Because we have hardware, system design and software expertise under one roof, imec is ideally positioned to drive neuromorphic computing forward,” says Praveen Raghavan, distinguished member of the technical staff at imec. “Our chip has evolved from co-optimizing logic, memory, algorithms and system in a holistic way. This way, we succeeded in developing the building blocks for such a self-learning system.”

About ITF

The Imec Technology Forum (ITF) is imec’s series of internationally acclaimed events with a clear focus on the technologies that will drive groundbreaking innovation in healthcare, smart cities and mobility, ICT, logistics and manufacturing, and energy.

At ITF, some of the world’s greatest minds in technology take the stage. Their talks cover a wide range of domains – such as advanced chip scaling, smart imaging, sensor and communication systems, the IoT, supercomputing, sustainable energy and battery technology, and much more. As leading innovators in their fields, they also present early insights in market trends, evolutions, and breakthroughs in nanoelectronics and digital technology: What will be successful and what not, in five or even ten years from now? How will technology evolve, and how fast? And who can help you implement your technology roadmaps?

About imec

Imec is the world-leading research and innovation hub in nano-electronics and digital technologies. The combination of our widely-acclaimed leadership in microchip technology and profound software and ICT expertise is what makes us unique. By leveraging our world-class infrastructure and local and global ecosystem of partners across a multitude of industries, we create groundbreaking innovation in application domains such as healthcare, smart cities and mobility, logistics and manufacturing, and energy.

As a trusted partner for companies, start-ups and universities we bring together close to 3,500 brilliant minds from over 75 nationalities. Imec is headquartered in Leuven, Belgium and also has distributed R&D groups at a number of Flemish universities, in the Netherlands, Taiwan, USA, China, and offices in India and Japan. In 2016, imec’s revenue (P&L) totaled 496 million euro. Further information on imec can be found at www.imec.be.

Imec is a registered trademark for the activities of IMEC International (a legal entity set up under Belgian law as a “stichting van openbaar nut”), imec Belgium (IMEC vzw supported by the Flemish Government), imec the Netherlands (Stichting IMEC Nederland, part of Holst Centre which is supported by the Dutch Government), imec Taiwan (IMEC Taiwan Co.) and imec China (IMEC Microelectronics (Shanghai) Co. Ltd.) and imec India (Imec India Private Limited), imec Florida (IMEC USA nanoelectronics design center).

I don’t usually include the ‘abouts’ but I was quite intrigued by imec. For anyone curious about the ITF (imec Forums), here’s a website listing all of the previously held and upcoming 2017 forums.

Predicting how a memristor functions

An April 3, 2017 news item on Nanowerk announces a new memristor development (Note: A link has been removed),

Researchers from the CNRS [Centre national de la recherche scientifique; France], Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. The research was published in Nature Communications (“Learning through ferroelectric domain dynamics in solid-state synapses”).

An April 3, 2017 CNRS press release, which originated the news item, provides a nice introduction to the memristor concept before providing a few more details about this latest work (Note: A link has been removed),

One of the goals of biomimetics is to take inspiration from the functioning of the brain [also known as neuromorphic engineering or neuromorphic computing] in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos. However, the procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating directly on a chip an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses and hence intelligent systems requiring less time and energy.

Our brain’s learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor. This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.

Although research focusing on these artificial synapses is central to the concerns of many laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.

As part of the ULPEC H2020 European project, this discovery will be used for real-time shape recognition using an innovative camera: the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects. The research involved teams from the CNRS/Thales physics joint research unit, the Laboratoire de l’intégration du matériau au système (CNRS/Université de Bordeaux/Bordeaux INP), the University of Arkansas (US), the Centre de nanosciences et nanotechnologies (CNRS/Université Paris-Sud), the Université d’Evry, and Thales.
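The press release’s picture of a memristive synapse (resistance tuned by voltage pulses, with low resistance meaning a strong connection) maps onto a very simple model. Here is a sketch in Python; the device parameters are invented for illustration, and real ferroelectric memristors have much richer dynamics than this:

```python
# Toy memristive synapse: the synaptic weight is the device's conductance
# (1/resistance). Voltage pulses of one polarity lower the resistance
# (potentiation, a stronger synapse); pulses of the opposite polarity
# raise it (depression). Parameter values are invented for illustration.

class MemristorSynapse:
    def __init__(self, resistance=1000.0, r_min=100.0, r_max=10000.0):
        self.resistance = resistance
        self.r_min, self.r_max = r_min, r_max

    def pulse(self, voltage, step=50.0):
        """Apply one voltage pulse; the polarity decides the direction."""
        if voltage > 0:
            self.resistance = max(self.r_min, self.resistance - step)
        else:
            self.resistance = min(self.r_max, self.resistance + step)

    @property
    def weight(self):
        return 1.0 / self.resistance  # low resistance -> strong connection

syn = MemristorSynapse()
before = syn.weight
for _ in range(5):            # repeated stimulation...
    syn.pulse(+1.0)
assert syn.weight > before    # ...reinforces the connection
```

The point of the CNRS/Thales physical model is precisely to replace the ad hoc `step` above with a prediction grounded in the ferroelectric domain dynamics.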


Image synapse

© Sören Boyn / CNRS/Thales physics joint research unit.

Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses.

Here’s a link to and a citation for the paper,

Learning through ferroelectric domain dynamics in solid-state synapses by Sören Boyn, Julie Grollier, Gwendal Lecerf, Bin Xu, Nicolas Locatelli, Stéphane Fusil, Stéphanie Girod, Cécile Carrétéro, Karin Garcia, Stéphane Xavier, Jean Tomas, Laurent Bellaiche, Manuel Bibes, Agnès Barthélémy, Sylvain Saïghi, & Vincent Garcia. Nature Communications 8, Article number: 14736 (2017) doi:10.1038/ncomms14736 Published online: 03 April 2017

This paper is open access.

Thales or Thales Group is a French company, from its Wikipedia entry (Note: Links have been removed),

Thales Group (French: [talɛs]) is a French multinational company that designs and builds electrical systems and provides services for the aerospace, defence, transportation and security markets. Its headquarters are in La Défense[2] (the business district of Paris), and its stock is listed on the Euronext Paris.

The company changed its name to Thales (from the Greek philosopher Thales,[3] pronounced [talɛs], reflecting its pronunciation in French) from Thomson-CSF in December 2000, shortly after the £1.3 billion acquisition of Racal Electronics plc, a UK defence electronics group. It is partially state-owned by the French government,[4] and has operations in more than 56 countries. It has 64,000 employees and generated €14.9 billion in revenues in 2016. The Group is ranked as the 475th largest company in the world by the Fortune Global 500.[5] It is also the 10th largest defence contractor in the world[6] and 55% of its total sales are military sales.[4]

The ULPEC (Ultra-Low Power Event-Based Camera) H2020 [Horizon 2020-funded] European project can be found here,

The long-term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high speed ultra-low power consumption asynchronous visual data processing system (Spiking Neural Network with memristive synapses). Although the ULPEC device aims to reach TRL 4, it is a highly application-oriented project: prospective use cases will b…

Finally, for anyone curious about Thales, the philosopher (from his Wikipedia entry), Note: Links have been removed,

Thales of Miletus (/ˈθeɪliːz/; Greek: Θαλῆς (ὁ Μῑλήσιος), Thalēs; c. 624 – c. 546 BC) was a pre-Socratic Greek/Phoenician philosopher, mathematician and astronomer from Miletus in Asia Minor (present-day Milet in Turkey). He was one of the Seven Sages of Greece. Many, most notably Aristotle, regard him as the first philosopher in the Greek tradition,[1][2] and he is otherwise historically recognized as the first individual in Western civilization known to have entertained and engaged in scientific philosophy.[3][4]

Does understanding your pet mean understanding artificial intelligence better?

Heather Roff’s take on artificial intelligence features an approach I haven’t seen before. From her March 30, 2017 essay for The Conversation (h/t March 31, 2017 news item on phys.org),

It turns out, though, that we already have a concept we can use when we think about AI: It’s how we think about animals. As a former animal trainer (albeit briefly) who now studies how people use AI, I know that animals and animal training can teach us quite a lot about how we ought to think about, approach and interact with artificial intelligence, both now and in the future.

Using animal analogies can help regular people understand many of the complex aspects of artificial intelligence. It can also help us think about how best to teach these systems new skills and, perhaps most importantly, how we can properly conceive of their limitations, even as we celebrate AI’s new possibilities.
Looking at constraints

As AI expert Maggie Boden explains, “Artificial intelligence seeks to make computers do the sorts of things that minds can do.” AI researchers are working on teaching computers to reason, perceive, plan, move and make associations. AI can see patterns in large data sets, predict the likelihood of an event occurring, plan a route, manage a person’s meeting schedule and even play war-game scenarios.

Many of these capabilities are, in themselves, unsurprising: Of course a robot can roll around a space and not collide with anything. But somehow AI seems more magical when the computer starts to put these skills together to accomplish tasks.

Thinking of AI as a trainable animal isn’t just useful for explaining it to the general public. It is also helpful for the researchers and engineers building the technology. If an AI scholar is trying to teach a system a new skill, thinking of the process from the perspective of an animal trainer could help identify potential problems or complications.

For instance, if I try to train my dog to sit, and every time I say “sit” the buzzer to the oven goes off, then my dog will begin to associate sitting not only with my command, but also with the sound of the oven’s buzzer. In essence, the buzzer becomes another signal telling the dog to sit, which is called an “accidental reinforcement.” If we look for accidental reinforcements or signals in AI systems that are not working properly, then we’ll know better not only what’s going wrong, but also what specific retraining will be most effective.

This requires us to understand what messages we are giving during AI training, as well as what the AI might be observing in the surrounding environment. The oven buzzer is a simple example; in the real world it will be far more complicated.
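Roff’s “accidental reinforcement” can be made concrete with a toy associative learner: if an unintended cue always co-occurs with the intended command, the learner has no way to tell them apart. This minimal sketch is my own illustration, not something from the essay:

```python
# Toy illustration of "accidental reinforcement": a naive associative
# learner credits every signal present when a behaviour is rewarded.
# If the oven buzzer always sounds alongside the "sit" command, the
# learner cannot distinguish the intended command from the stray cue.

from collections import Counter

def train(episodes):
    """Count how often each signal co-occurred with the rewarded behaviour."""
    credit = Counter()
    for signals, rewarded in episodes:
        if rewarded:
            credit.update(signals)
    return credit

# Every "sit" command happened to coincide with the buzzer.
episodes = [({"sit", "buzzer"}, True)] * 10
credit = train(episodes)

# Equal credit: the spurious cue is indistinguishable from the command.
assert credit["sit"] == credit["buzzer"] == 10
```

The retraining Roff describes amounts to collecting episodes where the two signals are decorrelated, so their credit counts finally diverge.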

Before we welcome our AI overlords and hand over our lives and jobs to robots, we ought to pause and think about the kind of intelligences we are creating. …

Source: pixabay.com

It was just last year (2016) that an AI system beat a human Go master. Here’s how a March 17, 2016 article by John Russell for TechCrunch described the feat (Note: Links have been removed),

Much was written of an historic moment for artificial intelligence last week when a Google-developed AI beat one of the planet’s most sophisticated players of Go, an East Asia strategy game renowned for its deep thinking and strategy.

Go is viewed as one of the ultimate tests for an AI given the sheer possibilities on hand. “There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions [in the game] — that’s more than the number of atoms in the universe, and more than a googol times larger than chess,” Google said earlier this year.
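For scale, the figure quoted above is on the order of 10^170. A quick sanity check of Google’s comparisons, using commonly cited order-of-magnitude estimates (roughly 10^80 atoms in the observable universe, a googol = 10^100, and a chess state-space complexity near 10^47):

```python
# Order-of-magnitude check of the Go comparison. Legal 19x19 Go
# positions number about 2.08e170; the reference figures below are
# the usual rough estimates, not exact values.

go_positions = 10**170        # order of magnitude of legal Go positions
atoms_in_universe = 10**80    # commonly cited estimate
googol = 10**100
chess_positions = 10**47      # Shannon-style state-space estimate

assert go_positions > atoms_in_universe
# "more than a googol times larger than chess": 1e170 > 1e100 * 1e47
assert go_positions > googol * chess_positions
```

Python’s arbitrary-precision integers make this comparison exact rather than a floating-point approximation, which is handy at these magnitudes.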

If you missed the series — which AlphaGo, the AI, won 4-1 — or were unsure of exactly why it was so significant, Google summed the general importance up in a post this week.

Far from just being a game, Demis Hassabis, CEO and Co-Founder of DeepMind — the Google-owned company behind AlphaGo — said the AI’s development is proof that it can be used to solve problems in ways that humans may not be accustomed or able to do:

We’ve learned two important things from this experience. First, this test bodes well for AI’s potential in solving other problems. AlphaGo has the ability to look “globally” across a board—and find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas.

I find Roff’s thesis intriguing, and it is likely applicable in the short term. In the longer term, however, given the attempts to create devices that mimic neural plasticity and neuromorphic engineering more generally, I don’t find it convincing.

Ferroelectric roadmap to neuromorphic computing

Having written about memristors and neuromorphic engineering a number of times here, I’m quite intrigued to see research into another nanoscale device for mimicking the functions of a human brain.

The announcement about the latest research from the team at the US Department of Energy’s Argonne National Laboratory is in a Feb. 14, 2017 news item on Nanowerk (Note: A link has been removed),

Research published in Scientific Reports (“Ferroelectric symmetry-protected multibit memory cell”) lays out a theoretical map for using ferroelectric material to process information with multivalued logic – a leap beyond the simple ones and zeroes that make up our current computing systems, which could let us process information much more efficiently.

A Feb. 10, 2017 Argonne National Laboratory news release by Louise Lerner, which originated the news item, expands on the theme,

The language of computers is written in just two symbols – ones and zeroes, meaning yes or no. But a world of richer possibilities awaits us if we could expand to three or more values, so that the same physical switch could encode much more information.

“Most importantly, this novel logic unit will enable information processing using not only ‘yes’ and ‘no’, but also ‘either yes or no’ or ‘maybe’ operations,” said Valerii Vinokur, a materials scientist and Distinguished Fellow at the U.S. Department of Energy’s Argonne National Laboratory and the corresponding author on the paper, along with Laurent Baudry with the Lille University of Science and Technology and Igor Lukyanchuk with the University of Picardie Jules Verne.

This is the way our brains operate, and they’re something on the order of a million times more efficient than the best computers we’ve ever managed to build – while consuming orders of magnitude less energy.

“Our brains process so much more information, but if our synapses were built like our current computers are, the brain would not just boil but evaporate from the energy they use,” Vinokur said.

While the advantages of this type of computing, called multivalued logic, have long been known, the problem is that we haven’t discovered a material system that could implement it. Right now, transistors can only operate as “on” or “off,” so this new system would have to find a new way to consistently maintain more states – as well as be easy to read and write and, ideally, to work at room temperature.

Hence Vinokur and the team’s interest in ferroelectrics, a class of materials whose polarization can be controlled with electric fields. As ferroelectrics physically change shape when the polarization changes, they’re very useful in sensors and other devices, such as medical ultrasound machines. Scientists are very interested in tapping these properties for computer memory and other applications; but the theory behind their behavior is very much still emerging.

The new paper lays out a recipe by which we could tap the properties of very thin films of a particular class of ferroelectric material called perovskites.

According to the calculations, perovskite films could hold two, three, or even four polarization positions that are energetically stable – “so they could ‘click’ into place, and thus provide a stable platform for encoding information,” Vinokur said.

The team calculated these stable configurations and how to manipulate the polarization to move it between stable positions using electric fields, Vinokur said.

“When we realize this in a device, it will enormously increase the efficiency of memory units and processors,” Vinokur said. “This offers a significant step towards realization of so-called neuromorphic computing, which strives to model the human brain.”

Vinokur said the team is working with experimentalists to apply the principles to create a working system.
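The payoff of multivalued logic is easy to quantify: a cell with n stable states stores log2(n) bits, so the three- and four-state polarization cells described above carry roughly 1.6 and 2 bits each, compared with 1 bit for a binary cell. A quick illustration of the arithmetic:

```python
# Information capacity of a multi-state memory cell: a cell with n
# stable, distinguishable states stores log2(n) bits. Illustrative
# arithmetic only; it says nothing about read/write reliability.

import math

def bits_per_cell(states):
    return math.log2(states)

assert bits_per_cell(2) == 1.0            # ordinary binary cell
assert bits_per_cell(4) == 2.0            # four stable positions = 2 bits
assert 1.5 < bits_per_cell(3) < 1.6       # ternary cell: ~1.585 bits

# Cells needed to hold 64 bits: a 4-state cell halves the count.
assert math.ceil(64 / bits_per_cell(4)) == 32
```

This is why “energetically stable” matters in the press release: the extra states only pay off if each one clicks reliably into place when written and read.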

Here’s a link to and a citation for the paper,

Ferroelectric symmetry-protected multibit memory cell by Laurent Baudry, Igor Lukyanchuk, & Valerii M. Vinokur. Scientific Reports 7, Article number: 42196 (2017) doi:10.1038/srep42196 Published online: 08 February 2017

This paper is open access.

Changing synaptic connectivity with a memristor

The French have announced some research into memristive devices that mimic both short-term and long-term neural plasticity according to a Dec. 6, 2016 news item on Nanowerk,

Leti researchers have demonstrated that memristive devices are excellent candidates to emulate synaptic plasticity, the capability of synapses to enhance or diminish their connectivity between neurons, which is widely believed to be the cellular basis for learning and memory.

The breakthrough was presented today [Dec. 6, 2016] at IEDM [International Electron Devices Meeting] 2016 in San Francisco in the paper, “Experimental Demonstration of Short and Long Term Synaptic Plasticity Using OxRAM Multi k-bit Arrays for Reliable Detection in Highly Noisy Input Data”.

Neural systems such as the human brain exhibit various types and time periods of plasticity, e.g. synaptic modifications can last anywhere from seconds to days or months. However, prior research in utilizing synaptic plasticity using memristive devices relied primarily on simplified rules for plasticity and learning.

The project team, which includes researchers from Leti’s sister institute at CEA Tech, List, along with INSERM and Clinatec, proposed an architecture that implements both short- and long-term plasticity (STP and LTP) using RRAM devices.

A Dec. 6, 2016 Laboratoire d’électronique des technologies de l’information (LETI) press release, which originated the news item, elaborates,

“While implementing a learning rule for permanent modifications – LTP, based on spike-timing-dependent plasticity – we also incorporated the possibility of short-term modifications with STP, based on the Tsodyks/Markram model,” said Elisa Vianello, Leti non-volatile memories and cognitive computing specialist/research engineer. “We showed the benefits of utilizing both kinds of plasticity with visual pattern extraction and decoding of neural signals. LTP allows our artificial neural networks to learn patterns, and STP makes the learning process very robust against environmental noise.”

Resistive random-access memory (RRAM) devices coupled with a spike-coding scheme are key to implementing unsupervised learning with minimal hardware footprint and low power consumption. Embedding neuromorphic learning into low-power devices could enable the design of autonomous systems, such as a brain-machine interface that makes decisions based on real-time, on-line processing of in-vivo recorded biological signals. Biological data are intrinsically highly noisy, and the proposed combined LTP and STP learning rule is a powerful technique for improving the detection/recognition rate. This approach may enable the design of autonomous implantable devices for rehabilitation purposes.
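To make the STP/LTP distinction concrete, here is a minimal Python sketch combining a Tsodyks/Markram-style short-term resource variable (which depletes with use and recovers between spikes) with a persistent long-term weight. The parameters are illustrative inventions, not Leti’s actual OxRAM values:

```python
import math

# Short-term plasticity: a resource variable x depletes on each spike
# and recovers exponentially between spikes (Tsodyks/Markram style).
# Long-term plasticity: a persistent weight w nudged by correlated
# activity. All parameter values are invented for illustration.

class Synapse:
    def __init__(self, w=1.0, tau_rec=100.0, use=0.5):
        self.w = w              # long-term weight (modified by LTP)
        self.x = 1.0            # short-term resources, in [0, 1]
        self.tau_rec = tau_rec  # recovery time constant (ms)
        self.use = use          # fraction of resources spent per spike

    def spike(self, dt_since_last, ltp_delta=0.0):
        # Resources recover exponentially over the interval (STP)...
        self.x = 1.0 - (1.0 - self.x) * math.exp(-dt_since_last / self.tau_rec)
        efficacy = self.w * self.use * self.x
        self.x -= self.use * self.x   # ...and deplete on the spike itself
        self.w += ltp_delta           # persistent change (LTP)
        return efficacy

syn = Synapse()
first = syn.spike(1000.0)    # a well-rested synapse transmits strongly
second = syn.spike(1.0)      # a rapid follow-up spike is depressed (STP)
assert second < first

for _ in range(20):          # repeated correlated activity potentiates...
    syn.spike(1000.0, ltp_delta=0.05)
assert syn.w > 1.0           # ...and the change persists (LTP)
```

The noise robustness Leti reports falls out of this structure: short-term depression damps responses to rapid, uncorrelated input bursts, while only sustained correlated activity accumulates into the long-term weight.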

Leti, which has worked on RRAM to develop hardware neuromorphic architectures since 2010, is the coordinator of the H2020 [Horizon 2020] European project NeuRAM3. That project is working on fabricating a chip with architecture that supports state-of-the-art machine-learning algorithms and spike-based learning mechanisms.

That’s it, folks.