Tag Archives: memristors

Memristor-based electronic synapses for neural networks

Caption: Neuron connections in biological neural networks. Credit: MIPT press office

Russian scientists have recently published a paper about neural networks and electronic synapses based on ‘thin film’ memristors, according to an April 19, 2016 news item on Nanowerk,

A team of scientists from the Moscow Institute of Physics and Technology (MIPT) have created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.

An April 20, 2016 MIPT press release (also on EurekAlert), which originated the news item (the date inconsistency likely due to timezone differences) explains the connection between thin films and memristors,

The group of researchers from MIPT have made HfO2-based memristors measuring just 40×40 nm². The nanostructures they built exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similar to biological neural networks.

Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.

“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similar to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.
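
Before getting to the biology, it may help to see the basic idea in code. Here’s a minimal sketch, entirely my own and not a model of the MIPT devices, of a ‘resistor with memory’: the conductance drifts with the charge that has passed through, can settle at any intermediate value (the analog behaviour Matveyev is referring to), and can be sensed with a small read voltage. All the numbers are illustrative assumptions.

```python
# Toy memristor: conductance depends on the net charge that has flowed
# through the device. Every value here is an illustrative assumption;
# this is not a model of the MIPT hafnium oxide devices.

class ToyMemristor:
    def __init__(self, g_min=1e-6, g_max=1e-4, k=5e2):
        self.g_min, self.g_max, self.k = g_min, g_max, k
        self.g = g_min  # start in the low-conductance ("off") state

    def apply_pulse(self, voltage, duration):
        """Pass a voltage pulse; the charge moved shifts the conductance."""
        charge = self.g * voltage * duration               # q = g * v * t
        self.g += self.k * charge                          # state follows charge
        self.g = min(max(self.g, self.g_min), self.g_max)  # physical limits

    def read(self, v_read=0.1):
        """A small read voltage senses the state without disturbing it much."""
        return self.g * v_read  # current through the device, i = g * v

m = ToyMemristor()
for _ in range(5):                                  # positive pulses "write"
    m.apply_pulse(voltage=1.0, duration=1e-3)
    print(f"after +pulse, read current: {m.read():.2e} A")
for _ in range(5):                                  # negative pulses "erase"
    m.apply_pulse(voltage=-1.0, duration=1e-3)
    print(f"after -pulse, read current: {m.read():.2e} A")
```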

The press release offers a description of biological synapses and their relationship to learning and memory,

A synapse is a point of connection between neurons, the main function of which is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective both in terms of speed and energy consumption in solving a large range of tasks, such as image / voice recognition, etc.

Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.

From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.

The researchers have provided an illustration of a biological synapse,

Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office

Now, the press release ties the memristor information together with the biological synapse information to describe the new work at the MIPT,

As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.

There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.

“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.

The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.

The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the value of the connection between neurons on the relative time taken for them to be “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.

To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).

Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office

These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.

“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.
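
For the technically inclined, the spike-timing-dependent plasticity rule the devices mimic has a simple textbook form: the weight change decays exponentially with the gap between the pre- and postsynaptic spikes and flips sign with their order. A quick sketch of that generic rule follows; the amplitudes and time constants are values I’ve picked for illustration, not the ones measured on the memristors.

```python
import math

# Textbook STDP window: weight change as a function of the spike-timing
# difference dt = t_post - t_pre. Amplitudes and time constants are
# generic illustrative values, not fitted to the MIPT measurements.
A_PLUS, A_MINUS = 0.05, 0.055      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay constants in milliseconds

def stdp_dw(dt_ms):
    if dt_ms > 0:    # pre fires before post: long-term potentiation (LTP)
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    if dt_ms < 0:    # pre fires after post: long-term depression (LTD)
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

for dt in (-40, -20, -5, 5, 20, 40):
    print(f"t_post - t_pre = {dt:+4d} ms  ->  dw = {stdp_dw(dt):+.4f}")
```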

Here’s a link to and a citation for the paper,

Crossbar Nanoscale HfO2-Based Electronic Synapses by Yury Matveyev, Roman Kirtaev, Alena Fetisova, Sergey Zakharchenko, Dmitry Negrov and Andrey Zenkevich. Nanoscale Research Letters 2016 11:147 DOI: 10.1186/s11671-016-1360-6

Published: 15 March 2016

This is an open access paper.

Indian researchers establish a multiplex number to identify efficiency of multilevel resistive switching devices

There’s a Feb. 1, 2016 Nanowerk Spotlight article by Dr. Abhay Sagade of Cambridge University (UK) about defining efficiency in memristive devices,

In a recent study, researchers at the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bangalore, India, have defined a new figure-of-merit to identify the efficiency of resistive switching devices with multiple memory states. The research was carried out in collaboration with the Indian Institute of Technology Madras (IITM), Chennai, and financially supported by the Department of Science and Technology, New Delhi.

The scientists identified the versatility of palladium oxide (PdO) as a novel resistive switching material for use in resistive memory devices. Because the PdO system can be switched between multiple redox states, the researchers were able to control it by applying voltage pulses of different amplitudes.

To date, many materials have shown multiple memory states but there have been no efforts to define the ability of the fabricated device to switch between all possible memory states.

In the present report, the authors have defined this efficiency through a newly coined term, the “multiplex number (M)”, to quantify the performance of a multiple memory switching device:

For the PdO multilevel resistive switching (MRS) device with five memory states, the multiplex number is found to be 5.7, which translates to 70% efficiency in switching. This is the highest value of M observed in any multiple memory device.

As multilevel resistive switching devices are expected to have great significance in futuristic brain-like memory devices [neuromorphic engineering products], the definition of their efficiency will provide a boost to the field. The number M will assist researchers as well as technologists in classifying and deciding the true merit of their memory devices.

Here’s a link to and a citation for the paper Sagade is discussing,

Defining Switching Efficiency of Multilevel Resistive Memory with PdO as an Example by K. D. M. Rao, Abhay A. Sagade, Robin John, T. Pradeep and G. U. Kulkarni. Advanced Electronic Materials Volume 2, Issue 2, February 2016 DOI: 10.1002/aelm.201500286

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

Memristor shakeup

New discoveries suggest that memristors do not function as was previously theorized. (For anyone who wants a memristor description, there’s this Wikipedia entry.) From an Oct. 13, 2015 posting by Alexander Hellemans for the Nanoclast blog (on the IEEE [Institute for Electrical and Electronics Engineers]), Note: Links have been removed,

What’s going to replace flash? The R&D arms of several companies including Hewlett Packard, Intel, and Samsung think the answer might be memristors (also called resistive RAM, ReRAM, or RRAM). These devices have a chance at unseating the non-volatile memory champion because they use little energy, are very fast, and retain data without requiring power. However, new research indicates that they don’t work in quite the way we thought they do.

The fundamental mechanism at the heart of how a memristor works is something called an “imperfect point contact,” which was predicted in 1971, long before anybody had built working devices. When voltage is applied to a memristor cell, it reduces the resistance across the device. This change in resistance can be read out by applying another, smaller voltage. By inverting the voltage, the resistance of the device is returned to its initial value, that is, the stored information is erased.

Over the last decade researchers have produced two commercially promising types of memristors: electrochemical metallization memory (ECM) cells, and valence change mechanism memory (VCM) cells.

Now international research teams led by Ilia Valov at the Peter Grünberg Institute in Jülich, Germany, report in Nature Nanotechnology and Advanced Materials that they have identified new processes that erase many of the differences between ECM and VCM cells.

Valov and coworkers in Germany, Japan, Korea, Greece, and the United States started investigating memristors that had a tantalum oxide electrolyte and an active tantalum electrode. “Our studies show that these two types of switching mechanisms in fact can be bridged, and we don’t have a purely oxygen type of switching as was believed, but that also positive [metal] ions, originating from the active electrode, are mobile,” explains Valov.

Here are links to and citations for both papers,

Graphene-Modified Interface Controls Transition from VCM to ECM Switching Modes in Ta/TaOx Based Memristive Devices by Michael Lübben, Panagiotis Karakolis, Vassilios Ioannou-Sougleridis, Pascal Normand, Panagiotis Dimitrakis, & Ilia Valov. Advanced Materials DOI: 10.1002/adma.201502574 First published: 10 September 2015

Nanoscale cation motion in TaOx, HfOx and TiOx memristive systems by Anja Wedig, Michael Luebben, Deok-Yong Cho, Marco Moors, Katharina Skaja, Vikas Rana, Tsuyoshi Hasegawa, Kiran K. Adepalli, Bilge Yildiz, Rainer Waser, & Ilia Valov. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.221 Published online 28 September 2015

Both papers are behind paywalls.

Computer chips derived in a Darwinian environment

Courtesy: University of Twente

If that ‘computer chip’ looks like a brain to you, good, since that’s what the image is intended to illustrate, assuming I’ve correctly understood the Sept. 21, 2015 news item on Nanowerk (Note: A link has been removed),

Researchers of the MESA+ Institute for Nanotechnology and the CTIT Institute for ICT Research at the University of Twente in The Netherlands have demonstrated working electronic circuits that have been produced in a radically new way, using methods that resemble Darwinian evolution. The size of these circuits is comparable to the size of their conventional counterparts, but they are much closer to natural networks like the human brain. The findings promise a new generation of powerful, energy-efficient electronics, and have been published in the leading British journal Nature Nanotechnology (“Evolution of a Designless Nanoparticle Network into Reconfigurable Boolean Logic”).

A Sept. 21, 2015 University of Twente press release, which originated the news item, explains why and how they have decided to mimic nature to produce computer chips,

One of the greatest successes of the 20th century has been the development of digital computers. During the last decades these computers have become more and more powerful by integrating ever smaller components on silicon chips. However, it is becoming increasingly hard and extremely expensive to continue this miniaturisation. Current transistors consist of only a handful of atoms. It is a major challenge to produce chips in which the millions of transistors have the same characteristics, and thus to make the chips operate properly. Another drawback is that their energy consumption is reaching unacceptable levels. It is obvious that one has to look for alternative directions, and it is interesting to see what we can learn from nature. Natural evolution has led to powerful ‘computers’ like the human brain, which can solve complex problems in an energy-efficient way. Nature exploits complex networks that can execute many tasks in parallel.

Moving away from designed circuits

The approach of the researchers at the University of Twente is based on methods that resemble those found in Nature. They have used networks of gold nanoparticles for the execution of essential computational tasks. Contrary to conventional electronics, they have moved away from designed circuits. By using ‘designless’ systems, costly design mistakes are avoided. The computational power of their networks is enabled by applying artificial evolution. This evolution takes less than an hour, rather than millions of years. By applying electrical signals, one and the same network can be configured into 16 different logical gates. The evolutionary approach works around – or can even take advantage of – possible material defects that can be fatal in conventional electronics.

Powerful and energy-efficient

It is the first time that scientists have succeeded in this way in realizing robust electronics with dimensions that can compete with commercial technology. According to prof. Wilfred van der Wiel, the realized circuits currently still have limited computing power. “But with this research we have delivered proof of principle: demonstrated that our approach works in practice. By scaling up the system, real added value will be produced in the future. Take for example the efforts to recognize patterns, such as with face recognition. This is very difficult for a regular computer, while humans and possibly also our circuits can do this much better.”  Another important advantage may be that this type of circuitry uses much less energy, both in the production, and during use. The researchers anticipate a wide range of applications, for example in portable electronics and in the medical world.
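
In software terms, the ‘artificial evolution’ used at Twente is a genetic algorithm: candidate sets of control voltages are scored on how closely the network’s output matches a target logic gate, and the fittest candidates are mutated to produce the next generation. Here’s a toy version of that loop. The network() function is an arbitrary mathematical stand-in I’ve written for illustration; the real network is a physical cloud of gold nanoparticles with no closed-form response.

```python
import math
import random

# Toy evolutionary search: evolve nine "control voltages" so that a fixed
# nonlinear network computes XOR. The network() function is an arbitrary
# stand-in, not the response of the Twente nanoparticle device.

def network(a, b, c):
    # two internal "nodes" mix the inputs under the controls; thresholded output
    h1 = math.tanh(c[0] * a + c[1] * b + c[2])
    h2 = math.tanh(c[3] * a + c[4] * b + c[5])
    return 1 if (c[6] * h1 + c[7] * h2 + c[8]) > 0 else 0

TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # XOR truth table

def fitness(c):
    return sum(network(a, b, c) == out for (a, b), out in TARGET.items())

def mutate(c, step=0.3):
    return [x + random.gauss(0, step) for x in c]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 4:               # all four rows correct
        print(f"XOR evolved in generation {generation}")
        break
    parents = population[:10]                     # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]
else:
    print(f"best fitness reached: {fitness(population[0])}/4")
```

Swap in a different truth table and the same loop searches for a different gate, which is the sense in which one and the same network can be reconfigured into all 16 two-input logic functions.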

Here’s a link to and a citation for the paper,

Evolution of a designless nanoparticle network into reconfigurable Boolean logic by S. K. Bose, C. P. Lawrence, Z. Liu, K. S. Makarenko, R. M. J. van Damme, H. J. Broersma, & W. G. van der Wiel. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.207 Published online 21 September 2015

This paper is behind a paywall.

Final comment: this research, especially with the reference to facial recognition, reminds me of memristors and neuromorphic engineering. I have written many times on this topic and you should be able to find most of the material by using ‘memristor’ as your search term in the blog search engine. For the mildly curious, here are links to two recent memristor articles, Knowm (sounds like gnome?) A memristor company with a commercially available product in a Sept. 10, 2015 posting and Memristor, memristor, you are popular in a May 15, 2015 posting.

Knowm (sounds like gnome?) A memristor company with a commercially available product

German garden gnome. Date: 11 August 2006 (original upload date). Source: Transferred from en.wikipedia to Commons. Author: Colibri1968 at English Wikipedia

I thought the ‘gnome’/‘knowm’ homonym or, more precisely, homophone, might be an amusing way to lead into yet another memristor story on this blog. A Sept. 3, 2015 news item on Azonano features a ‘memristor-based’ company/organization, Knowm,

Knowm Inc., a start-up pioneering next-generation advanced computing architectures and technology, today announced they are the first to develop and make commercially-available memristors with bi-directional incremental learning capability.

The device was developed through research from Boise State University’s [Idaho, US] Dr. Kris Campbell, and this new data unequivocally confirms Knowm’s memristors are capable of bi-directional incremental learning. This has been previously deemed impossible in filamentary devices by Knowm’s competitors, including IBM [emphasis mine], despite significant investment in materials, research and development. With this advancement, Knowm delivers the first commercial memristors that can adjust resistance in incremental steps in both directions rather than only one direction with an all-or-nothing ‘erase’. This advancement opens the gateway to extremely efficient and powerful machine learning and artificial intelligence applications.

A Sept. 2, 2015 Knowm news release (also on MarketWatch), which originated the news item, provides more details,

“Having commercially-available memristors with bi-directional voltage-dependent incremental capability is a huge step forward for the field of machine learning and, particularly, AHaH Computing,” said Alex Nugent, CEO and co-founder of Knowm. “We have been dreaming about this device and developing the theory for how to apply them to best maximize their potential for more than a decade, but the lack of capability confirmation had been holding us back. This data is truly a monumental technical milestone and it will serve as a springboard to catapult Knowm and AHaH Computing forward.”

Memristors with the bi-directional incremental resistance change property are the foundation for developing learning hardware such as Knowm Inc.’s recently announced Thermodynamic RAM (kT-RAM) and help realize the full potential of AHaH Computing. The availability of kT-RAM will have the largest impact in fields that require higher computational power for machine learning tasks like autonomous robotics, big-data analysis and intelligent Internet assistants. kT-RAM radically increases the efficiency of synaptic integration and adaptation operations by reducing them to physically adaptive ‘analog’ memristor-based circuits. Synaptic integration and adaptation are the core operations behind tasks such as pattern recognition and inference. Knowm Inc. is the first company in the world to bring this technology to market.

Knowm is ushering in the next phase of computing with the first general-purpose neuromemristive processor specification. Earlier this year the company announced the commercial availability of the first products in support of the kT-RAM technology stack. These include the sale of discrete memristor chips, a Back End of Line (BEOL) CMOS+memristor service, the SENSE and Application Servers and their first application named “Knowm Anomaly”, the first application built based on the theory of AHaH Computing and kT-RAM architecture. Knowm also simultaneously announced the company’s visionary developer program for organizations and individual developers. This includes the Knowm API, which serves as development hardware and training resources for co-developing the Knowm technology stack.

Knowm certainly has big ambitions. I’m a little surprised they mentioned IBM rather than HP Labs, which is where researchers first claimed to find evidence of the existence of memristors in 2008 (the story is noted in my Nanotech Mysteries wiki here). As I understand it, HP Labs is working busily (having missed a few deadlines) on developing a commercial product using memristors.

For the curious, my latest informal roundup of memristor stories is in a May 15, 2015 posting.

Getting back to Knowm and big ambitions, here’s Alex Nugent, Knowm CEO (Chief Executive Officer) and co-founder talking about the company and its technology,

Memristor, memristor, you are popular

Regular readers know I have a long-standing interest in memristors and artificial brains. I have three memristor-related pieces of research, published in the last month or so, for this post.

First, there’s some research into nano memory at RMIT University, Australia, and the University of California at Santa Barbara (UC Santa Barbara). From a May 12, 2015 news item on ScienceDaily,

RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell.

Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information.

The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain — which could help unlock successful treatments for common neurological conditions such as Alzheimer’s and Parkinson’s diseases.

A May 11, 2015 RMIT University news release, which originated the news item, reveals more about the researchers’ excitement and about the research,

“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Dr Sharath said.

“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences, and up until now this functionality has not been able to be adequately reproduced with digital technology.”

The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.

The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film – 10,000 times thinner than a human hair.

Dr Hussein Nili, lead author of the study, said: “This new discovery is significant as it allows the multi-state cell to store and process information in the very same way that the brain does.

“Think of an old camera which could only take pictures in black and white. The same analogy applies here, rather than just black and white memories we now have memories in full color with shade, light and texture, it is a major step.”

While these new devices are able to store much more information than conventional digital memories (which store just 0s and 1s), it is their brain-like ability to remember and retain previous information that is exciting.

“We have now introduced controlled faults or defects in the oxide material along with the addition of metallic atoms, which unleashes the full potential of the ‘memristive’ effect – where the memory element’s behaviour is dependent on its past experiences,” Dr Nili said.

Nano-scale memories are precursors to the storage components of the complex artificial intelligence network needed to develop a bionic brain.

Dr Nili said the research had myriad practical applications including the potential for scientists to replicate the human brain outside of the body.

“If you could replicate a brain outside the body, it would minimise ethical issues involved in treating and experimenting on the brain which can lead to better understanding of neurological conditions,” Dr Nili said.

The research, supported by the Australian Research Council, was conducted in collaboration with the University of California Santa Barbara.

Here’s a link to and a citation for this memristive nano device,

Donor-Induced Performance Tuning of Amorphous SrTiO3 Memristive Nanodevices: Multistate Resistive Switching and Mechanical Tunability by Hussein Nili, Sumeet Walia, Ahmad Esmaielzadeh Kandjani, Rajesh Ramanathan, Philipp Gutruf, Taimur Ahmed, Sivacarendran Balendhran, Vipul Bansal, Dmitri B. Strukov, Omid Kavehei, Madhu Bhaskaran, and Sharath Sriram. Advanced Functional Materials DOI: 10.1002/adfm.201501019 Article first published online: 14 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

The second published piece of memristor-related research comes from a UC Santa Barbara and Stony Brook University (New York state) team but is being publicized by UC Santa Barbara. From a May 11, 2015 news item on Nanowerk (Note: A link has been removed),

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit (Nature, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors”). For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

A May 11, 2015 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, situates this development within the ‘artificial brain’ effort while describing it in more detail (Note: A link has been removed),

“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10¹⁵ (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform.

… As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it’s likely you would still be able to read this and derive the same meaning.

In the researchers’ demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the simple images.

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

“And, as more solutions to the technological challenges are proposed the technology will be able to make it to the market sooner,” she said.

Key to this technology is the memristor (a combination of “memory” and “resistor”), an electronic component whose resistance changes depending on the direction of the flow of the electrical charge. Unlike conventional transistors, which rely on the drift and diffusion of electrons and their holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate neural electrical signals.

“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov. The ionic memory mechanism brings several advantages over purely electron-based memories, which makes it very attractive for artificial neural network implementation, he added.

“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”

This is where analog memory trumps digital memory: In order to create the same human brain-type functionality with conventional technology, the resulting device would have to be enormous — loaded with multitudes of transistors that would require far more energy.

“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation.”

To be able to approach functionality of the human brain, however, many more memristors would be required to build more complex neural networks to do the same kinds of things we can do with barely any effort and energy, such as identify different versions of the same thing or infer the presence or identity of an object not based on the object itself but on other things in a scene.

Potential applications already exist for this emerging technology, such as medical imaging, the improvement of navigation systems or even for searches based on images rather than on text. The energy-efficient compact circuitry the researchers are striving to create would also go a long way toward creating the kind of high-performance computers and memory storage devices users will continue to seek long after the proliferation of digital transistors predicted by Moore’s Law becomes too unwieldy for conventional electronics.
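
To make ‘synaptic weights stored as conductances’ concrete, here’s a purely software analogue of the classification demo: a single layer of weights, one per input pixel, trained with the classic perceptron rule to sort tiny noisy letter images. The 3×3 patterns and the training rule are my own simplification; the actual experiment used larger stylized images and adjusted the weights directly in the physical memristor crossbar.

```python
import random

# Software analogue of the UCSB demo: a single-layer network classifying
# three letter patterns despite noise. Patterns, sizes and training rule
# are simplifications; the experiment tuned physical memristor conductances.

PATTERNS = {                       # tiny 3x3 binary "images" (illustrative)
    "z": [1, 1, 1,  0, 1, 0,  1, 1, 1],
    "v": [1, 0, 1,  1, 0, 1,  0, 1, 0],
    "n": [1, 1, 0,  1, 0, 1,  0, 1, 1],
}
CLASSES = list(PATTERNS)
weights = {c: [0.0] * 9 for c in CLASSES}   # one "conductance" per pixel

def predict(img):
    scores = {c: sum(w * x for w, x in zip(weights[c], img)) for c in CLASSES}
    return max(scores, key=scores.get)

def noisy(img, flips=1):                    # flip a random pixel: "noise"
    img = img[:]
    for i in random.sample(range(len(img)), flips):
        img[i] = 1 - img[i]
    return img

for _ in range(500):                        # classic perceptron training
    label = random.choice(CLASSES)
    img = noisy(PATTERNS[label])
    guess = predict(img)
    if guess != label:                      # on a mistake, shift the weights
        for i, x in enumerate(img):
            weights[label][i] += x          # strengthen the right class
            weights[guess][i] -= x          # weaken the wrong one

trials = [(c, noisy(PATTERNS[c])) for c in CLASSES for _ in range(100)]
correct = sum(predict(img) == c for c, img in trials)
print(f"accuracy on noisy letters: {correct / len(trials):.0%}")
```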

Here’s a link to and a citation for the paper,

Training and operation of an integrated neuromorphic network based on metal-oxide memristors by M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, & D. B. Strukov. Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441

This paper is behind a paywall but a free preview is available through ReadCube Access.

The third and last piece of research, which is from Rice University, hasn’t received any publicity yet, unusual given Rice’s very active communications/media department. Here’s a link to and a citation for their memristor paper,

2D materials: Memristor goes two-dimensional by Jiangtan Yuan & Jun Lou. Nature Nanotechnology 10, 389–390 (2015) doi:10.1038/nnano.2015.94 Published online 07 May 2015

This paper is behind a paywall but a free preview is available through ReadCube Access.

Dexter Johnson has written up the RMIT research (his May 14, 2015 post on the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website). He linked it to research from Mark Hersam’s team at Northwestern University (my April 10, 2015 posting) on creating a three-terminal memristor enabling its use in complex electronics systems. Dexter strongly hints in his headline that these developments could lead to bionic brains.

For those who’d like more memristor information, this June 26, 2014 posting which brings together some developments at the University of Michigan and information about developments in the industrial sector is my suggestion for a starting point. Also, you may want to check out my material on HP Labs, especially prominent in the story due to the company’s 2008 ‘discovery’ of the memristor, described on a page in my Nanotech Mysteries wiki, and the controversy triggered by the company’s terminology (there’s more about the controversy in my April 7, 2010 interview with Forrest H Bennett III).

Memristors, memcapacitors, and meminductors for faster computers

While some call memristors a fourth fundamental component alongside resistors, capacitors, and inductors (as mentioned in my June 26, 2014 posting which featured an update of sorts on memristors [scroll down about 80% of the way]), others view memristors as members of an emerging periodic table of circuit elements (as per my April 7, 2010 posting).

It seems scientist Fabio Traversa and his colleagues fall into the ‘periodic table of circuit elements’ camp. From Traversa’s June 27, 2014 posting on nanotechweb.org,

Memristors, memcapacitors and meminductors may retain information even without a power source. Several applications of these devices have already been proposed, yet arguably one of the most appealing is ‘memcomputing’ – a brain-inspired computing paradigm utilizing the ability of emergent nanoscale devices to store and process information on the same physical platform.

A multidisciplinary team of researchers from the Autonomous University of Barcelona in Spain, the University of California San Diego and the University of South Carolina in the US, and the Polytechnic of Turin in Italy, suggest a realization of “memcomputing” based on nanoscale memcapacitors. They propose and analyse a major advancement in using memcapacitive systems (capacitors with memory), as central elements for Very Large Scale Integration (VLSI) circuits capable of storing and processing information on the same physical platform. They name this architecture Dynamic Computing Random Access Memory (DCRAM).

Using the standard configuration of a Dynamic Random Access Memory (DRAM) where the capacitors have been substituted with solid-state based memcapacitive systems, they show the possibility of performing WRITE, READ and polymorphic logic operations by only applying modulated voltage pulses to the memory cells. Being based on memcapacitors, the DCRAM expends very little energy per operation. It is a realistic memcomputing machine that overcomes the von Neumann bottleneck and clearly exhibits intrinsic parallelism and functional polymorphism.
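
The appeal of that architecture is easier to see in miniature. What follows is a deliberately cartoonish sketch, entirely my own and not the paper’s pulse scheme or memcapacitor physics, of memory cells that can be written, read, and pulsed together so that a logic result lands back in a cell, i.e., storage and processing on the same physical platform:

```python
# Cartoon of "memcomputing": memory cells that also perform logic when
# pulsed together. This illustrates only the in-memory-logic idea; it is
# not the DCRAM pulse scheme or the memcapacitor physics of the paper.

class MemCell:
    def __init__(self, state=0):
        self.state = state       # 0 or 1, e.g. low/high capacitance

    def write(self, bit):        # WRITE: a strong pulse sets the state
        self.state = bit

    def read(self):              # READ: a weak pulse senses the state
        return self.state        # without destroying it

def pulse_nand(a, b):
    """A 'logic' pulse across two cells; the result replaces cell b's state.
    Computing inside the memory is what sidesteps the von Neumann
    bottleneck of shuttling data to a separate processor."""
    b.write(0 if (a.read() and b.read()) else 1)

a, b = MemCell(), MemCell()
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a.write(x)
    b.write(y)
    pulse_nand(a, b)
    print(f"NAND({x},{y}) stored in cell b: {b.read()}")
```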

Here’s a link to and a citation for the paper,

Dynamic computing random access memory by F L Traversa, F Bonani, Y V Pershin, and M Di Ventra. Nanotechnology Volume 25 Number 28 doi:10.1088/0957-4484/25/28/285201 Published 27 June 2014

This paper is behind a paywall.

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories

Professor Wei Lu (whose work on memristors has been mentioned here a few times [an April 15, 2010 posting and an April 19, 2012 posting]) has made a discovery about memristors with significant implications (from a June 25, 2014 news item on Azonano),

In work that unmasks some of the magic behind memristors and “resistive random access memory,” or RRAM—cutting-edge computer components that combine logic and memory functions—researchers have shown that the metal particles in memristors don’t stay put as previously thought.

The findings have broad implications for the semiconductor industry and beyond. They show, for the first time, exactly how some memristors remember.

A June 24, 2014 University of Michigan news release, which originated the news item, includes Lu’s perspective on this discovery and more details about it,

“Most people have thought you can’t move metal particles in a solid material,” said Wei Lu, associate professor of electrical and computer engineering at the University of Michigan. “In a liquid and gas, it’s mobile and people understand that, but in a solid we don’t expect this behavior. This is the first time it has been shown.”

Lu, who led the project, and colleagues at U-M and the Electronic Research Centre Jülich in Germany used transmission electron microscopes to watch and record what happens to the atoms in the metal layer of their memristor when they exposed it to an electric field. The metal layer was encased in the dielectric material silicon dioxide, which is commonly used in the semiconductor industry to help route electricity.

They observed the metal atoms becoming charged ions, clustering with up to thousands of others into metal nanoparticles, and then migrating and forming a bridge between the electrodes at the opposite ends of the dielectric material.

They demonstrated this process with several metals, including silver and platinum. And depending on the materials involved and the electric current, the bridge formed in different ways.

The bridge, also called a conducting filament, stays put after the electrical power is turned off in the device. So when researchers turn the power back on, the bridge is there as a smooth pathway for current to travel along. Further, the electric field can be used to change the shape and size of the filament, or break the filament altogether, which in turn regulates the resistance of the device, or how easily current can flow through it.

Computers built with memristors would encode information in these different resistance values, which is in turn based on a different arrangement of conducting filaments.

Memristor researchers like Lu and his colleagues had theorized that the metal atoms in memristors moved, but previous results had yielded different shaped filaments and so they thought they hadn’t nailed down the underlying process.

“We succeeded in resolving the puzzle of apparently contradicting observations and in offering a predictive model accounting for materials and conditions,” said Ilia Valov, principal investigator at the Electronic Materials Research Centre Jülich. “Also the fact that we observed particle movement driven by electrochemical forces within dielectric matrix is in itself a sensation.”

The implications for this work (from the news release),

The results could lead to a new approach to chip design—one that involves using fine-tuned electrical signals to lay out integrated circuits after they’re fabricated. And it could also advance memristor technology, which promises smaller, faster, cheaper chips and computers inspired by biological brains in that they could perform many tasks at the same time.

As is becoming more common these days (from the news release),

Lu is a co-founder of Crossbar Inc., a Santa Clara, Calif.-based startup working to commercialize RRAM. Crossbar has just completed a $25 million Series C funding round.

Here’s a link to and a citation for the paper,

Electrochemical dynamics of nanoscale metallic inclusions in dielectrics by Yuchao Yang, Peng Gao, Linze Li, Xiaoqing Pan, Stefan Tappertzhofen, ShinHyun Choi, Rainer Waser, Ilia Valov, & Wei D. Lu. Nature Communications 5, Article number: 4232 doi:10.1038/ncomms5232 Published 23 June 2014

This paper is behind a paywall.

The other party instrumental in the development and, they hope, the commercialization of memristors is HP (Hewlett Packard) Laboratories (HP Labs). Anyone familiar with this blog will likely know I have frequently covered the topic starting with an essay explaining the basics on my Nanotech Mysteries wiki (or you can check this more extensive and more recently updated entry on Wikipedia) and with subsequent entries here over the years. The most recent entry is a Jan. 9, 2014 posting which featured the then latest information on the HP Labs memristor situation (scroll down about 50% of the way). This new information is more in the nature of a new revelation of details rather than an update on its status. Sebastian Anthony’s June 11, 2014 article for extremetech.com lays out the situation plainly (Note: Links have been removed),

HP, one of the original 800lb Silicon Valley gorillas that has seen much happier days, is staking everything on a brand new computer architecture that it calls… The Machine. Judging by an early report from Bloomberg Businessweek, up to 75% of HP’s once fairly illustrious R&D division — HP Labs – are working on The Machine. As you would expect, details of what will actually make The Machine a unique proposition are hard to come by, but it sounds like HP’s groundbreaking work on memristors (pictured top) and silicon photonics will play a key role.

First things first, we’re probably not talking about a consumer computing architecture here, though it’s possible that technologies commercialized by The Machine will percolate down to desktops and laptops. Basically, HP used to be a huge player in the workstation and server markets, with its own operating system and hardware architecture, much like Sun. Over the last 10 years though, Intel’s x86 architecture has rapidly taken over, to the point where HP (and Dell and IBM) are essentially just OEM resellers of commodity x86 servers. This has driven down enterprise profit margins — and when combined with its huge stake in the diminishing PC market, you can see why HP is rather nervous about the future. The Machine, and IBM’s OpenPower initiative, are both attempts to get out from underneath Intel’s x86 monopoly.

While exact details are hard to come by, it seems The Machine is predicated on the idea that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements. HP is working on two technologies that could solve both problems: Memristors could replace RAM and long-term flash storage, and silicon photonics could provide faster on- and off-motherboard buses. Memristors essentially combine the benefits of DRAM and flash storage in a single, hyper-fast, super-dense package. Silicon photonics is all about reducing optical transmission and reception to a scale that can be integrated into silicon chips (moving from electrical to optical would allow for much higher data rates and lower power consumption). Both technologies can be built using conventional fabrication techniques.

In a June 11, 2014 article by Ashlee Vance for Bloomberg Businessweek, the company’s CTO (Chief Technology Officer), Martin Fink provides new details,

That’s what they’re calling it at HP Labs: “the Machine.” It’s basically a brand-new type of computer architecture that HP’s engineers say will serve as a replacement for today’s designs, with a new operating system, a different type of memory, and superfast data transfer. The company says it will bring the Machine to market within the next few years or fall on its face trying. “We think we have no choice,” says Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday [June 11, 2014].

In my Jan. 9, 2014 posting there’s a quote from Martin Fink stating that 2018 would be earliest date for the company’s StoreServ arrays to be packed with 100TB Memristor drives (the Machine?). The company later clarified the comment by noting that it’s very difficult to set dates for new technology arrivals.

Vance shares what could be a stirring ‘origins’ story of sorts, provided the Machine is successful,

The Machine started to take shape two years ago, after Fink was named director of HP Labs. Assessing the company’s projects, he says, made it clear that HP was developing the needed components to create a better computing system. Among its research projects: a new form of memory known as memristors; and silicon photonics, the transfer of data inside a computer using light instead of copper wires. And its researchers have worked on operating systems including Windows, Linux, HP-UX, Tru64, and NonStop.

Fink and his colleagues decided to pitch HP Chief Executive Officer Meg Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

Here is the memristor making an appearance in Vance’s article,

HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits. At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.

New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. …

Peter Bright in his June 11, 2014 article for Ars Technica opens his article with a controversial statement (Note: Links have been removed),

In 2008, scientists at HP invented a fourth fundamental component to join the resistor, capacitor, and inductor: the memristor. [emphasis mine] Theorized back in 1971, memristors showed promise in computing as they can be used to both build logic gates, the building blocks of processors, and also act as long-term storage.

Whether or not the memristor is a fourth fundamental component has been a matter of some debate as you can see in this Memristor entry (section on Memristor definition and criticism) on Wikipedia.

Bright goes on to provide a 2016 delivery date for some type of memristor-based product and additional technical insight about the Machine,

… By 2016, the company plans to have memristor-based DIMMs, which will combine the high storage densities of hard disks with the high performance of traditional DRAM.

John Sontag, vice president of HP Systems Research, said that The Machine would use “electrons for processing, photons for communication, and ions for storage.” The electrons are found in conventional silicon processors, and the ions are found in the memristors. The photons are because the company wants to use optical interconnects in the system, built using silicon photonics technology. With silicon photonics, photons are generated on, and travel through, “circuits” etched onto silicon chips, enabling conventional chip manufacturing to construct optical parts. This allows the parts of the system using photons to be tightly integrated with the parts using electrons.

The memristor story has proved to be even more fascinating than I thought in 2008 and I was already as fascinated as could be, or so I thought.

Wacky oxide, biological synchronicity, and human brainlike computing

Research out of Pennsylvania State University (Penn State, US) has uncovered another approach to creating artificial brains (more about the other approaches later in this post), from a May 14, 2014 news item on Science Daily,

Current computing is based on binary logic — zeroes and ones — also called Boolean computing. A new type of computing architecture that stores information in the frequencies and phases of periodic signals could work more like the human brain to do computing using a fraction of the energy of today’s computers.

A May 14, 2014 Pennsylvania State University news release, which originated the news item, describes the research in more detail,

Vanadium dioxide (VO2) is called a “wacky oxide” because it transitions from a conducting metal to an insulating semiconductor and vice versa with the addition of a small amount of heat or electrical current. A device created by electrical engineers at Penn State uses a thin film of VO2 on a titanium dioxide substrate to create an oscillating switch. Using a standard electrical engineering trick, Nikhil Shukla, a Ph.D. student in the group of Professor Suman Datta and co-advised by Professor Roman Engel-Herbert at Penn State, added a series resistor to the oxide device to stabilize their oscillations over billions of cycles. When Shukla added a second similar oscillating system, he discovered that over time the two devices would begin to oscillate in unison. This coupled system could provide the basis for non-Boolean computing. The results are reported in the May 14 [2014] online issue of Nature Publishing Group’s Scientific Reports.

“It’s called a small-world network,” explained Shukla. “You see it in lots of biological systems, such as certain species of fireflies. The males will flash randomly, but then for some unknown reason the flashes synchronize over time.” The brain is also a small-world network of closely clustered nodes that evolved for more efficient information processing.

“Biological synchronization is everywhere,” added Datta, professor of electrical engineering at Penn State and formerly a Principal Engineer in the Advanced Transistor and Nanotechnology Group at Intel Corporation. “We wanted to use it for a different kind of computing called associative processing, which is an analog rather than digital way to compute.” An array of oscillators can store patterns, for instance, the color of someone’s hair, their height and skin texture. If a second area of oscillators has the same pattern, they will begin to synchronize, and the degree of match can be read out. “They are doing this sort of thing already digitally, but it consumes tons of energy and lots of transistors,” Datta said. Datta is collaborating with co-author and Professor of Computer Science and Engineering, Vijay Narayanan, in exploring the use of these coupled oscillations in solving visual recognition problems more efficiently than existing embedded vision processors as part of a National Science Foundation Expedition in Computing program.

Shukla and Datta called on the expertise of Cornell University materials scientist Darrell Schlom to make the VO2 thin film, which has extremely high quality similar to single crystal silicon. Georgia Tech computer engineer Arijit Raychowdhury and graduate student Abhinav Parihar mathematically simulated the nonlinear dynamics of coupled phase transitions in the VO2 devices. Parihar created a short video simulation of the transitions, which occur at a rate close to a million times per second, to show the way the oscillations synchronize. Penn State professor of materials science and engineering Venkatraman Gopalan used the Advanced Photon Source at Argonne National Laboratory to visually characterize the structural changes occurring in the oxide thin film in the midst of the oscillations.

Datta believes it will take seven to ten years to scale up from their current network of two-three coupled oscillators to the 100 million or so closely packed oscillators required to make a neuromorphic computer chip. One of the benefits of the novel device is that it will use only about one percent of the energy of digital computing, allowing for new ways to design computers. Much work remains to determine if VO2 can be integrated into current silicon wafer technology. “It’s a fundamental building block for a different computing paradigm that is analog rather than digital,” Shukla concluded.
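
The synchronization the Penn State team exploits is the same phenomenon captured by the textbook Kuramoto model of coupled oscillators: each oscillator nudges the other’s phase, and above a critical coupling strength two slightly mismatched oscillators lock together. Here’s a two-oscillator sketch of that generic model, not of the VO2 device physics:

```python
import math

# Two coupled phase oscillators (the classic Kuramoto model). With enough
# coupling, slightly detuned oscillators still phase-lock, which is the
# synchronization effect described above. This is the generic textbook
# model, not the VO2 relaxation-oscillator physics of the Penn State work.
w1, w2 = 1.00, 1.05        # natural frequencies (slightly mismatched)
K = 0.2                    # coupling strength; try K = 0.01 to see drift
theta1, theta2 = 0.0, 2.0  # arbitrary initial phases
dt = 0.01

for _ in range(50_000):
    d1 = w1 + K * math.sin(theta2 - theta1)
    d2 = w2 + K * math.sin(theta1 - theta2)
    theta1 += d1 * dt
    theta2 += d2 * dt

diff = (theta2 - theta1) % (2 * math.pi)
print(f"steady phase difference: {diff:.3f} rad")
# locks near arcsin((w2 - w1) / (2 * K)) ~ 0.125 rad when |w2 - w1| <= 2K
```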

There are two papers being published about this work,

Synchronizing a single-electron shuttle to an external drive by Michael J Moeckel, Darren R Southworth, Eva M Weig, and Florian Marquardt. New J. Phys. 16 043009 doi:10.1088/1367-2630/16/4/043009

Synchronized charge oscillations in correlated electron systems by Nikhil Shukla, Abhinav Parihar, Eugene Freeman, Hanjong Paik, Greg Stone, Vijaykrishnan Narayanan, Haidan Wen, Zhonghou Cai, Venkatraman Gopalan, Roman Engel-Herbert, Darrell G. Schlom, Arijit Raychowdhury & Suman Datta. Scientific Reports 4, Article number: 4964 doi:10.1038/srep04964 Published 14 May 2014

Both articles are open access.

Finally, the researchers have provided a video animation illustrating their vanadium dioxide switches in action,

As noted earlier, there are other approaches to creating an artificial brain, i.e., neuromorphic engineering. My April 7, 2014 posting is the most recent synopsis posted here; it includes excerpts from a Nanowerk Spotlight article overview along with a mention of the ‘brain jelly’ approach and a discussion of my somewhat extensive coverage of memristors and a mention of work on nanoionic devices. There is also a published roadmap to neuromorphic engineering featuring both analog and digital devices, mentioned in my April 18, 2014 posting.

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”
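
To make the “field-programmable” idea concrete, here’s a toy software model in Python; it is entirely hypothetical and far simpler than Hasler’s actual FPAA architecture, but it captures the key point that re-routing signals among a fixed set of analog blocks changes the computation without changing the hardware,

import numpy as np

# Toy model (hypothetical, not Hasler's architecture): a routing table
# selects which fixed analog blocks a signal passes through, and in what
# order. Reprogramming the routing changes the computation.

def gain(x, g=2.0):
    # amplifier block, illustrative
    return g * x

def lowpass(x, alpha=0.1):
    # one-pole low-pass filter block, illustrative
    y, out = 0.0, []
    for sample in x:
        y += alpha * (sample - y)
        out.append(y)
    return np.array(out)

BLOCKS = {"gain": gain, "lowpass": lowpass}

def run_fpaa(signal, routing):
    # apply the named blocks in order, the analog counterpart of
    # loading a new bitstream into an FPGA
    for name in routing:
        signal = BLOCKS[name](signal)
    return signal

noisy = np.sin(np.linspace(0, 10, 200)) + 0.3 * np.random.default_rng(1).standard_normal(200)
filtered_then_amplified = run_fpaa(noisy, ["lowpass", "gain"])
amplified_then_filtered = run_fpaa(noisy, ["gain", "lowpass"])  # "reconfigured"

Reconfiguring an FPAA means changing the routing rather than fabricating a new chip, just as loading a new bitstream reconfigures an FPGA.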

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
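
The arithmetic in that first quoted paragraph is easy to check. Here’s a quick back-of-envelope verification in Python, using only the figures quoted above (exponents as reconstructed),

# Back-of-envelope check of the roadmap figures quoted above; all inputs
# come from the quoted passage.
neurons = 1e12                    # neurons in the brain
brain_power_w = 20.0              # watts
print(brain_power_w / neurons)    # 2e-11 W = 20 pW per neuron, as quoted

total_mac_per_s = 100_000 * 1e15  # 100,000 PMAC, in MAC/s
mac_per_s_per_w = total_mac_per_s / brain_power_w
print(mac_per_s_per_w / 1e15)     # 5000 PMAC/W, i.e., 5 TMAC/uW, as quoted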

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering, the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).
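
For context on why crossbar arrays of memristors keep coming up in this field: a grid of programmable conductances computes a vector-matrix multiply, which is to say one full layer of synaptic weights, in a single analog step through Ohm’s and Kirchhoff’s laws. Here’s a minimal sketch; the 30 × 30 size echoes the array mentioned in the quote, but the values are illustrative and this is not a model of the University of Michigan device,

import numpy as np

# Minimal sketch: a memristor crossbar as an analog vector-matrix multiply.
# Rows carry input voltages, columns collect currents, and each crosspoint
# conductance G[i, j] acts as one synaptic weight. Values are illustrative.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(30, 30))  # conductances (siemens)
V = rng.uniform(0.0, 0.5, size=30)          # input voltages (volts)

# Kirchhoff's current law on each column: I_j = sum_i V_i * G[i, j].
# The multiply-accumulate happens in the physics, not in logic gates.
I = V @ G
print(I[:5])  # 30 output currents, one weighted sum per column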

I notice that the reference to the University of Michigan is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
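
To make the quoted power arithmetic concrete, here’s a rough sketch of the kind of estimate being described. Only the 10 TMAC/mW efficiency figure comes from the quote; the neuron count, synapse count, and event rate are hypothetical placeholders I’ve chosen for illustration,

# Rough estimate in the spirit of the quoted passage. Only the 10 TMAC/mW
# efficiency is taken from the quote; all other figures are hypothetical.
neurons = 1e6                   # hypothetical chip
synapses_per_neuron = 1_000     # hypothetical
events_per_neuron_s = 10.0      # hypothetical on-chip event rate

mac_per_s = neurons * synapses_per_neuron * events_per_neuron_s
mac_per_s_per_w = 10e12 * 1e3   # 10 TMAC/mW expressed per watt
power_w = mac_per_s / mac_per_s_per_w
print(f"{power_w * 1e6:.1f} microwatts for the synaptic computation")  # ~1.0

At numbers like these, the synaptic computation lands in the microwatt range, which is why the authors argue that purely digital techniques are hard-pressed to compete at a given size, weight, and power budget.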

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room, because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’: has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.