Tag Archives: neuromorphic engineering

US white paper on neuromorphic computing (or the nanotechnology-inspired Grand Challenge for future computing)

The US has embarked on a number of what is called “Grand Challenges.” I first came across the concept when reading about the Bill and Melinda Gates (of Microsoft fame) Foundation. I gather these challenges are intended to provide funding for research that advances bold visions.

There is the US National Strategic Computing Initiative established on July 29, 2015 and its first anniversary results were announced one year to the day later. Within that initiative a nanotechnology-inspired Grand Challenge for Future Computing was issued and, according to a July 29, 2016 news item on Nanowerk, a white paper on the topic has been issued (Note: A link has been removed),

Today [July 29, 2016], Federal agencies participating in the National Nanotechnology Initiative (NNI) released a white paper (pdf) describing the collective Federal vision for the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing.

The grand challenge, announced on October 20, 2015, is to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” The white paper describes the technical priorities shared by the agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the research and development (R&D) needed to achieve key technical goals. By coordinating and collaborating across multiple levels of government, industry, academia, and nonprofit organizations, the nanotechnology and computer science communities can look beyond the decades-old approach to computing based on the von Neumann architecture and chart a new path that will continue the rapid pace of innovation beyond the next decade.

A July 29, 2016 US National Nanotechnology Coordination Office news release, which originated the news item, further and succinctly describes the contents of the paper,

“Materials and devices for computing have been and will continue to be a key application domain in the field of nanotechnology. As evident by the R&D topics highlighted in the white paper, this challenge will require the convergence of nanotechnology, neuroscience, and computer science to create a whole new paradigm for low-power computing with revolutionary, brain-like capabilities,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office. …

The white paper was produced as a collaboration by technical staff at the Department of Energy, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, and the Intelligence Community. …

The white paper titled “A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge” is 15 pp. and it offers tidbits such as this (Note: Footnotes not included),

A new materials base may be needed for future electronic hardware. While most of today’s electronics use silicon, this approach is unsustainable if billions of disposable and short-lived sensor nodes are needed for the coming Internet-of-Things (IoT). To what extent can the materials base for the implementation of future information technology (IT) components and systems support sustainability through recycling and bio-degradability? More sustainable materials, such as compostable or biodegradable systems (polymers, paper, etc.) that can be recycled or reused, may play an important role. The potential role for such alternative materials in the fabrication of integrated systems needs to be explored as well. [p. 5]

The basic architecture of computers today is essentially the same as those built in the 1940s—the von Neumann architecture—with separate compute, high-speed memory, and high-density storage components that are electronically interconnected. However, it is well known that continued performance increases using this architecture are not feasible in the long term, with power density constraints being one of the fundamental roadblocks.7 Further advances in the current approach using multiple cores, chip multiprocessors, and associated architectures are plagued by challenges in software and programming models. Thus, research and development is required in radically new and different computing architectures involving processors, memory, input-output devices, and how they behave and are interconnected. [p. 7]

Neuroscience research suggests that the brain is a complex, high-performance computing system with low energy consumption and incredible parallelism. A highly plastic and flexible organ, the human brain is able to grow new neurons, synapses, and connections to cope with an ever-changing environment. Energy efficiency, growth, and flexibility occur at all scales, from molecular to cellular, and allow the brain, from early to late stage, to never stop learning and to act with proactive intelligence in both familiar and novel situations. Understanding how these mechanisms work and cooperate within and across scales has the potential to offer tremendous technical insights and novel engineering frameworks for materials, devices, and systems seeking to perform efficient and autonomous computing. This research focus area is the most synergistic with the national BRAIN Initiative. However, unlike the BRAIN Initiative, where the goal is to map the network connectivity of the brain, the objective here is to understand the nature, methods, and mechanisms for computation, and how the brain performs some of its tasks. Even within this broad paradigm, one can loosely distinguish between neuromorphic computing and artificial neural network (ANN) approaches. The goal of neuromorphic computing is oriented towards a hardware approach to reverse engineering the computational architecture of the brain. On the other hand, ANNs include algorithmic approaches arising from machine learning, which in turn could leverage advancements and understanding in neuroscience as well as novel cognitive, mathematical, and statistical techniques. Indeed, the ultimate intelligent systems may well be the result of merging existing ANN (e.g., deep learning) and bio-inspired techniques. [p. 8]

As government documents go, this is quite readable.

For anyone interested in learning more about the future federal plans for computing in the US, there is a July 29, 2016 posting on the White House blog celebrating the first year of the US National Strategic Computing Initiative Strategic Plan (29 pp. PDF; awkward but that is the title).

Memory material with functions resembling synapses and neurons in the brain

This work comes from the University of Twente’s MESA+ Institute for Nanotechnology according to a July 8, 2016 news item on ScienceDaily,

Our brain does not work like a typical computer memory storing just ones and zeroes: thanks to a much larger variation in memory states, it can calculate faster while consuming less energy. Scientists of the MESA+ Institute for Nanotechnology of the University of Twente (The Netherlands) have now developed a ferro-electric material with a memory function resembling synapses and neurons in the brain, resulting in a multistate memory. …

A July 8, 2016 University of Twente press release, which originated the news item, provides more technical detail,

The material that could be the basic building block for ‘brain-inspired computing’ is lead-zirconium-titanate (PZT): a sandwich of materials with several attractive properties. One of them is that it is ferro-electric: you can switch it to a desired state, and this state remains stable after the electric field is gone. This is called polarization, and it leads to a fast memory function that is non-volatile. Combined with processor chips, a computer could be designed that starts much faster, for example. The UT scientists have now added a thin layer of zinc oxide, 25 nanometers thick, to the PZT. They discovered that switching from one state to another not only happens from ‘zero’ to ‘one’ and vice versa. It is possible to control smaller areas within the crystal: will they be polarized (‘flip’) or not?

In a PZT layer without zinc oxide (ZnO) there are basically two memory states. Adding a nano layer of ZnO, every state in between is possible as well.


By using variable writing times in those smaller areas, many states can be stored anywhere between zero and one. This resembles the way synapses and neurons ‘weigh’ signals in our brain. Multistate memories, coupled to transistors, could drastically improve the speed of pattern recognition, for example: our brain performs these kinds of tasks consuming only a fraction of the energy a computer system needs. Looking at the graphs, the writing times seem quite long compared to today’s processor speeds, but it is possible to create many memories in parallel. The function of the brain has already been mimicked in software such as neural networks, but in that case conventional digital hardware is still a limitation. The new material is a first step towards electronic hardware with a brain-like memory. Finding solutions for combining PZT with semiconductors, or even developing new kinds of semiconductors for this, is one of the next steps.
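The variable-writing-time idea above lends itself to a toy model: each write pulse nudges a normalized polarization toward saturation, so pulse duration selects an intermediate state. The exponential saturation curve and time constant below are illustrative assumptions, not the actual PZT/ZnO physics.

```python
# Toy model of a multistate ferroelectric memory cell.
# Assumption: polarization approaches saturation exponentially with
# accumulated write time (illustrative, not the measured device behavior).
import math

class MultistateCell:
    def __init__(self, tau=1.0):
        self.tau = tau      # characteristic write time (arbitrary units)
        self.state = 0.0    # normalized polarization in [0, 1]

    def write(self, duration):
        """A longer pulse moves the state further toward 1."""
        self.state = 1.0 - (1.0 - self.state) * math.exp(-duration / self.tau)

    def erase(self):
        self.state = 0.0

cell = MultistateCell()
levels = []
for t in (0.1, 0.5, 1.0, 3.0):
    cell.erase()
    cell.write(t)
    levels.append(round(cell.state, 3))
print(levels)   # four distinct intermediate states, increasing with write time
```

The point of the sketch is only that write duration, not just write polarity, becomes an addressable degree of freedom, which is what turns a binary cell into a multistate one.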

Here’s a link to and a citation for the paper,

Multistability in Bistable Ferroelectric Materials toward Adaptive Applications by Anirban Ghosh, Gertjan Koster, and Guus Rijnders. Advanced Functional Materials DOI: 10.1002/adfm.201601353 Version of Record online: 4 JUL 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Artificial synapse rivals biological synapse in energy consumption

How can we make computers more like biological brains, which do so much work and use so little power? It’s a question scientists from many countries are trying to answer, and it seems South Korean scientists are proposing an answer. From a June 20, 2016 news item on Nanowerk,

Creation of an artificial intelligence system that fully emulates the functions of a human brain has long been a dream of scientists. A brain has many functions superior to those of supercomputers, even though it is light, small, and consumes extremely little energy. Constructing an artificial neural network requires a huge number (~10^14) of synapses.

Most recently, great efforts have been made to realize synaptic functions in single electronic devices, such as using resistive random access memory (RRAM), phase change memory (PCM), conductive bridges, and synaptic transistors. Artificial synapses based on highly aligned nanostructures are still desired for the construction of a highly-integrated artificial neural network.

Prof. Tae-Woo Lee, research professor Wentao Xu, and Dr. Sung-Yong Min with the Dept. of Materials Science and Engineering at POSTECH [Pohang University of Science & Technology, South Korea] have succeeded in fabricating an organic nanofiber (ONF) electronic device that emulates not only the important working principles and energy consumption of biological synapses but also the morphology. …

A June 20, 2016 Pohang University of Science & Technology (POSTECH) news release on EurekAlert, which originated the news item, describes the work in more detail,

The morphology of ONFs is very similar to that of nerve fibers, which form crisscrossing grids to enable the high memory density of a human brain. Especially, based on the e-Nanowire printing technique, highly-aligned ONFs can be massively produced with precise control over alignment and dimension. This morphology potentially enables the future construction of high-density memory of a neuromorphic system.

Important working principles of a biological synapse have been emulated, such as paired-pulse facilitation (PPF), short-term plasticity (STP), long-term plasticity (LTP), spike-timing dependent plasticity (STDP), and spike-rate dependent plasticity (SRDP). Most amazingly, energy consumption of the device can be reduced to a femtojoule level per synaptic event, a value orders of magnitude lower than in previous reports and one that rivals that of a biological synapse. In addition, the organic artificial synapse devices not only provide a new research direction in neuromorphic electronics but even open a new era of organic electronics.
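Paired-pulse facilitation, the first behavior in that list, can be illustrated with a minimal model: each pulse leaves an exponentially decaying residual that boosts the response to a closely following pulse. The time constant and amplitude here are illustrative, not measured values from the POSTECH device.

```python
# Minimal paired-pulse facilitation (PPF) model: the response to a
# second pulse is enhanced if it arrives before the residual effect
# of the first pulse has decayed. Constants are illustrative only.
import math

def ppf_ratio(interval, tau=50.0, boost=0.8):
    """Ratio of second-pulse response to first-pulse response.

    interval : time between the two pulses (hypothetical ms units)
    tau      : decay constant of the residual facilitation
    boost    : maximum fractional enhancement at zero interval
    """
    return 1.0 + boost * math.exp(-interval / tau)

for dt in (10, 50, 200):
    print(f"interval {dt:>3} ms -> PPF ratio {ppf_ratio(dt):.3f}")
```

Closely spaced pulses yield a ratio well above 1, and the facilitation fades as the interval grows, which is the signature the researchers look for when they say a device "emulates PPF".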

This technology will lead to the leap of brain-inspired electronics in both memory density and energy consumption aspects. The artificial synapse developed by Prof. Lee’s research team will provide important potential applications to neuromorphic computing systems and artificial intelligence systems for autonomous cars (or self-driving cars), analysis of big data, cognitive systems, robot control, medical diagnosis, stock trading analysis, remote sensing, and other smart human-interactive systems and machines in the future.

Here’s a link to and a citation for the paper,

Organic core-sheath nanowire artificial synapses with femtojoule energy consumption by Wentao Xu, Sung-Yong Min, Hyunsang Hwang, and Tae-Woo Lee. Science Advances  17 Jun 2016: Vol. 2, no. 6, e1501326 DOI: 10.1126/sciadv.1501326

This paper is open access.

X-rays reveal memristor workings

A June 14, 2016 news item on ScienceDaily focuses on memristors. (It’s been about two months since my last memristor posting on April 22, 2016 regarding electronic synapses and neural networks). This piece announces new insight into how memristors function at the atomic scale,

In experiments at two Department of Energy national labs — SLAC National Accelerator Laboratory and Lawrence Berkeley National Laboratory — scientists at Hewlett Packard Enterprise (HPE) [also referred to as HP Labs or Hewlett Packard Laboratories] have experimentally confirmed critical aspects of how a new type of microelectronic device, the memristor, works at an atomic scale.

This result is an important step in designing these solid-state devices for use in future computer memories that operate much faster, last longer and use less energy than today’s flash memory. …

“We need information like this to be able to design memristors that will succeed commercially,” said Suhas Kumar, an HPE scientist and first author on the group’s technical paper.

A June 13, 2016 SLAC news release, which originated the news item, offers a brief history according to HPE and provides details about the latest work,

The memristor was proposed theoretically [by Dr. Leon Chua] in 1971 as the fourth basic electrical device element alongside the resistor, capacitor and inductor. At its heart is a tiny piece of a transition metal oxide sandwiched between two electrodes. Applying a positive or negative voltage pulse dramatically increases or decreases the memristor’s electrical resistance. This behavior makes it suitable for use as a “non-volatile” computer memory that, like flash memory, can retain its state without being refreshed with additional power.
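The defining property, a resistance set by the charge that has already flowed through the device, can be sketched with the familiar linear ion-drift memristor model. The resistances and the drift coefficient below are arbitrary illustrative values, not HPE's device parameters.

```python
# Linear ion-drift memristor sketch: resistance is a mix of a doped
# (low-R) and undoped (high-R) region, and the boundary between them
# moves in proportion to the current that flows. Illustrative values.
R_ON, R_OFF = 100.0, 16_000.0   # ohms (arbitrary)
dt = 1e-4                        # time step (s)

def simulate(voltage_fn, steps=1000, w=0.1):
    """w is the normalized doped-region width in [0, 1]."""
    history = []
    for n in range(steps):
        v = voltage_fn(n * dt)
        r = R_ON * w + R_OFF * (1.0 - w)
        i = v / r
        # the 1e6 factor folds ion mobility and film thickness into
        # one illustrative drift constant
        w = min(1.0, max(0.0, w + 1e6 * i * dt))
        history.append((v, i, r))
    return history

# A sustained positive bias drives the device into its low-resistance state.
run = simulate(lambda t: 1.0)
print(f"R start: {run[0][2]:.0f} ohm, R end: {run[-1][2]:.0f} ohm")
```

The same loop run with a negative bias would drift the boundary back, which is the "retains its state without power, switches under a voltage pulse" behavior the press release describes.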

Over the past decade, an HPE group led by senior fellow R. Stanley Williams has explored memristor designs, materials and behavior in detail. Since 2009 they have used intense synchrotron X-rays to reveal the movements of atoms in memristors during switching. Despite advances in understanding the nature of this switching, critical details that would be important in designing commercially successful circuits remained controversial. For example, the forces that move the atoms, resulting in dramatic resistance changes during switching, remain under debate.

In recent years, the group examined memristors made with oxides of titanium, tantalum and vanadium. Initial experiments revealed that switching in the tantalum oxide devices could be controlled most easily, so it was chosen for further exploration at two DOE Office of Science User Facilities – SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and Berkeley Lab’s Advanced Light Source (ALS).

At ALS, the HPE researchers mapped the positions of oxygen atoms before and after switching. For this, they used a scanning transmission X-ray microscope and an apparatus they built to precisely control the position of their sample and the timing and intensity of the 500-electronvolt ALS X-rays, which were tuned to see oxygen.

The experiments revealed that even weak voltage pulses create a thin conductive path through the memristor. During the pulse the path heats up, which creates a force that pushes oxygen atoms away from the path, making it even more conductive. Reversing the voltage pulse resets the memristor by sucking some of the oxygen atoms back into the conducting path, thereby increasing the device’s resistance. The memristor’s resistance changes between 10-fold and 1 million-fold, depending on operating parameters like the voltage-pulse amplitude. This resistance change is dramatic enough to exploit commercially.

To be sure of their conclusion, the researchers also needed to understand if the tantalum atoms were moving along with the oxygen during switching. Imaging tantalum required higher-energy, 10,000-electronvolt X-rays, which they obtained at SSRL’s Beam Line 6-2. In a single session there, they determined that the tantalum remained stationary.

“That sealed the deal, convincing us that our hypothesis was correct,” said HPE scientist Catherine Graves, who had worked at SSRL as a Stanford graduate student. She added that discussions with SLAC experts were critical in guiding the HPE team toward the X-ray techniques that would allow them to see the tantalum accurately.

Kumar said the most promising aspect of the tantalum oxide results was that the scientists saw no degradation in switching over more than a billion voltage pulses of a magnitude suitable for commercial use. He added that this knowledge helped his group build memristors that lasted nearly a billion switching cycles, about a thousand-fold improvement.

“This is much longer endurance than is possible with today’s flash memory devices,” Kumar said. “In addition, we also used much higher voltage pulses to accelerate and observe memristor failures, which is also important in understanding how these devices work. Failures occurred when oxygen atoms were forced so far away that they did not return to their initial positions.”

Beyond memory chips, Kumar says memristors’ rapid switching speed and small size could make them suitable for use in logic circuits. Additional memristor characteristics may also be beneficial in the emerging class of brain-inspired neuromorphic computing circuits.

“Transistors are big and bulky compared to memristors,” he said. “Memristors are also much better suited for creating the neuron-like voltage spikes that characterize neuromorphic circuits.”

The researchers have provided an animation illustrating how memristors can fail,

This animation shows how millions of high-voltage switching cycles can cause memristors to fail. The high-voltage switching eventually creates regions that are permanently rich (blue pits) or deficient (red peaks) in oxygen and cannot be switched back. Switching at lower voltages that would be suitable for commercial devices did not show this performance degradation. These observations allowed the researchers to develop materials processing and operating conditions that improved the memristors’ endurance by nearly a thousand times. (Suhas Kumar) Courtesy: SLAC

Here’s a link to and a citation for the paper,

Direct Observation of Localized Radial Oxygen Migration in Functioning Tantalum Oxide Memristors by Suhas Kumar, Catherine E. Graves, John Paul Strachan, Emmanuelle Merced Grafals, Arthur L. David Kilcoyne, Tolek Tyliszczak, Johanna Nelson Weker, Yoshio Nishi, and R. Stanley Williams. Advanced Materials, Volume 28, Issue 14, April 13, 2016, pages 2772–2776. First published online: 2 February 2016. DOI: 10.1002/adma.201505435

This paper is behind a paywall.

Some of the ‘memristor story’ is contested and you can find a brief overview of the discussion in this Wikipedia memristor entry in the section on ‘definition and criticism’. There is also a history of the memristor which dates back to the 19th century featured in my May 22, 2012 posting.

Memristor-based electronic synapses for neural networks

Caption: Neuron connections in biological neural networks. Credit: MIPT press office

Russian scientists have recently published a paper about neural networks and electronic synapses based on ‘thin film’ memristors according to an April 19, 2016 news item on Nanowerk,

A team of scientists from the Moscow Institute of Physics and Technology (MIPT) have created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.

An April 20, 2016 MIPT press release (also on EurekAlert), which originated the news item (the date inconsistency likely due to timezone differences) explains the connection between thin films and memristors,

The group of researchers from MIPT have made HfO2-based memristors measuring just 40×40 nm2. The nanostructures they built exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similar to biological neural networks.

Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.

“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similar to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.

The press release offers a description of biological synapses and their relationship to learning and memory,

A synapse is a point of connection between neurons, the main function of which is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective both in terms of speed and energy consumption in solving a large range of tasks, such as image / voice recognition, etc.

Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.

From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.
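That "intermediate conductivity equals synaptic weight" mapping is exactly what memristive hardware exploits: by Ohm's and Kirchhoff's laws, the current collected on a shared output line is the dot product of conductances and input voltages. A minimal sketch, with made-up values:

```python
# A memristor crossbar column as a weighted sum: the current on a
# shared output line is sum(G_i * V_i), i.e. a dot product of
# conductances (synaptic weights) and inputs. Values are illustrative.
conductances = [0.2, 0.9, 0.5]    # one analog "weight" per synapse
inputs = [1.0, 0.0, 1.0]          # input voltages (spiking = 1, silent = 0)

output_current = sum(g * v for g, v in zip(conductances, inputs))
print(output_current)   # 0.7, the analog weighted response
```

Because each conductance can sit anywhere in a continuous range rather than only at "on" or "off", a single device stores the intermediate weight that the press release says a synapse-like element must have.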

The researchers have provided an illustration of a biological synapse,

Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office

Now, the press release ties the memristor information together with the biological synapse information to describe the new work at the MIPT,

As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.

There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.

“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.

The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.

The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the value of the connection between neurons on the relative time taken for them to be “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.

To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).

Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office

These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.
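The spike-timing dependence demonstrated above is conventionally modeled with an exponential STDP window, where the weight change depends on the sign and size of the pre/post spike-time difference. The constants below are common textbook choices, not MIPT's measured values.

```python
# Standard exponential STDP window: if the presynaptic spike precedes
# the postsynaptic one (dt > 0) the connection is potentiated,
# otherwise it is depressed. Textbook-style constants, illustrative.
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)    # potentiation (LTP)
    return -a_minus * math.exp(dt_ms / tau)       # depression (LTD)

for dt in (-40, -5, 5, 40):
    print(f"dt = {dt:+} ms -> dw = {stdp_dw(dt):+.4f}")
```

A memristor "demonstrates STDP" when the measured conductance change, plotted against the pulse-timing difference, traces out a curve of this shape.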

“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.

Here’s a link to and a citation for the paper,

Crossbar Nanoscale HfO2-Based Electronic Synapses by Yury Matveyev, Roman Kirtaev, Alena Fetisova, Sergey Zakharchenko, Dmitry Negrov and Andrey Zenkevich. Nanoscale Research Letters 2016, 11:147. DOI: 10.1186/s11671-016-1360-6

Published: 15 March 2016

This is an open access paper.

Indian researchers establish a multiplex number to identify efficiency of multilevel resistive switching devices

There’s a Feb. 1, 2016 Nanowerk Spotlight article by Dr. Abhay Sagade of Cambridge University (UK) about defining efficiency in memristive devices,

In a recent study, researchers at the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bangalore, India, have defined a new figure-of-merit to identify the efficiency of resistive switching devices with multiple memory states. The research was carried out in collaboration with the Indian Institute of Technology Madras (IITM), Chennai, and financially supported by Department of Science and Technology, New Delhi.

The scientists identified the versatility of palladium oxide (PdO) as a novel resistive switching material for use in resistive memory devices. Due to the availability to switch multiple redox states in the PdO system, researchers have controlled it by applying different amplitudes of voltage pulses.

To date, many materials have shown multiple memory states but there have been no efforts to define the ability of the fabricated device to switch between all possible memory states.

In the present report, the authors have coined the term “multiplex number (M)” to quantify the performance of a multiple memory switching device.

For the PdO MRS device with five memory states, the multiplex number is found to be 5.7, which translates to 70% efficiency in switching. This is the highest value of M observed in any multiple memory device.

As multilevel resistive switching devices are expected to have great significance in futuristic brain-like memory devices [neuromorphic engineering products], the definition of their efficiency will provide a boost to the field. The number M will assist researchers as well as technologists in classifying and deciding the true merit of their memory devices.

Here’s a link to and a citation for the paper Sagade is discussing,

Defining Switching Efficiency of Multilevel Resistive Memory with PdO as an Example by K. D. M. Rao, Abhay A. Sagade, Robin John, T. Pradeep and G. U. Kulkarni. Advanced Electronic Materials Volume 2, Issue 2, February 2016 DOI: 10.1002/aelm.201500286

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

Plastic memristors for neural networks

There is a very nice explanation of memristors and computing systems from the Moscow Institute of Physics and Technology (MIPT). First their announcement, from a Jan. 27, 2016 news item on ScienceDaily,

A group of scientists has created a neural network based on polymeric memristors — devices that can potentially be used to build fundamentally new computers. These developments will primarily help in creating technologies for machine vision, hearing, and other machine sensory systems, and also for intelligent control systems in various fields of applications, including autonomous robots.

The authors of the new study focused on a promising area in the field of memristive neural networks – polymer-based memristors – and discovered that creating even the simplest perceptron is not that easy. In fact, it is so difficult that up until the publication of their paper in the journal Organic Electronics, there were no reports of any successful experiments (using organic materials). The experiments conducted at the Nano-, Bio-, Information and Cognitive Sciences and Technologies (NBIC) centre at the Kurchatov Institute by a joint team of Russian and Italian scientists demonstrated that it is possible to create very simple polyaniline-based neural networks. Furthermore, these networks are able to learn and perform specified logical operations.

A Jan. 27, 2016 MIPT press release on EurekAlert, which originated the news item, offers an explanation of memristors and a description of the research,

A memristor is an electric element similar to a conventional resistor. The difference between a memristor and a traditional element is that the electric resistance in a memristor is dependent on the charge passing through it, therefore it constantly changes its properties under the influence of an external signal: a memristor has a memory and at the same time is also able to change data encoded by its resistance state! In this sense, a memristor is similar to a synapse – a connection between two neurons in the brain that is able, with a high level of plasticity, to modify the efficiency of signal transmission between neurons under the influence of the transmission itself. A memristor enables scientists to build a “true” neural network, and the physical properties of memristors mean that at the very minimum they can be made as small as conventional chips.

Some estimates indicate that the size of a memristor can be reduced down to ten nanometers, and the technologies used in the manufacture of the experimental prototypes could, in theory, be scaled up to the level of mass production. However, as this is “in theory”, it does not mean that chips of a fundamentally new structure with neural networks will be available on the market any time soon, even in the next five years.

The plastic polyaniline was not chosen by chance. Previous studies demonstrated that it can be used to create individual memristors, so the scientists did not have to go through many different materials. Using a polyaniline solution, a glass substrate, and chromium electrodes, they created a prototype with dimensions that, at present, are much larger than those typically used in conventional microelectronics: the strip of the structure was approximately one millimeter wide (they decided to avoid miniaturization for the moment). All of the memristors were tested for their electrical characteristics: it was found that the current-voltage characteristic of the devices is in fact non-linear, which is in line with expectations. The memristors were then connected to a single neuromorphic network.

A current-voltage characteristic (or IV curve) is a graph where the horizontal axis represents voltage and the vertical axis the current. For a conventional resistor, the IV curve is a straight line; in strict accordance with Ohm’s Law, current is proportional to voltage. For a memristor, however, it is not just the voltage that is important, but the change in voltage: if you begin to gradually increase the voltage supplied to the memristor, the current passing through it will increase not in a linear fashion, but with a sharp bend in the graph; at a certain point its resistance will fall sharply.

Then if you begin to reduce the voltage, the memristor will remain in its conducting state for some time, after which it will change its properties rather sharply again to decrease its conductivity. Experimental samples with a voltage increase of 0.5V hardly allowed any current to pass through (around a few tenths of a microamp), but when the voltage was reduced by the same amount, the ammeter registered a figure of 5 microamps. Microamps are of course very small units, but in this case it is the contrast that is most significant: 0.1 μA to 5 μA is a difference of fifty times! This is more than enough to make a clear distinction between the two signals.
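The history-dependent behaviour described above can be sketched with a toy charge-controlled memristor model. The parameter values and the state-update rule below are invented for illustration, not taken from the Kurchatov/MIPT devices; only the qualitative effect, different currents at the same voltage on the up and down sweeps, mirrors what the press release describes.

```python
# Toy charge-controlled memristor: resistance depends on how much charge
# has already passed through, so the current at a given voltage depends
# on history. All parameter values are illustrative, not device data.
R_ON, R_OFF = 1e3, 1e6   # fully conducting / fully resistive limits (ohms)
MU = 1e4                 # invented constant coupling passed charge to state change

def sweep(voltages, w=0.0):
    """Apply a voltage sequence; return currents and the final state w."""
    currents = []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)    # resistance set by state w in [0, 1]
        i = v / r
        w = min(1.0, max(0.0, w + MU * i))  # passing charge shifts the state
        currents.append(i)
    return currents, w

# Triangular sweep: 0 V up to 1 V, then back down, in 10 mV steps
up = [k * 0.01 for k in range(101)]
i_up, w_mid = sweep(up)
i_down, _ = sweep(list(reversed(up)), w_mid)

# Same 0.5 V point, very different currents on the way up vs the way down
print(i_up[50], i_down[50])
```

At 0.5 V the down-sweep current comes out orders of magnitude larger than the up-sweep current, the same kind of contrast as the 0.1 µA versus 5 µA readings reported for the experimental samples.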

After checking the basic properties of individual memristors, the physicists conducted experiments to train the neural network. The training (it is a generally accepted term and is therefore written without inverted commas) involves applying electric pulses at random to the inputs of a perceptron. If a certain combination of electric pulses is applied to the inputs of a perceptron (e.g. a logic one and a logic zero at two inputs) and the perceptron gives the wrong answer, a special correcting pulse is applied to it, and after a certain number of repetitions all the internal parameters of the device (namely memristive resistance) reconfigure themselves, i.e. they are “trained” to give the correct answer.

The scientists demonstrated that after about a dozen attempts their new memristive network is capable of performing NAND logical operations, and then it is also able to learn to perform NOR operations. Since it is an operator or a conventional computer that is used to check for the correct answer, this method is called the supervised learning method.
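The training procedure described above maps directly onto the classic perceptron error-correction rule: random input patterns are applied, and a corrective update is made only when the answer is wrong. This software sketch learns NAND, with weights playing the role of the memristive resistances and the update playing the role of the correcting pulse; the learning rate and iteration count are arbitrary choices, not values from the paper.

```python
# Software analogue of the supervised training described in the press
# release: random patterns in, corrective weight update on wrong answers.
import random

random.seed(0)
weights = [0.0, 0.0]
bias = 0.0
RATE = 0.1   # size of each corrective step (arbitrary)

def output(x):
    """Perceptron output: threshold on the weighted sum of inputs."""
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

def nand(x):
    return 0 if (x[0] and x[1]) else 1

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
for _ in range(500):
    x = random.choice(patterns)      # pulses applied at random, as in the paper
    error = nand(x) - output(x)
    if error:                        # wrong answer: apply the corrective update
        weights[0] += RATE * error * x[0]
        weights[1] += RATE * error * x[1]
        bias += RATE * error

print([output(p) for p in patterns])
```

After training, the four truth-table rows come out as 1, 1, 1, 0, i.e. the network has learned NAND; retraining against a NOR target works the same way.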

Needless to say, an elementary perceptron of macroscopic dimensions with a characteristic reaction time of tenths or hundredths of a second is not an element that is ready for commercial production. However, as the researchers themselves note, their creation was made using inexpensive materials, and the reaction time will decrease as the size decreases: the first prototype was intentionally enlarged to make the work easier; it is physically possible to manufacture more compact chips. In addition, polyaniline can be used in attempts to make a three-dimensional structure by placing the memristors on top of one another in a multi-tiered structure (e.g. in the form of random intersections of thin polymer fibers), whereas modern silicon microelectronic systems, due to a number of technological limitations, are two-dimensional. The transition to the third dimension would potentially offer many new opportunities.

The press release goes on to explain what the researchers mean when they mention a fundamentally different computer,

The common classification of computers is based either on their casing (desktop/laptop/tablet), or on the type of operating system used (Windows/MacOS/Linux). However, this is only a very simple classification from a user perspective, whereas specialists normally use an entirely different approach – an approach that is based on the principle of organizing computer operations. The computers that we are used to, whether they be tablets, desktop computers, or even on-board computers on spacecraft, are all devices with von Neumann architecture; without going into too much detail, they are devices based on independent processors, random access memory (RAM), and read only memory (ROM).

The memory stores the code of a program that is to be executed. A program is a set of instructions that command certain operations to be performed with data. Data are also stored in the memory* and are retrieved from it (and also written to it) in accordance with the program; the program’s instructions are performed by the processor. There may be several processors, they can work in parallel, data can be stored in a variety of ways – but there is always a fundamental division between the processor and the memory. Even if the computer is integrated into one single chip, it will still have separate elements for processing information and separate units for storing data. At present, all modern microelectronic systems are based on this particular principle and this is partly the reason why most people are not even aware that there may be other types of computer systems – without processors and memory.

*) if physically different elements are used to store data and store a program, the computer is said to be built using Harvard architecture. This method is used in certain microcontrollers, and in small specialized computing devices. The chip that controls the function of a refrigerator, lift, or car engine (in all these cases a “conventional” computer would be redundant) is a microcontroller. However, neither Harvard, nor von Neumann architectures allow the processing and storage of information to be combined into a single element of a computer system.
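The processor/memory division described above can be made concrete with a toy stored-program machine: one flat memory holds both the instructions and the data, and a separate fetch-and-execute loop plays the processor. The three-instruction set is invented purely for illustration.

```python
# Sketch of the stored-program (von Neumann) idea: program and data share
# one memory; a separate "processor" loop fetches and executes instructions.
def run(memory):
    pc = 0                                   # program counter
    while True:
        op, a, b, dest = memory[pc]          # fetch from the shared memory
        if op == "HALT":
            return memory
        if op == "ADD":                      # memory[dest] = memory[a] + memory[b]
            memory[dest] = memory[a] + memory[b]
        elif op == "MUL":
            memory[dest] = memory[a] * memory[b]
        pc += 1                              # move to the next instruction

# Cells 0-2 hold the program, cells 3-5 hold the data; nothing in the
# machine distinguishes them except how the processor happens to use them.
mem = [
    ("ADD", 3, 4, 5),    # cell 5 <- cell 3 + cell 4
    ("MUL", 5, 5, 5),    # cell 5 <- cell 5 squared
    ("HALT", 0, 0, 0),
    2, 3, 0,             # data
]
print(run(mem)[5])       # (2 + 3) squared = 25
```

The point of the sketch is the fundamental division the press release describes: however the data and program are arranged, processing happens in one place (the loop) and storage in another (the list), unlike a neural network, where every element does both.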

However, such systems do exist. Furthermore, if you look at the brain itself as a computer system (this is purely hypothetical at the moment: it is not yet known whether the function of the brain is reducible to computations), then you will see that it is not at all built like a computer with von Neumann architecture. Neural networks do not have a specialized processor or separate memory cells. Information is stored and processed in each and every neuron, one element of the computer system, and the human brain has approximately 100 billion of these elements. In addition, almost all of them are able to work in parallel (simultaneously), which is why the brain is able to process information with great efficiency and at such high speed. Artificial neural networks that are currently implemented on von Neumann computers only emulate these processes: emulation, i.e. step-by-step imitation of functions, inevitably leads to a decrease in speed and an increase in energy consumption. In many cases this is not so critical, but in certain cases it can be.

Devices that do not simply imitate the function of neural networks, but are fundamentally the same could be used for a variety of tasks. Most importantly, neural networks are capable of pattern recognition; they are used as a basis for recognising handwritten text for example, or signature verification. When a certain pattern needs to be recognised and classified, such as a sound, an image, or characteristic changes on a graph, neural networks are actively used and it is in these fields where gaining an advantage in terms of speed and energy consumption is critical. In a control system for an autonomous flying robot every milliwatt-hour and every millisecond counts, just in the same way that a real-time system to process data from a collider detector cannot take too long to “think” about highlighting particle tracks that may be of interest to scientists from among a large number of other recorded events.

Bravo to the writer!

Here’s a link to and a citation for the paper,

Hardware elementary perceptron based on polyaniline memristive devices by V.A. Demin, V.V. Erokhin, A.V. Emelyanov, S. Battistoni, G. Baldi, S. Iannotta, P.K. Kashkarov, M.V. Kovalchuk. Organic Electronics Volume 25, October 2015, Pages 16–20. doi:10.1016/j.orgel.2015.06.015

This paper is behind a paywall.

Computer chips derived in a Darwinian environment

Courtesy: University of Twente

If that ‘computer chip’ looks like a brain to you, good, since that’s what the image is intended to illustrate, assuming I’ve correctly understood the Sept. 21, 2015 news item on Nanowerk (Note: A link has been removed),

Researchers of the MESA+ Institute for Nanotechnology and the CTIT Institute for ICT Research at the University of Twente in The Netherlands have demonstrated working electronic circuits that have been produced in a radically new way, using methods that resemble Darwinian evolution. The size of these circuits is comparable to the size of their conventional counterparts, but they are much closer to natural networks like the human brain. The findings promise a new generation of powerful, energy-efficient electronics, and have been published in the leading British journal Nature Nanotechnology (“Evolution of a Designless Nanoparticle Network into Reconfigurable Boolean Logic”).

A Sept. 21, 2015 University of Twente press release, which originated the news item, explains why and how they have decided to mimic nature to produce computer chips,

One of the greatest successes of the 20th century has been the development of digital computers. During the last decades these computers have become more and more powerful by integrating ever smaller components on silicon chips. However, it is becoming increasingly hard and extremely expensive to continue this miniaturisation. Current transistors consist of only a handful of atoms. It is a major challenge to produce chips in which the millions of transistors have the same characteristics, and thus to make the chips operate properly. Another drawback is that their energy consumption is reaching unacceptable levels. It is obvious that one has to look for alternative directions, and it is interesting to see what we can learn from nature. Natural evolution has led to powerful ‘computers’ like the human brain, which can solve complex problems in an energy-efficient way. Nature exploits complex networks that can execute many tasks in parallel.

Moving away from designed circuits

The approach of the researchers at the University of Twente is based on methods that resemble those found in Nature. They have used networks of gold nanoparticles for the execution of essential computational tasks. Contrary to conventional electronics, they have moved away from designed circuits. By using ‘designless’ systems, costly design mistakes are avoided. The computational power of their networks is enabled by applying artificial evolution. This evolution takes less than an hour, rather than millions of years. By applying electrical signals, one and the same network can be configured into 16 different logical gates. The evolutionary approach works around – or can even take advantage of – possible material defects that can be fatal in conventional electronics.
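The nanoparticle network itself can't be simulated here, so this sketch evolves the "control voltages" of a stand-in substrate: a fixed nonlinear function of two logical inputs and four tunable parameters. The substrate, the parameter ranges, and the mutation scheme are all illustrative assumptions; only the evolutionary loop (mutate, keep non-worse configurations) mirrors the approach described, here configuring the same material into XOR, one of the 16 gates mentioned.

```python
# Artificial-evolution sketch: tune the "control voltages" of a fixed
# stand-in substrate until it realizes a chosen logic gate (XOR here).
import random

random.seed(1)

def substrate(x, y, c):
    """Stand-in for the nanoparticle network: a fixed nonlinear function
    of the two inputs and four tunable 'control voltage' parameters."""
    return 1 if c[0] * x + c[1] * y + c[2] * x * y + c[3] > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def fitness(c):
    """Number of truth-table rows the configuration gets right (0-4)."""
    return sum(substrate(x, y, c) == t for (x, y), t in XOR.items())

best = None
for _ in range(20):                              # a few random restarts
    c = [random.uniform(-3, 3) for _ in range(4)]
    for _ in range(2000):                        # mutate-and-select loop
        trial = [min(3, max(-3, v + random.gauss(0, 0.5))) for v in c]
        if fitness(trial) >= fitness(c):         # keep non-worse mutants
            c = trial
    if best is None or fitness(c) > fitness(best):
        best = c
    if fitness(best) == 4:
        break

print(fitness(best))
```

The evolved configuration gets all four XOR rows right; changing the target table re-evolves the same "material" into a different gate, which is the reconfigurability the researchers demonstrate in hardware.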

Powerful and energy-efficient

It is the first time that scientists have succeeded in this way in realizing robust electronics with dimensions that can compete with commercial technology. According to prof. Wilfred van der Wiel, the realized circuits currently still have limited computing power. “But with this research we have delivered proof of principle: demonstrated that our approach works in practice. By scaling up the system, real added value will be produced in the future. Take for example the efforts to recognize patterns, such as with face recognition. This is very difficult for a regular computer, while humans and possibly also our circuits can do this much better.”  Another important advantage may be that this type of circuitry uses much less energy, both in the production, and during use. The researchers anticipate a wide range of applications, for example in portable electronics and in the medical world.

Here’s a link to and a citation for the paper,

Evolution of a designless nanoparticle network into reconfigurable Boolean logic by S. K. Bose, C. P. Lawrence, Z. Liu, K. S. Makarenko, R. M. J. van Damme, H. J. Broersma, & W. G. van der Wiel. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.207 Published online 21 September 2015

This paper is behind a paywall.

Final comment, this research, especially with the reference to facial recognition, reminds me of memristors and neuromorphic engineering. I have written many times on this topic and you should be able to find most of the material by using ‘memristor’ as your search term in the blog search engine. For the mildly curious, here are links to two recent memristor articles, Knowm (sounds like gnome?) A memristor company with a commercially available product in a Sept. 10, 2015 posting and Memristor, memristor, you are popular in a May 15, 2015 posting.

Knowm (sounds like gnome?) A memristor company with a commercially available product

German garden gnome Date: 11 August 2006 (original upload date) Source:Transferred from en.wikipedia to Commons. Author: Colibri1968 at English Wikipedia

I thought the ‘gnome’/‘knowm’ homonym or, more precisely, homophone, might be an amusing way to lead into yet another memristor story on this blog. A Sept. 3, 2015 news item on Azonano features a ‘memristor-based’ company/organization, Knowm,

Knowm Inc., a start-up pioneering next-generation advanced computing architectures and technology, today announced they are the first to develop and make commercially-available memristors with bi-directional incremental learning capability.

The device was developed through research from Boise State University’s [Idaho, US] Dr. Kris Campbell, and this new data unequivocally confirms Knowm’s memristors are capable of bi-directional incremental learning. This has been previously deemed impossible in filamentary devices by Knowm’s competitors, including IBM [emphasis mine], despite significant investment in materials, research and development. With this advancement, Knowm delivers the first commercial memristors that can adjust resistance in incremental steps in both directions rather than only one direction with an all-or-nothing ‘erase’. This advancement opens the gateway to extremely efficient and powerful machine learning and artificial intelligence applications.
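The distinction the announcement draws, incremental conductance steps in both directions versus incremental-up but all-or-nothing-down, can be sketched with two toy device models. The step sizes and conductance ranges are invented for illustration; neither class is a model of Knowm's actual devices.

```python
# Toy contrast between the two kinds of memristive "synapse" the
# announcement describes. All numbers are illustrative, not device data.

class IncrementalMemristor:
    """Conductance can be nudged up OR down in small steps."""
    def __init__(self, g=0.5, step=0.05):
        self.g, self.step = g, step
    def potentiate(self):              # small increase
        self.g = min(1.0, self.g + self.step)
    def depress(self):                 # small decrease, no full erase needed
        self.g = max(0.0, self.g - self.step)

class UnidirectionalMemristor:
    """Can only increase incrementally; decreasing means a full erase."""
    def __init__(self, g=0.5, step=0.05):
        self.g, self.step = g, step
    def potentiate(self):
        self.g = min(1.0, self.g + self.step)
    def depress(self):
        self.g = 0.0                   # all-or-nothing 'erase'

a, b = IncrementalMemristor(), UnidirectionalMemristor()
for dev in (a, b):
    dev.potentiate(); dev.potentiate(); dev.depress()
print(round(a.g, 2), b.g)
```

After two small increases and one decrease, the bi-directional device sits at 0.55 while the unidirectional one has been wiped back to 0.0, which is why gradual, gradient-style learning rules favour the former.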

A Sept. 2, 2015 Knowm news release (also on MarketWatch), which originated the news item, provides more details,

“Having commercially-available memristors with bi-directional voltage-dependent incremental capability is a huge step forward for the field of machine learning and, particularly, AHaH Computing,” said Alex Nugent, CEO and co-founder of Knowm. “We have been dreaming about this device and developing the theory for how to apply them to best maximize their potential for more than a decade, but the lack of capability confirmation had been holding us back. This data is truly a monumental technical milestone and it will serve as a springboard to catapult Knowm and AHaH Computing forward.”

Memristors with the bi-directional incremental resistance change property are the foundation for developing learning hardware such as Knowm Inc.’s recently announced Thermodynamic RAM (kT-RAM) and help realize the full potential of AHaH Computing. The availability of kT-RAM will have the largest impact in fields that require higher computational power for machine learning tasks like autonomous robotics, big-data analysis and intelligent Internet assistants. kT-RAM radically increases the efficiency of synaptic integration and adaptation operations by reducing them to physically adaptive ‘analog’ memristor-based circuits. Synaptic integration and adaptation are the core operations behind tasks such as pattern recognition and inference. Knowm Inc. is the first company in the world to bring this technology to market.

Knowm is ushering in the next phase of computing with the first general-purpose neuromemristive processor specification. Earlier this year the company announced the commercial availability of the first products in support of the kT-RAM technology stack. These include the sale of discrete memristor chips, a Back End of Line (BEOL) CMOS+memristor service, the SENSE and Application Servers and their first application named “Knowm Anomaly”, the first application built based on the theory of AHaH Computing and kT-RAM architecture. Knowm also simultaneously announced the company’s visionary developer program for organizations and individual developers. This includes the Knowm API, which serves as development hardware and training resources for co-developing the Knowm technology stack.

Knowm certainly has big ambitions. I’m a little surprised they mentioned IBM rather than HP Labs, which is where researchers first claimed to find evidence of the existence of memristors in 2008 (the story is noted in my Nanotech Mysteries wiki here). As I understand it, HP Labs is working busily (having missed a few deadlines) on developing a commercial product using memristors.

For the curious, my latest informal roundup of memristor stories is in a May 15, 2015 posting.

Getting back to Knowm and big ambitions, here’s Alex Nugent, Knowm CEO (Chief Executive Officer) and co-founder talking about the company and its technology,

Researchers at Karolinska Institute (Sweden) build an artificial neuron

Unlike my post earlier today (June 26, 2015) about BrainChip, this is not about neuromorphic engineering (artificial brain), although I imagine this new research from the Karolinska Institute (Institutet) will be of some interest to that community. This research was done in the interest of developing* therapeutic interventions for brain diseases. One aspect of this news item/press release I find particularly interesting is the insistence that “no living parts” were used to create the artificial neuron,

A June 24, 2015 news item on ScienceDaily describes what the artificial neuron can do,

Scientists have managed to build a fully functional neuron by using organic bioelectronics. This artificial neuron contain [sic] no ‘living’ parts, but is capable of mimicking the function of a human nerve cell and communicate in the same way as our own neurons do. [emphasis mine]

A June 24, 2015 Karolinska Institute press release (also on EurekAlert), which originated the news item, describes how neurons communicate in the brain, standard techniques for stimulating neuronal cells, and the scientists’ work on a technique to improve stimulation,

Neurons are isolated from each other and communicate with the help of chemical signals, commonly called neurotransmitters or signal substances. Inside a neuron, these chemical signals are converted to an electrical action potential, which travels along the axon of the neuron until it reaches the end. Here at the synapse, the electrical signal is converted to the release of chemical signals, which via diffusion can relay the signal to the next nerve cell.

To date, the primary technique for neuronal stimulation in human cells is based on electrical stimulation. However, scientists at the Swedish Medical Nanoscience Centre (SMNC) at Karolinska Institutet, in collaboration with colleagues at Linköping University, have now created an organic bioelectronic device that is capable of receiving chemical signals, which it can then relay to human cells.

“Our artificial neuron is made of conductive polymers and it functions like a human neuron,” says lead investigator Agneta Richter-Dahlfors, professor of cellular microbiology. “The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal. This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored.”

The research team hope that their innovation, presented in the journal Biosensors & Bioelectronics, will improve treatments for neurological disorders which currently rely on traditional electrical stimulation. The new technique makes it possible to stimulate neurons based on specific chemical signals received from different parts of the body. In the future, this may help physicians to bypass damaged nerve cells and restore neural function.

“Next, we would like to miniaturize this device to enable implantation into the human body,” says Agneta Richter-Dahlfors. “We foresee that in the future, by adding the concept of wireless communication, the biosensor could be placed in one part of the body, and trigger release of neurotransmitters at distant locations. Using such auto-regulated sensing and delivery, or possibly a remote control, new and exciting opportunities for future research and treatment of neurological disorders can be envisaged.”

Here’s a link to and a citation for the paper,

An organic electronic biomimetic neuron enables auto-regulated neuromodulation by Daniel T. Simon, Karin C. Larsson, David Nilsson, Gustav Burström, Dagmar Galter, Magnus Berggren, and Agneta Richter-Dahlfors. Biosensors and Bioelectronics Volume 71, 15 September 2015, Pages 359–364. doi:10.1016/j.bios.2015.04.058

This paper is behind a paywall.

As to anyone (other than myself) who may be curious about exactly what they used (other than “living parts”) to create an artificial neuron, there’s the paper’s abstract,

Current therapies for neurological disorders are based on traditional medication and electric stimulation. Here, we present an organic electronic biomimetic neuron, with the capacity to precisely intervene with the underlying malfunctioning signalling pathway using endogenous substances. The fundamental function of neurons, defined as chemical-to-electrical-to-chemical signal transduction, is achieved by connecting enzyme-based amperometric biosensors and organic electronic ion pumps. Selective biosensors transduce chemical signals into an electric current, which regulates electrophoretic delivery of chemical substances without necessitating liquid flow. Biosensors detected neurotransmitters in physiologically relevant ranges of 5–80 µM, showing linear response above 20 µM with approx. 0.1 nA/µM slope. When exceeding defined threshold concentrations, biosensor output signals, connected via custom hardware/software, activated local or distant neurotransmitter delivery from the organic electronic ion pump. Changes of 20 µM glutamate or acetylcholine triggered diffusive delivery of acetylcholine, which activated cells via receptor-mediated signalling. This was observed in real-time by single-cell ratiometric Ca2+ imaging. The results demonstrate the potential of the organic electronic biomimetic neuron in therapies involving long-range neuronal signalling by mimicking the function of projection neurons. Alternatively, conversion of glutamate-induced descending neuromuscular signals into acetylcholine-mediated muscular activation signals may be obtained, applicable for bridging injured sites and active prosthetics.
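The chemical-to-electrical-to-chemical chain in the abstract can be read as a small control loop: an amperometric biosensor converts concentration into current (the abstract reports a roughly linear 0.1 nA/µM response), and crossing a threshold concentration triggers the ion pump to deliver acetylcholine. The threshold value and the trivial current-to-concentration inversion below are illustrative simplifications of the custom hardware/software the abstract mentions.

```python
# Control-loop reading of the artificial neuron: chemical in (sensed as a
# current), decision against a threshold, chemical out (pump triggered).
SLOPE_NA_PER_UM = 0.1      # biosensor response reported in the abstract (nA/uM)
THRESHOLD_UM = 20.0        # triggering concentration change (per the abstract)

def sensor_current(concentration_um):
    """Chemical -> electrical: amperometric biosensor output in nA."""
    return SLOPE_NA_PER_UM * concentration_um

def neuron_step(concentration_um):
    """Electrical -> chemical: activate the ion pump above threshold."""
    current = sensor_current(concentration_um)
    inferred_um = current / SLOPE_NA_PER_UM    # invert the linear response
    return "deliver acetylcholine" if inferred_um >= THRESHOLD_UM else "idle"

print(neuron_step(5.0))    # small change: below threshold
print(neuron_step(25.0))   # change above 20 uM: pump is triggered
```

The real device of course works the other way around, from a measured current to an unknown concentration, but the loop structure (sense, threshold, deliver) is the same.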

While it’s true neither are “living parts,” I believe both enzymes and organic electronic ion pumps can be found in biological organisms. The insistence on ‘nonliving’ in the press release suggests that scientists in Europe, if nowhere else, are still quite concerned about any hint that they are working on genetically modified organisms (GMO). It’s ironic when you consider that people blithely use enzyme-based cleaning and beauty products but one can appreciate the* scientists’ caution.

* ‘develop’ changed to ‘developing’ and ‘the’ added on July 3, 2015.