Tag Archives: neuromorphic engineering

Artificial synapse rivals biological synapse in energy consumption

How can we make computers more like biological brains, which do so much work while using so little power? It’s a question scientists in many countries are trying to answer, and South Korean scientists are proposing one. From a June 20, 2016 news item on Nanowerk,

Creation of an artificial intelligence system that fully emulates the functions of a human brain has long been a dream of scientists. A brain has many functions superior to those of supercomputers, even though it is light, small, and consumes extremely little energy. Emulating it requires constructing an artificial neural network, in which a huge number (~10¹⁴) of synapses is needed.

Most recently, great efforts have been made to realize synaptic functions in single electronic devices, such as using resistive random access memory (RRAM), phase change memory (PCM), conductive bridges, and synaptic transistors. Artificial synapses based on highly aligned nanostructures are still desired for the construction of a highly-integrated artificial neural network.

Prof. Tae-Woo Lee, research professor Wentao Xu, and Dr. Sung-Yong Min with the Dept. of Materials Science and Engineering at POSTECH [Pohang University of Science & Technology, South Korea] have succeeded in fabricating an organic nanofiber (ONF) electronic device that emulates not only the important working principles and energy consumption of biological synapses but also the morphology. …

A June 20, 2016 Pohang University of Science & Technology (POSTECH) news release on EurekAlert, which originated the news item, describes the work in more detail,

The morphology of ONFs is very similar to that of nerve fibers, which form crisscrossing grids to enable the high memory density of a human brain. Especially, based on the e-Nanowire printing technique, highly-aligned ONFs can be massively produced with precise control over alignment and dimension. This morphology potentially enables the future construction of high-density memory of a neuromorphic system.

Important working principles of a biological synapse have been emulated, such as paired-pulse facilitation (PPF), short-term plasticity (STP), long-term plasticity (LTP), spike-timing dependent plasticity (STDP), and spike-rate dependent plasticity (SRDP). Most remarkably, the energy consumption of the device can be reduced to the femtojoule level per synaptic event, a value orders of magnitude lower than in previous reports, and one that rivals that of a biological synapse. In addition, the organic artificial synapse devices not only provide a new research direction in neuromorphic electronics but may even open a new era of organic electronics.
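For readers who like to see these plasticity rules in action, here’s a toy model of the first of them, paired-pulse facilitation (the constants are my own illustrative choices, not values from the paper): a second pulse arriving shortly after the first produces a larger response, and the facilitation decays exponentially with the interval,

```python
import math

def ppf_response(dt_ms, a0=1.0, f=0.6, tau_ms=50.0):
    """Amplitude of the response to a second pulse arriving dt_ms after
    the first: the facilitation left behind by the first pulse decays
    exponentially with time constant tau_ms."""
    return a0 * (1.0 + f * math.exp(-dt_ms / tau_ms))

# Closely spaced pulses are facilitated; widely spaced ones are not.
print(ppf_response(10.0))   # close pair: strong facilitation
print(ppf_response(500.0))  # distant pair: facilitation has decayed
```

The same decaying-trace idea, with longer time constants, underlies the short- and long-term plasticity behaviours listed above.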

This technology will lead to the leap of brain-inspired electronics in both memory density and energy consumption aspects. The artificial synapse developed by Prof. Lee’s research team will provide important potential applications to neuromorphic computing systems and artificial intelligence systems for autonomous cars (or self-driving cars), analysis of big data, cognitive systems, robot control, medical diagnosis, stock trading analysis, remote sensing, and other smart human-interactive systems and machines in the future.

Here’s a link to and a citation for the paper,

Organic core-sheath nanowire artificial synapses with femtojoule energy consumption by Wentao Xu, Sung-Yong Min, Hyunsang Hwang, and Tae-Woo Lee. Science Advances  17 Jun 2016: Vol. 2, no. 6, e1501326 DOI: 10.1126/sciadv.1501326

This paper is open access.

X-rays reveal memristor workings

A June 14, 2016 news item on ScienceDaily focuses on memristors. (It’s been about two months since my last memristor posting, on April 22, 2016, regarding electronic synapses and neural networks.) This piece announces new insight into how memristors function at the atomic scale,

In experiments at two Department of Energy national labs — SLAC National Accelerator Laboratory and Lawrence Berkeley National Laboratory — scientists at Hewlett Packard Enterprise (HPE) [also referred to as HP Labs or Hewlett Packard Laboratories] have experimentally confirmed critical aspects of how a new type of microelectronic device, the memristor, works at an atomic scale.

This result is an important step in designing these solid-state devices for use in future computer memories that operate much faster, last longer and use less energy than today’s flash memory. …

“We need information like this to be able to design memristors that will succeed commercially,” said Suhas Kumar, an HPE scientist and first author on the group’s technical paper.

A June 13, 2016 SLAC news release, which originated the news item, offers a brief history according to HPE and provides details about the latest work,

The memristor was proposed theoretically [by Dr. Leon Chua] in 1971 as the fourth basic electrical device element alongside the resistor, capacitor and inductor. At its heart is a tiny piece of a transition metal oxide sandwiched between two electrodes. Applying a positive or negative voltage pulse dramatically increases or decreases the memristor’s electrical resistance. This behavior makes it suitable for use as a “non-volatile” computer memory that, like flash memory, can retain its state without being refreshed with additional power.
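The pulse-driven resistance change described above can be sketched in a few lines of code (a minimal model with invented resistances and an invented update rule, not HPE’s device physics): the device carries an internal state between 0 and 1, voltage pulses drift that state, and the state maps to a resistance that persists when no voltage is applied,

```python
R_ON, R_OFF = 1e2, 1e5  # illustrative fully-ON / fully-OFF resistances (ohms)

def resistance(w):
    """Map the internal state w in [0, 1] to a resistance: w = 1 is the
    low-resistance (ON) state, w = 0 the high-resistance (OFF) state."""
    return w * R_ON + (1.0 - w) * R_OFF

def apply_pulse(w, voltage, eta=0.2):
    """A voltage pulse drifts the internal state in proportion to its
    amplitude and sign; the state saturates at the boundaries.  Between
    pulses the state (and hence the resistance) is retained, which is
    what makes the device a non-volatile memory."""
    return min(1.0, max(0.0, w + eta * voltage))

w = 0.0                   # start in the high-resistance state
w = apply_pulse(w, +1.0)  # positive pulse: resistance drops
w = apply_pulse(w, -1.0)  # negative pulse: resistance rises again
```
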

Over the past decade, an HPE group led by senior fellow R. Stanley Williams has explored memristor designs, materials and behavior in detail. Since 2009 they have used intense synchrotron X-rays to reveal the movements of atoms in memristors during switching. Despite advances in understanding the nature of this switching, critical details that would be important in designing commercially successful circuits remained controversial. For example, the forces that move the atoms, resulting in dramatic resistance changes during switching, remain under debate.

In recent years, the group examined memristors made with oxides of titanium, tantalum and vanadium. Initial experiments revealed that switching in the tantalum oxide devices could be controlled most easily, so it was chosen for further exploration at two DOE Office of Science User Facilities – SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and Berkeley Lab’s Advanced Light Source (ALS).

At ALS, the HPE researchers mapped the positions of oxygen atoms before and after switching. For this, they used a scanning transmission X-ray microscope and an apparatus they built to precisely control the position of their sample and the timing and intensity of the 500-electronvolt ALS X-rays, which were tuned to see oxygen.

The experiments revealed that even weak voltage pulses create a thin conductive path through the memristor. During the pulse the path heats up, which creates a force that pushes oxygen atoms away from the path, making it even more conductive. Reversing the voltage pulse resets the memristor by sucking some of the oxygen atoms back into the conducting path, thereby increasing the device’s resistance. The memristor’s resistance changes between 10-fold and 1 million-fold, depending on operating parameters like the voltage-pulse amplitude. This resistance change is dramatic enough to exploit commercially.

To be sure of their conclusion, the researchers also needed to understand if the tantalum atoms were moving along with the oxygen during switching. Imaging tantalum required higher-energy, 10,000-electronvolt X-rays, which they obtained at SSRL’s Beam Line 6-2. In a single session there, they determined that the tantalum remained stationary.

“That sealed the deal, convincing us that our hypothesis was correct,” said HPE scientist Catherine Graves, who had worked at SSRL as a Stanford graduate student. She added that discussions with SLAC experts were critical in guiding the HPE team toward the X-ray techniques that would allow them to see the tantalum accurately.

Kumar said the most promising aspect of the tantalum oxide results was that the scientists saw no degradation in switching over more than a billion voltage pulses of a magnitude suitable for commercial use. He added that this knowledge helped his group build memristors that lasted nearly a billion switching cycles, about a thousand-fold improvement.

“This is much longer endurance than is possible with today’s flash memory devices,” Kumar said. “In addition, we also used much higher voltage pulses to accelerate and observe memristor failures, which is also important in understanding how these devices work. Failures occurred when oxygen atoms were forced so far away that they did not return to their initial positions.”

Beyond memory chips, Kumar says memristors’ rapid switching speed and small size could make them suitable for use in logic circuits. Additional memristor characteristics may also be beneficial in the emerging class of brain-inspired neuromorphic computing circuits.

“Transistors are big and bulky compared to memristors,” he said. “Memristors are also much better suited for creating the neuron-like voltage spikes that characterize neuromorphic circuits.”

The researchers have provided an animation illustrating how memristors can fail,

This animation shows how millions of high-voltage switching cycles can cause memristors to fail. The high-voltage switching eventually creates regions that are permanently rich (blue pits) or deficient (red peaks) in oxygen and cannot be switched back. Switching at lower voltages that would be suitable for commercial devices did not show this performance degradation. These observations allowed the researchers to develop materials processing and operating conditions that improved the memristors’ endurance by nearly a thousand times. (Suhas Kumar) Courtesy: SLAC

Here’s a link to and a citation for the paper,

Direct Observation of Localized Radial Oxygen Migration in Functioning Tantalum Oxide Memristors by Suhas Kumar, Catherine E. Graves, John Paul Strachan, Emmanuelle Merced Grafals, Arthur L. David Kilcoyne, Tolek Tyliszczak, Johanna Nelson Weker, Yoshio Nishi, and R. Stanley Williams. Advanced Materials Volume 28, Issue 14, April 13, 2016, Pages 2772–2776 (first published online: 2 February 2016) DOI: 10.1002/adma.201505435

This paper is behind a paywall.

Some of the ‘memristor story’ is contested and you can find a brief overview of the discussion in this Wikipedia memristor entry in the section on ‘definition and criticism’. There is also a history of the memristor which dates back to the 19th century featured in my May 22, 2012 posting.

Memristor-based electronic synapses for neural networks

Caption: Neuron connections in biological neural networks. Credit: MIPT press office

Russian scientists have recently published a paper about neural networks and electronic synapses based on ‘thin film’ memristors according to an April 19, 2016 news item on Nanowerk,

A team of scientists from the Moscow Institute of Physics and Technology (MIPT) have created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.

An April 20, 2016 MIPT press release (also on EurekAlert), which originated the news item (the date inconsistency likely due to timezone differences) explains the connection between thin films and memristors,

The group of researchers from MIPT have made HfO2-based memristors measuring just 40×40 nm². The nanostructures they built exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similarly to biological neural networks.

Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.

“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similarly to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.

The press release offers a description of biological synapses and their relationship to learning and memory,

A synapse is a point of connection between neurons, whose main function is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective, in terms of both speed and energy consumption, in solving a large range of tasks, such as image / voice recognition, etc.

Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.

From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.
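The distinction between on/off switching and intermediate weights can be made concrete with a toy element (the conductance range and number of levels below are invented for illustration): instead of flipping between two values, its conductance moves in small steps between a minimum and a maximum,

```python
class AnalogSynapse:
    """Toy analog memory element: the conductance g moves in small
    increments between g_min and g_max rather than switching only
    between an 'on' and an 'off' value."""

    def __init__(self, g_min=1e-6, g_max=1e-4, levels=64):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / levels
        self.g = g_min  # start at the lowest weight

    def potentiate(self):  # gradual, LTP-like strengthening
        self.g = min(self.g_max, self.g + self.step)

    def depress(self):     # gradual, LTD-like weakening
        self.g = max(self.g_min, self.g - self.step)

s = AnalogSynapse()
for _ in range(10):
    s.potentiate()  # the weight climbs through intermediate values
```

A device with only two states could represent just the endpoints; the intermediate levels are what let it play the role of a synaptic weight.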

The researchers have provided an illustration of a biological synapse,

Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office

Now, the press release ties the memristor information together with the biological synapse information to describe the new work at the MIPT,

As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.

There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.

“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.

The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.

The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the value of the connection between neurons on the relative time taken for them to be “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.

To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).

Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office

These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.
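The classic mathematical form of the timing dependence shown in fig. 3 is an exponential window (the amplitudes and time constant below are conventional textbook values, not the ones measured by the MIPT group): pre-before-post spiking strengthens the connection, post-before-pre weakens it, and either effect fades as the interval grows,

```python
import math

def stdp_dw(dt_ms, a_plus=0.10, a_minus=0.12, tau_ms=20.0):
    """Weight change for a pre/post spike pair separated by
    dt_ms = t_post - t_pre.  Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, and either effect decays
    exponentially with the interval."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(+5.0))   # positive: connection strengthened
print(stdp_dw(-5.0))   # negative: connection weakened
```
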

“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.

Here’s a link to and a citation for the paper,

Crossbar Nanoscale HfO2-Based Electronic Synapses by Yury Matveyev, Roman Kirtaev, Alena Fetisova, Sergey Zakharchenko, Dmitry Negrov and Andrey Zenkevich. Nanoscale Research Letters 2016, 11:147 DOI: 10.1186/s11671-016-1360-6

Published: 15 March 2016

This is an open access paper.

Indian researchers establish a multiplex number to identify efficiency of multilevel resistive switching devices

There’s a Feb. 1, 2016 Nanowerk Spotlight article by Dr. Abhay Sagade of Cambridge University (UK) about defining efficiency in memristive devices,

In a recent study, researchers at the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bangalore, India, have defined a new figure-of-merit to identify the efficiency of resistive switching devices with multiple memory states. The research was carried out in collaboration with the Indian Institute of Technology Madras (IITM), Chennai, and financially supported by Department of Science and Technology, New Delhi.

The scientists identified the versatility of palladium oxide (PdO) as a novel resistive switching material for use in resistive memory devices. Because the PdO system can switch between multiple redox states, the researchers were able to control it by applying voltage pulses of different amplitudes.

To date, many materials have shown multiple memory states but there have been no efforts to define the ability of the fabricated device to switch between all possible memory states.

In the present report, the authors quantify this ability with a newly coined figure of merit, the “multiplex number (M)”, which measures the performance of a multiple-memory switching device.

For the PdO MRS device with five memory states, the multiplex number is found to be 5.7, which translates to 70% efficiency in switching. This is the highest value of M observed in any multiple memory device.

As multilevel resistive switching devices are expected to have great significance in future brain-like memory devices [neuromorphic engineering products], defining their efficiency will provide a boost to the field. The number M will assist researchers as well as technologists in classifying and deciding the true merit of their memory devices.

Here’s a link to and a citation for the paper Sagade is discussing,

Defining Switching Efficiency of Multilevel Resistive Memory with PdO as an Example by K. D. M. Rao, Abhay A. Sagade, Robin John, T. Pradeep and G. U. Kulkarni. Advanced Electronic Materials Volume 2, Issue 2, February 2016 DOI: 10.1002/aelm.201500286

This article is behind a paywall.

Plastic memristors for neural networks

There is a very nice explanation of memristors and computing systems from the Moscow Institute of Physics and Technology (MIPT). First their announcement, from a Jan. 27, 2016 news item on ScienceDaily,

A group of scientists has created a neural network based on polymeric memristors — devices that can potentially be used to build fundamentally new computers. These developments will primarily help in creating technologies for machine vision, hearing, and other machine sensory systems, and also for intelligent control systems in various fields of applications, including autonomous robots.

The authors of the new study focused on a promising area in the field of memristive neural networks – polymer-based memristors – and discovered that creating even the simplest perceptron is not that easy. In fact, it is so difficult that up until the publication of their paper in the journal Organic Electronics, there were no reports of any successful experiments (using organic materials). The experiments conducted at the Nano-, Bio-, Information and Cognitive Sciences and Technologies (NBIC) centre at the Kurchatov Institute by a joint team of Russian and Italian scientists demonstrated that it is possible to create very simple polyaniline-based neural networks. Furthermore, these networks are able to learn and perform specified logical operations.

A Jan. 27, 2016 MIPT press release on EurekAlert, which originated the news item, offers an explanation of memristors and a description of the research,

A memristor is an electric element similar to a conventional resistor. The difference between a memristor and a traditional element is that the electric resistance in a memristor is dependent on the charge passing through it, therefore it constantly changes its properties under the influence of an external signal: a memristor has a memory and at the same time is also able to change data encoded by its resistance state! In this sense, a memristor is similar to a synapse – a connection between two neurons in the brain that is able, with a high level of plasticity, to modify the efficiency of signal transmission between neurons under the influence of the transmission itself. A memristor enables scientists to build a “true” neural network, and the physical properties of memristors mean that at the very minimum they can be made as small as conventional chips.

Some estimates indicate that the size of a memristor can be reduced up to ten nanometers, and the technologies used in the manufacture of the experimental prototypes could, in theory, be scaled up to the level of mass production. However, as this is “in theory”, it does not mean that chips of a fundamentally new structure with neural networks will be available on the market any time soon, even in the next five years.

The plastic polyaniline was not chosen by chance. Previous studies demonstrated that it can be used to create individual memristors, so the scientists did not have to go through many different materials. Using a polyaniline solution, a glass substrate, and chromium electrodes, they created a prototype with dimensions that, at present, are much larger than those typically used in conventional microelectronics: the strip of the structure was approximately one millimeter wide (they decided to avoid miniaturization for the moment). All of the memristors were tested for their electrical characteristics: it was found that the current-voltage characteristic of the devices is in fact non-linear, which is in line with expectations. The memristors were then connected to a single neuromorphic network.

A current-voltage characteristic (or IV curve) is a graph where the horizontal axis represents voltage and the vertical axis the current. For a conventional resistor, the IV curve is a straight line; in strict accordance with Ohm’s Law, current is proportional to voltage. For a memristor, however, it is not just the voltage that is important, but the change in voltage: if you begin to gradually increase the voltage supplied to the memristor, it will increase the current passing through it not in a linear fashion, but with a sharp bend in the graph and at a certain point its resistance will fall sharply.

Then if you begin to reduce the voltage, the memristor will remain in its conducting state for some time, after which it will change its properties rather sharply again to decrease its conductivity. Experimental samples with a voltage increase of 0.5V hardly allowed any current to pass through (around a few tenths of a microamp), but when the voltage was reduced by the same amount, the ammeter registered a figure of 5 microamps. Microamps are of course very small units, but in this case it is the contrast that is most significant: 0.1 μA to 5 μA is a difference of fifty times! This is more than enough to make a clear distinction between the two signals.
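That hysteresis can be reproduced with a minimal threshold-switching model (every number below is an illustrative assumption, chosen only to give a contrast of the same flavour as the fifty-fold difference described above),

```python
R_OFF, R_ON = 5e6, 1e5       # illustrative OFF/ON resistances (ohms)
V_SET, V_RESET = 0.4, 0.1    # illustrative switching thresholds (volts)

def sweep(voltages):
    """Current at each point of a voltage sweep for a simple threshold
    switch: the device turns ON above V_SET and stays ON until the
    voltage falls below V_RESET, which traces out a hysteresis loop."""
    on, currents = False, []
    for v in voltages:
        if v >= V_SET:
            on = True
        elif v <= V_RESET:
            on = False
        currents.append(v / (R_ON if on else R_OFF))
    return currents

up = [i * 0.05 for i in range(11)]         # 0.0 V -> 0.5 V
currents = sweep(up + list(reversed(up)))  # ramp up, then back down
# At the same voltage (0.3 V) the descending branch carries far more
# current than the ascending one -- the memory effect.
```
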

After checking the basic properties of individual memristors, the physicists conducted experiments to train the neural network. The training (it is a generally accepted term and is therefore written without inverted commas) involves applying electric pulses at random to the inputs of a perceptron. If a certain combination of electric pulses is applied to the inputs of a perceptron (e.g. a logic one and a logic zero at two inputs) and the perceptron gives the wrong answer, a special correcting pulse is applied to it, and after a certain number of repetitions all the internal parameters of the device (namely memristive resistance) reconfigure themselves, i.e. they are “trained” to give the correct answer.

The scientists demonstrated that after about a dozen attempts their new memristive network is capable of performing NAND logical operations, and then it is also able to learn to perform NOR operations. Since it is an operator or a conventional computer that is used to check for the correct answer, this method is called the supervised learning method.
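In software, the same supervised scheme is the classic perceptron learning rule. Here’s a minimal sketch (the algorithm the correcting pulses implement, not the authors’ hardware) that learns NAND,

```python
def predict(w, b, x):
    """Threshold unit: fire (1) if the weighted sum exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(samples, epochs=50, eta=1):
    """Supervised perceptron learning: whenever the output is wrong, a
    corrective update nudges the weights -- the software analogue of
    the 'correcting pulse' applied to the memristive network."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            w[0] += eta * err * x[0]
            w[1] += eta * err * x[1]
            b += eta * err
    return w, b

NAND = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train_perceptron(NAND)
print([predict(w, b, x) for x, _ in NAND])  # [1, 1, 1, 0]
```

Swapping in NOR targets ([1, 0, 0, 0]) trains the same network to perform NOR, since both functions are linearly separable.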

Needless to say, an elementary perceptron of macroscopic dimensions with a characteristic reaction time of tenths or hundredths of a second is not an element that is ready for commercial production. However, as the researchers themselves note, their creation was made using inexpensive materials, and the reaction time will decrease as the size decreases: the first prototype was intentionally enlarged to make the work easier; it is physically possible to manufacture more compact chips. In addition, polyaniline can be used in attempts to make a three-dimensional structure by placing the memristors on top of one another in a multi-tiered structure (e.g. in the form of random intersections of thin polymer fibers), whereas modern silicon microelectronic systems, due to a number of technological limitations, are two-dimensional. The transition to the third dimension would potentially offer many new opportunities.

The press release goes on to explain what the researchers mean when they mention a fundamentally different computer,

The common classification of computers is based either on their casing (desktop/laptop/tablet), or on the type of operating system used (Windows/MacOS/Linux). However, this is only a very simple classification from a user perspective, whereas specialists normally use an entirely different approach – an approach that is based on the principle of organizing computer operations. The computers that we are used to, whether they be tablets, desktop computers, or even on-board computers on spacecraft, are all devices with von Neumann architecture; without going into too much detail, they are devices based on independent processors, random access memory (RAM), and read only memory (ROM).

The memory stores the code of a program that is to be executed. A program is a set of instructions that command certain operations to be performed with data. Data are also stored in the memory* and are retrieved from it (and also written to it) in accordance with the program; the program’s instructions are performed by the processor. There may be several processors, they can work in parallel, data can be stored in a variety of ways – but there is always a fundamental division between the processor and the memory. Even if the computer is integrated into one single chip, it will still have separate elements for processing information and separate units for storing data. At present, all modern microelectronic systems are based on this particular principle and this is partly the reason why most people are not even aware that there may be other types of computer systems – without processors and memory.

*) if physically different elements are used to store data and store a program, the computer is said to be built using Harvard architecture. This method is used in certain microcontrollers, and in small specialized computing devices. The chip that controls the function of a refrigerator, lift, or car engine (in all these cases a “conventional” computer would be redundant) is a microcontroller. However, neither Harvard, nor von Neumann architectures allow the processing and storage of information to be combined into a single element of a computer system.

However, such systems do exist. Furthermore, if you look at the brain itself as a computer system (this is purely hypothetical at the moment: it is not yet known whether the function of the brain is reducible to computations), then you will see that it is not at all built like a computer with von Neumann architecture. Neural networks do not have a specialized processor or separate memory cells. Information is stored and processed in each and every neuron, one element of the computer system, and the human brain has approximately 100 billion of these elements. In addition, almost all of them are able to work in parallel (simultaneously), which is why the brain is able to process information with great efficiency and at such high speed. Artificial neural networks that are currently implemented on von Neumann computers only emulate these processes: emulation, i.e. step-by-step imitation of functions, inevitably leads to a decrease in speed and an increase in energy consumption. In many cases this is not so critical, but in certain cases it can be.

Devices that do not simply imitate the function of neural networks, but are fundamentally the same could be used for a variety of tasks. Most importantly, neural networks are capable of pattern recognition; they are used as a basis for recognising handwritten text for example, or signature verification. When a certain pattern needs to be recognised and classified, such as a sound, an image, or characteristic changes on a graph, neural networks are actively used and it is in these fields where gaining an advantage in terms of speed and energy consumption is critical. In a control system for an autonomous flying robot every milliwatt-hour and every millisecond counts, just in the same way that a real-time system to process data from a collider detector cannot take too long to “think” about highlighting particle tracks that may be of interest to scientists from among a large number of other recorded events.

Bravo to the writer!

Here’s a link to and a citation for the paper,

Hardware elementary perceptron based on polyaniline memristive devices by V.A. Demin, V.V. Erokhin, A.V. Emelyanov, S. Battistoni, G. Baldi, S. Iannotta, P.K. Kashkarov, M.V. Kovalchuk. Organic Electronics Volume 25, October 2015, Pages 16–20 doi:10.1016/j.orgel.2015.06.015

This paper is behind a paywall.
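For anyone wondering what an “elementary perceptron” actually does, here is a minimal software sketch of the classic perceptron learning rule. In the paper’s hardware version the weights live in polyaniline memristive devices whose resistance is nudged up or down; in this sketch they are just Python floats, and the learning rate and training data are illustrative, not taken from the paper.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Elementary perceptron learning rule: nudge each weight up or down
    in proportion to the prediction error (in the memristive hardware,
    each nudge corresponds to an incremental resistance change)."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# learn logical OR (linearly separable, so the perceptron converges)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
outputs = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
           for x, _ in data]
print(outputs)  # [0, 1, 1, 1]
```

The point of doing this in hardware rather than software is that the weighted sum and the weight update happen physically, in parallel, instead of one instruction at a time.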

Computer chips derived in a Darwinian environment

Courtesy: University of Twente


If that ‘computer chip’ looks like a brain to you, good, since that’s what the image is intended to illustrate, assuming I’ve correctly understood the Sept. 21, 2015 news item on Nanowerk (Note: A link has been removed),

Researchers of the MESA+ Institute for Nanotechnology and the CTIT Institute for ICT Research at the University of Twente in The Netherlands have demonstrated working electronic circuits that have been produced in a radically new way, using methods that resemble Darwinian evolution. The size of these circuits is comparable to the size of their conventional counterparts, but they are much closer to natural networks like the human brain. The findings promise a new generation of powerful, energy-efficient electronics, and have been published in the leading British journal Nature Nanotechnology (“Evolution of a Designless Nanoparticle Network into Reconfigurable Boolean Logic”).

A Sept. 21, 2015 University of Twente press release, which originated the news item, explains why and how they have decided to mimic nature to produce computer chips,

One of the greatest successes of the 20th century has been the development of digital computers. During the last decades these computers have become more and more powerful by integrating ever smaller components on silicon chips. However, it is becoming increasingly hard and extremely expensive to continue this miniaturisation. Current transistors consist of only a handful of atoms. It is a major challenge to produce chips in which the millions of transistors have the same characteristics, and thus to make the chips operate properly. Another drawback is that their energy consumption is reaching unacceptable levels. It is obvious that one has to look for alternative directions, and it is interesting to see what we can learn from nature. Natural evolution has led to powerful ‘computers’ like the human brain, which can solve complex problems in an energy-efficient way. Nature exploits complex networks that can execute many tasks in parallel.

Moving away from designed circuits

The approach of the researchers at the University of Twente is based on methods that resemble those found in Nature. They have used networks of gold nanoparticles for the execution of essential computational tasks. Contrary to conventional electronics, they have moved away from designed circuits. By using ‘designless’ systems, costly design mistakes are avoided. The computational power of their networks is enabled by applying artificial evolution. This evolution takes less than an hour, rather than millions of years. By applying electrical signals, one and the same network can be configured into 16 different logical gates. The evolutionary approach works around – or can even take advantage of – possible material defects that can be fatal in conventional electronics.

Powerful and energy-efficient

It is the first time that scientists have succeeded in this way in realizing robust electronics with dimensions that can compete with commercial technology. According to prof. Wilfred van der Wiel, the realized circuits currently still have limited computing power. “But with this research we have delivered proof of principle: demonstrated that our approach works in practice. By scaling up the system, real added value will be produced in the future. Take for example the efforts to recognize patterns, such as with face recognition. This is very difficult for a regular computer, while humans and possibly also our circuits can do this much better.”  Another important advantage may be that this type of circuitry uses much less energy, both in the production, and during use. The researchers anticipate a wide range of applications, for example in portable electronics and in the medical world.
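Curious readers might like a feel for how “artificial evolution” can configure one and the same physical system into different logic gates. Here is a toy sketch (my own illustration, not the Twente team’s method: the “network” is a made-up nonlinear function, and the control-voltage setup is invented): a simple evolutionary loop searches for control voltages that make a fixed network behave as a NAND gate.

```python
import math
import random

def network_output(a, b, c):
    # Toy stand-in for the nanoparticle network: a fixed nonlinear function
    # of the two logic inputs (a, b) and four "control voltages" c[0..3].
    return math.tanh(c[0] + c[1] * a + c[2] * b + c[3] * a * b)

def fitness(c, gate):
    # Count how many rows of the gate's truth table the thresholded output matches.
    return sum(
        (network_output(a, b, c) > 0) == bool(gate(a, b))
        for a in (0, 1) for b in (0, 1)
    )

def evolve(gate, steps=5000, sigma=0.4):
    # (1+1) evolution: mutate the control voltages, keep mutants that do no worse.
    best = [random.uniform(-2, 2) for _ in range(4)]
    for _ in range(steps):
        mutant = [v + random.gauss(0, sigma) for v in best]
        if fitness(mutant, gate) >= fitness(best, gate):
            best = mutant
        if fitness(best, gate) == 4:
            break  # all four truth-table rows correct
    return best

NAND = lambda a, b: not (a and b)
controls = evolve(NAND)
print(fitness(controls, NAND))  # typically 4: the network now acts as a NAND gate
```

Pointing the same loop at a different target truth table reuses the very same “hardware”, which is the essence of the reconfigurability the researchers report; no circuit is ever designed.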

Here’s a link to and a citation for the paper,

Evolution of a designless nanoparticle network into reconfigurable Boolean logic by S. K. Bose, C. P. Lawrence, Z. Liu, K. S. Makarenko, R. M. J. van Damme, H. J. Broersma, & W. G. van der Wiel. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.207 Published online 21 September 2015

This paper is behind a paywall.

Final comment, this research, especially with the reference to facial recognition, reminds me of memristors and neuromorphic engineering. I have written many times on this topic and you should be able to find most of the material by using ‘memristor’ as your search term in the blog search engine. For the mildly curious, here are links to two recent memristor articles, Knowm (sounds like gnome?) A memristor company with a commercially available product in a Sept. 10, 2015 posting and Memristor, memristor, you are popular in a May 15, 2015 posting.

Knowm (sounds like gnome?) A memristor company with a commercially available product

German garden gnome. Date: 11 August 2006 (original upload date). Source: Transferred from en.wikipedia to Commons. Author: Colibri1968 at English Wikipedia


I thought the ‘gnome’/‘knowm’ homonym or, more precisely, homophone, might be an amusing way to lead into yet another memristor story on this blog. A Sept. 3, 2015 news item on Azonano features a ‘memristor-based’ company/organization, Knowm,

Knowm Inc., a start-up pioneering next-generation advanced computing architectures and technology, today announced they are the first to develop and make commercially-available memristors with bi-directional incremental learning capability.

The device was developed through research from Boise State University’s [Idaho, US] Dr. Kris Campbell, and this new data unequivocally confirms Knowm’s memristors are capable of bi-directional incremental learning. This has been previously deemed impossible in filamentary devices by Knowm’s competitors, including IBM [emphasis mine], despite significant investment in materials, research and development. With this advancement, Knowm delivers the first commercial memristors that can adjust resistance in incremental steps in both directions rather than only one direction with an all-or-nothing ‘erase’. This advancement opens the gateway to extremely efficient and powerful machine learning and artificial intelligence applications.

A Sept. 2, 2015 Knowm news release (also on MarketWatch), which originated the news item, provides more details,

“Having commercially-available memristors with bi-directional voltage-dependent incremental capability is a huge step forward for the field of machine learning and, particularly, AHaH Computing,” said Alex Nugent, CEO and co-founder of Knowm. “We have been dreaming about this device and developing the theory for how to apply them to best maximize their potential for more than a decade, but the lack of capability confirmation had been holding us back. This data is truly a monumental technical milestone and it will serve as a springboard to catapult Knowm and AHaH Computing forward.”

Memristors with the bi-directional incremental resistance change property are the foundation for developing learning hardware such as Knowm Inc.’s recently announced Thermodynamic RAM (kT-RAM) and help realize the full potential of AHaH Computing. The availability of kT-RAM will have the largest impact in fields that require higher computational power for machine learning tasks like autonomous robotics, big-data analysis and intelligent Internet assistants. kT-RAM radically increases the efficiency of synaptic integration and adaptation operations by reducing them to physically adaptive ‘analog’ memristor-based circuits. Synaptic integration and adaptation are the core operations behind tasks such as pattern recognition and inference. Knowm Inc. is the first company in the world to bring this technology to market.

Knowm is ushering in the next phase of computing with the first general-purpose neuromemristive processor specification. Earlier this year the company announced the commercial availability of the first products in support of the kT-RAM technology stack. These include the sale of discrete memristor chips, a Back End of Line (BEOL) CMOS+memristor service, the SENSE and Application Servers and their first application named “Knowm Anomaly”, the first application built based on the theory of AHaH Computing and kT-RAM architecture. Knowm also simultaneously announced the company’s visionary developer program for organizations and individual developers. This includes the Knowm API, which serves as development hardware and training resources for co-developing the Knowm technology stack.
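To see why “bi-directional incremental” matters for learning, here is a toy contrast between the two device behaviours described in the release above. This is my sketch, not Knowm’s device physics, and all the numbers are invented.

```python
class IncrementalMemristor:
    """Toy bi-directional incremental device: conductance can be nudged
    up OR down in small steps, which is what gradual learning needs."""
    def __init__(self, g=0.5, step=0.05, g_min=0.0, g_max=1.0):
        self.g, self.step, self.g_min, self.g_max = g, step, g_min, g_max

    def pulse(self, polarity):
        # +1 nudges conductance up, -1 nudges it down, clipped to the range
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))


class UnidirectionalMemristor:
    """Toy contrast: incremental increase only; any decrease is a full
    all-or-nothing 'erase' back to the minimum conductance."""
    def __init__(self, g=0.5, step=0.05, g_min=0.0, g_max=1.0):
        self.g, self.step, self.g_min, self.g_max = g, step, g_min, g_max

    def pulse(self, polarity):
        if polarity > 0:
            self.g = min(self.g_max, self.g + self.step)
        else:
            self.g = self.g_min  # erase: no gradual weakening possible


bi = IncrementalMemristor()
uni = UnidirectionalMemristor()
for device in (bi, uni):
    device.pulse(+1)
    device.pulse(+1)
    device.pulse(-1)  # ask the synapse to weaken slightly
print(round(bi.g, 2), round(uni.g, 2))  # 0.55 0.0
```

The bi-directional device ends up slightly strengthened, as a synaptic weight should; the unidirectional device loses everything on the first “weaken” request, which is why incremental adjustment in both directions matters for synaptic adaptation.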

Knowm certainly has big ambitions. I’m a little surprised they mentioned IBM rather than HP Labs, which is where researchers first claimed to find evidence of the existence of memristors in 2008 (the story is noted in my Nanotech Mysteries wiki here). As I understand it, HP Labs is working busily (having missed a few deadlines) on developing a commercial product using memristors.

For the curious, my latest informal roundup of memristor stories is in a May 15, 2015 posting.

Getting back to Knowm and big ambitions, here’s Alex Nugent, Knowm CEO (Chief Executive Officer) and co-founder talking about the company and its technology,

Researchers at Karolinska Institute (Sweden) build an artificial neuron

Unlike my post earlier today (June 26, 2015) about BrainChip, this is not about neuromorphic engineering (artificial brain), although I imagine this new research from the Karolinska Institute (Institutet) will be of some interest to that community. This research was done in the interest of developing* therapeutic interventions for brain diseases. One aspect of this news item/press release I find particularly interesting is the insistence that “no living parts” were used to create the artificial neuron,

A June 24, 2015 news item on ScienceDaily describes what the artificial neuron can do,

Scientists have managed to build a fully functional neuron by using organic bioelectronics. This artificial neuron contain [sic] no ‘living’ parts, but is capable of mimicking the function of a human nerve cell and communicate in the same way as our own neurons do. [emphasis mine]

A June 24, 2015 Karolinska Institute press release (also on EurekAlert), which originated the news item, describes how neurons communicate in the brain, standard techniques for stimulating neuronal cells, and the scientists’ work on a technique to improve stimulation,

Neurons are isolated from each other and communicate with the help of chemical signals, commonly called neurotransmitters or signal substances. Inside a neuron, these chemical signals are converted to an electrical action potential, which travels along the axon of the neuron until it reaches the end. Here at the synapse, the electrical signal is converted to the release of chemical signals, which via diffusion can relay the signal to the next nerve cell.

To date, the primary technique for neuronal stimulation in human cells is based on electrical stimulation. However, scientists at the Swedish Medical Nanoscience Centre (SMNC) at Karolinska Institutet, in collaboration with colleagues at Linköping University, have now created an organic bioelectronic device that is capable of receiving chemical signals, which it can then relay to human cells.

“Our artificial neuron is made of conductive polymers and it functions like a human neuron,” says lead investigator Agneta Richter-Dahlfors, professor of cellular microbiology. “The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal. This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored.”

The research team hope that their innovation, presented in the journal Biosensors & Bioelectronics, will improve treatments for neurological disorders which currently rely on traditional electrical stimulation. The new technique makes it possible to stimulate neurons based on specific chemical signals received from different parts of the body. In the future, this may help physicians to bypass damaged nerve cells and restore neural function.

“Next, we would like to miniaturize this device to enable implantation into the human body,” says Agneta Richter-Dahlfors. “We foresee that in the future, by adding the concept of wireless communication, the biosensor could be placed in one part of the body, and trigger release of neurotransmitters at distant locations. Using such auto-regulated sensing and delivery, or possibly a remote control, new and exciting opportunities for future research and treatment of neurological disorders can be envisaged.”

Here’s a link to and a citation for the paper,

An organic electronic biomimetic neuron enables auto-regulated neuromodulation by Daniel T. Simon, Karin C. Larsson, David Nilsson, Gustav Burström, Dagmar Galter, Magnus Berggren, and Agneta Richter-Dahlfors. Biosensors and Bioelectronics Volume 71, 15 September 2015, Pages 359–364 doi:10.1016/j.bios.2015.04.058

This paper is behind a paywall.

As to anyone (other than myself) who may be curious about exactly what they used (other than “living parts”) to create an artificial neuron, there’s the paper’s abstract,

Current therapies for neurological disorders are based on traditional medication and electric stimulation. Here, we present an organic electronic biomimetic neuron, with the capacity to precisely intervene with the underlying malfunctioning signalling pathway using endogenous substances. The fundamental function of neurons, defined as chemical-to-electrical-to-chemical signal transduction, is achieved by connecting enzyme-based amperometric biosensors and organic electronic ion pumps. Selective biosensors transduce chemical signals into an electric current, which regulates electrophoretic delivery of chemical substances without necessitating liquid flow. Biosensors detected neurotransmitters in physiologically relevant ranges of 5–80 µM, showing linear response above 20 µM with approx. 0.1 nA/µM slope. When exceeding defined threshold concentrations, biosensor output signals, connected via custom hardware/software, activated local or distant neurotransmitter delivery from the organic electronic ion pump. Changes of 20 µM glutamate or acetylcholine triggered diffusive delivery of acetylcholine, which activated cells via receptor-mediated signalling. This was observed in real-time by single-cell ratiometric Ca2+ imaging. The results demonstrate the potential of the organic electronic biomimetic neuron in therapies involving long-range neuronal signalling by mimicking the function of projection neurons. Alternatively, conversion of glutamate-induced descending neuromuscular signals into acetylcholine-mediated muscular activation signals may be obtained, applicable for bridging injured sites and active prosthetics.
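To make the numbers in the abstract a little more concrete, here is a toy sketch of the sense-then-deliver loop. The slope (~0.1 nA/µM) and linear range (above ~20 µM) come from the abstract; the delivery threshold of 40 µM is my invention for illustration.

```python
def biosensor_current(conc_uM, slope_nA_per_uM=0.1, linear_from_uM=20.0):
    """Toy biosensor transfer function based on the figures quoted in the
    abstract: roughly linear response above ~20 uM at ~0.1 nA/uM."""
    if conc_uM <= linear_from_uM:
        return 0.0
    return slope_nA_per_uM * (conc_uM - linear_from_uM)

def should_deliver(conc_uM, threshold_uM=40.0):
    """Trigger acetylcholine delivery when the sensed concentration exceeds
    a set threshold (the 40 uM value here is purely illustrative)."""
    return conc_uM > threshold_uM

print(round(biosensor_current(60), 1))  # 4.0 nA
print(should_deliver(60), should_deliver(30))  # True False
```

The real device, of course, does this transduction physically: an enzyme-based biosensor produces the current, and the ion pump, not an `if` statement, performs the delivery.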

While it’s true neither are “living parts,” I believe both enzymes and organic electronic ion pumps can be found in biological organisms. The insistence on ‘nonliving’ in the press release suggests that scientists in Europe, if nowhere else, are still quite concerned about any hint that they are working on genetically modified organisms (GMO). It’s ironic when you consider that people blithely use enzyme-based cleaning and beauty products but one can appreciate the* scientists’ caution.

* ‘develop’ changed to ‘developing’ and ‘the’ added on July 3, 2015.

Is it time to invest in a ‘brain chip’ company?

This story takes a few twists and turns. First, ‘brain chips’ as they’re sometimes called would allow, theoretically, computers to learn and function like human brains. (Note: There’s another type of ‘brain chip’ which could be implanted in human brains to help deal with diseases such as Parkinson’s and Alzheimer’s. *Today’s [June 26, 2015] earlier posting about an artificial neuron points at some of the work being done in this area.*)

Returning to the ‘brain chip’ at hand. Second, there’s a company called BrainChip, which has one patent and another pending for, yes, a ‘brain chip’.

The company, BrainChip, founded in Australia and now headquartered in California’s Silicon Valley, recently sparked some investor interest in Australia. From an April 7, 2015 article by Timna Jacks for the Australian Financial Review,

Former mining stock Aziana Limited has whet Australian investors’ appetite for science fiction, with its share price jumping 125 per cent since it announced it was acquiring a US-based tech company called BrainChip, which promises artificial intelligence through a microchip that replicates the neural system of the human brain.

Shares in the company closed at 9¢ before the Easter long weekend, having been priced at just 4¢ when the backdoor listing of BrainChip was announced to the market on March 18.

Creator of the patented digital chip, Peter Van Der Made told The Australian Financial Review the technology has the capacity to learn autonomously, due to its composition of 10,000 biomimic neurons, which, through a process known as synaptic time-dependent plasticity, can form memories and associations in the same way as a biological brain. He said it works 5000 times faster and uses a thousandth of the power of the fastest computers available today.

Mr Van Der Made is inviting technology partners to license the technology for their own chips and products, and is donating the technology to university laboratories in the US for research.

The Netherlands-born Australian, now based in southern California, was inspired to create the brain-like chip in 2004, after working at the IBM Internet Security Systems for two years, where he was chief scientist for behaviour analysis security systems. …

A June 23, 2015 article by Tony Malkovic on phys.org provides a few more details about BrainChip and about the deal,

Mr Van der Made and the company, also called BrainChip, are now based in Silicon Valley in California and he returned to Perth last month as part of the company’s recent merger and listing on the Australian Stock Exchange.

He says BrainChip has the ability to learn autonomously, evolve and associate information and respond to stimuli like a brain.

Mr Van der Made says the company’s chip technology is more than 5,000 times faster than other technologies, yet uses only 1/1,000th of the power.

“It’s a hardware only solution, there is no software to slow things down,” he says.

“It doesn’t execute instructions, it learns and applies what it has learnt to new information.

“BrainChip is on the road to position itself at the forefront of artificial intelligence,” he says.

“We have a clear advantage, at least 10 years, over anybody else in the market, that includes IBM.”

BrainChip is aiming at the global semiconductor market involving almost anything that involves a microprocessor.

You can find out more about the company, BrainChip here. The site does have a little more information about the technology,

Spiking Neuron Adaptive Processor (SNAP)

BrainChip’s inventor, Peter van der Made, has created an exciting new Spiking Neural Networking technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is developed as a digital design containing a configurable “sea of biomimic neurons”.

The technology is fast, completely digital, and consumes very low power, making it feasible to integrate large networks into portable battery-operated products, something that has never been possible before.

BrainChip neurons autonomously learn through a process known as STDP (Synaptic Time Dependent Plasticity). BrainChip’s fully digital neurons process input spikes directly in hardware. Sensory neurons convert physical stimuli into spikes. Learning occurs when the input is intense, or repeating through feedback and this is directly correlated to the way the brain learns.
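For the technically inclined, the STDP rule mentioned above is easy to sketch in software. What follows is the generic exponential STDP window from the neuroscience literature, not BrainChip’s proprietary implementation, and the amplitudes and time constant are illustrative.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic exponential STDP window (schematic, not BrainChip's rule).

    If the presynaptic spike precedes the postsynaptic one (dt > 0), the
    synapse is strengthened; if it follows, the synapse is weakened. The
    closer the two spikes are in time, the larger the change."""
    dt = t_post - t_pre  # spike-timing difference in ms
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)  # depression
    return 0.0

print(stdp_dw(10, 15) > 0)  # True: pre before post strengthens the synapse
print(stdp_dw(15, 10) < 0)  # True: post before pre weakens it
```

In a hardware implementation the exponential decay is typically realized by an analog or digital trace left behind by each spike, so no clock-driven program needs to evaluate the formula.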

Computing Artificial Neural Networks (ANNs)

The brain consists of specialized nerve cells that communicate with one another. Each such nerve cell is called a neuron. The inputs are memory nodes called synapses. When the neuron associates information, it produces a ‘spike’ or a ‘spike train’. Each spike is a pulse that triggers a value in the next synapse. Synapses store values, similar to the way a computer stores numbers. In combination, these values determine the function of the neural network. Synapses acquire values through learning.

In Artificial Neural Networks (ANNs) this complex function is generally simplified to a static summation and compare function, which severely limits computational power. BrainChip has redefined how neural networks work, replicating the behaviour of the brain. BrainChip’s artificial neurons are completely digital, biologically realistic resulting in increased computational power, high speed and extremely low power consumption.
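The “static summation and compare function” being criticized here is simple to show concretely. This is a generic textbook artificial neuron, nothing BrainChip-specific, with weights and threshold chosen to make an AND gate.

```python
def artificial_neuron(inputs, weights, threshold):
    """The 'static summation and compare' model of a neuron: a weighted
    sum of the inputs compared against a fixed threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate from a single summation-and-compare neuron
print(artificial_neuron([1, 1], [0.6, 0.6], 1.0))  # 1
print(artificial_neuron([1, 0], [0.6, 0.6], 1.0))  # 0
```

The contrast BrainChip is drawing is that this model reduces all of a biological neuron’s temporal, spiking behaviour to a single instantaneous arithmetic operation.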

The Problem with Artificial Neural Networks

Standard ANNs, running on computer hardware are processed sequentially; the processor runs a program that defines the neural network. This consumes considerable time and because these neurons are processed sequentially, all this delayed time adds up resulting in a significant linear decline in network performance with size.

BrainChip neurons are all mapped in parallel, so the performance of the network does not depend on the size of the network, providing a clear speed advantage. Because there is no decline in performance with network size, and because learning takes place in parallel within each synapse, STDP learning is very fast.

A hardware solution

BrainChip’s digital neural technology is the only custom hardware solution that is capable of STDP learning. The hardware requires no coding and has no software as it evolves learning through experience and user direction.

The BrainChip neuron is unique in that it is completely digital, behaves asynchronously like an analog neuron, and has a higher level of biological realism. It is more sophisticated than software neural models and is many orders of magnitude faster. The BrainChip neuron consists entirely of binary logic gates with no traditional CPU core. Hence, there are no ‘programming’ steps. Learning and training takes the place of programming and coding. Like a child learning a task for the first time.

Software ‘neurons’, to compensate for limited processing power, are simplified to a point where they do not resemble any of the features of a biological neuron. This is due to the sequential nature of computers, whereby all data has to pass through a central processor in chunks of 16, 32 or 64 bits. In contrast, the brain’s network is parallel and processes the equivalent of millions of data bits simultaneously.

A significantly faster technology

Performing emulation in digital hardware has distinct advantages over software. As software is processed sequentially, one instruction at a time, Software Neural Networks perform slower with increasing size. Parallel hardware does not have this problem and maintains the same speed no matter how large the network is. Another advantage of hardware is that it is more power efficient by several orders of magnitude.

The speed of the BrainChip device is unparalleled in the industry.

For large neural networks a GPU (Graphics Processing Unit) is ~70 times faster than the Intel i7 executing a similar size neural network. The BrainChip neural network is faster still and takes far fewer CPU (Central Processing Unit) cycles, with just a little communication overhead, which means that the CPU is available for other tasks. The BrainChip network also responds much faster than a software network accelerating the performance of the entire system.

The BrainChip network is completely parallel, with no sequential dependencies. This means that the network does not slow down with increasing size.

Endorsed by the neuroscience community

A number of the world’s pre-eminent neuroscientists have endorsed the technology and have agreed to joint development projects.

BrainChip has the potential to become the de facto standard for all autonomous learning technology and computer products.

Patented

BrainChip’s autonomous learning technology patent was granted on the 21st September 2008 (Patent number US 8,250,011 “Autonomous learning dynamic artificial neural computing device and brain inspired system”). BrainChip is the only company in the world to have achieved autonomous learning in a network of Digital Neurons without any software.

A prototype Spiking Neuron Adaptive Processor was designed as a ‘proof of concept’ chip.

The first tests were completed at the end of 2007 and this design was used as the foundation for the US patent application which was filed in 2008. BrainChip has also applied for a continuation-in-part patent filed in 2012, the “Method and System for creating Dynamic Neural Function Libraries”, US Patent Application 13/461,800 which is pending.

Van der Made doesn’t seem to have published any papers on this work and the description of the technology provided on the website is frustratingly vague. There are many acronyms for processes but no mention of what this hardware might be. For example, is it based on a memristor or some kind of atomic ionic switch or something else altogether?

It would be interesting to find out more but, presumably, van der Made wishes to withhold details. There are many companies following the same strategy while pursuing what they view as a business advantage.

* Artificial neuron link added June 26, 2015 at 1017 hours PST.

A more complex memristor: from two terminals to three for brain-like computing

Researchers have developed a more complex memristor device than has previously been achieved, according to an April 6, 2015 Northwestern University news release (also on EurekAlert),

Researchers are always searching for improved technologies, but the most efficient computer possible already exists. It can learn and adapt without needing to be programmed or updated. It has nearly limitless memory, is difficult to crash, and works at extremely fast speeds. It’s not a Mac or a PC; it’s the human brain. And scientists around the world want to mimic its abilities.

Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons.

“Computers are very impressive in many ways, but they’re not equal to the mind,” said Mark Hersam, the Bette and Neison Harris Chair in Teaching Excellence in Northwestern University’s McCormick School of Engineering. “Neurons can achieve very complicated computation with very low power consumption compared to a digital computer.”

A team of Northwestern researchers, including Hersam, has accomplished a new step forward in electronics that could bring brain-like computing closer to reality. The team’s work advances memory resistors, or “memristors,” which are resistors in a circuit that “remember” how much current has flowed through them.

“Memristors could be used as a memory element in an integrated circuit or computer,” Hersam said. “Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if you lose power.”

Current computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable. But there’s a problem: memristors are two-terminal electronic devices, which can only control one voltage channel. Hersam wanted to transform the memristor into a three-terminal device, allowing it to be used in more complex electronic circuits and systems.
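For readers new to memristors, the “remembers how much current has flowed” idea can be sketched with a toy model. The ON/OFF resistance values and the update constant below are invented for illustration; real devices are considerably more complicated.

```python
class Memristor:
    """Toy two-terminal memristor: resistance depends on the net charge
    that has flowed through the device, and the state persists when the
    power is removed (non-volatility)."""
    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on, self.r_off = r_on, r_off
        self.state = 0.0  # 0 -> fully OFF (high resistance), 1 -> fully ON

    def apply_pulse(self, voltage, dt=1.0, k=0.01):
        # charge flow moves the internal state; the sign of V sets direction
        self.state = min(1.0, max(0.0, self.state + k * voltage * dt))

    @property
    def resistance(self):
        # linear mix between the ON and OFF resistances
        return self.r_on * self.state + self.r_off * (1.0 - self.state)


m = Memristor()
for _ in range(50):
    m.apply_pulse(+1.0)       # write: drive the device toward low resistance
r_after_write = m.resistance  # state is now 0.5, halfway between OFF and ON
# remove power: the state, and therefore the resistance, is retained
print(round(r_after_write))   # 8050
```

The Northwestern advance is, in effect, adding a third (gate) terminal that tunes this behaviour, something a model like the above, with its single current path, cannot express.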

The memristor is of some interest to a number of other parties, prominent amongst them the University of Michigan’s Professor Wei Lu and HP (Hewlett Packard) Labs, both of whom are mentioned in one of my more recent memristor pieces, a June 26, 2014 post.

Getting back to Northwestern,

Hersam and his team met this challenge by using single-layer molybdenum disulfide (MoS2), an atomically thin, two-dimensional nanomaterial semiconductor. Much like the way fibers are arranged in wood, atoms are arranged in a certain direction–called “grains”–within a material. The sheet of MoS2 that Hersam used has a well-defined grain boundary, which is the interface where two different grains come together.

“Because the atoms are not in the same orientation, there are unsatisfied chemical bonds at that interface,” Hersam explained. “These grain boundaries influence the flow of current, so they can serve as a means of tuning resistance.”

When a large electric field is applied, the grain boundary literally moves, causing a change in resistance. By using MoS2 with this grain boundary defect instead of the typical metal-oxide-metal memristor structure, the team presented a novel three-terminal memristive device that is widely tunable with a gate electrode.

“With a memristor that can be tuned with a third electrode, we have the possibility to realize a function you could not previously achieve,” Hersam said. “A three-terminal memristor has been proposed as a means of realizing brain-like computing. We are now actively exploring this possibility in the laboratory.”

Here’s a link to and a citation for the paper,

Gate-tunable memristive phenomena mediated by grain boundaries in single-layer MoS2 by Vinod K. Sangwan, Deep Jariwala, In Soo Kim, Kan-Sheng Chen, Tobin J. Marks, Lincoln J. Lauhon, & Mark C. Hersam. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.56 Published online 06 April 2015

This paper is behind a paywall but there is a free preview available through ReadCube Access.

Dexter Johnson has written about this latest memristor development in an April 9, 2015 posting on his Nanoclast blog (on the IEEE [Institute for Electrical and Electronics Engineers] website) where he notes this (Note: A link has been removed),

The memristor seems to generate fairly polarized debate, especially here on this website in the comments on stories covering the technology. The controversy seems to fall along the lines that the device that HP Labs’ Stan Williams and Greg Snider developed back in 2008 doesn’t exactly line up with the original theory of the memristor proposed by Leon Chua back in 1971.

It seems the ‘debate’ has evolved from issues about how the memristor is categorized. I wonder if there’s still discussion about whether or not HP Labs is attempting to develop a patent thicket of sorts.