Tag Archives: synapses

A new memristor circuit

Apparently engineers at the University of Massachusetts at Amherst have developed a new kind of memristor. A Sept. 29, 2016 news item on Nanowerk makes the announcement (Note: A link has been removed),

Engineers at the University of Massachusetts Amherst are leading a research team that is developing a new type of nanodevice for computer microprocessors that can mimic the functioning of a biological synapse—the place where a signal passes from one nerve cell to another in the body. The work is featured in the advance online publication of Nature Materials (“Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing”).

Such neuromorphic computing, in which microprocessors are configured more like human brains, is one of the most promising transformative computing technologies currently under study.

While it doesn’t sound different from any other memristor, that’s misleading. Do read on. A Sept. 27, 2016 University of Massachusetts at Amherst news release, which originated the news item, provides more detail about the researchers and the work,

J. Joshua Yang and Qiangfei Xia are professors in the electrical and computer engineering department in the UMass Amherst College of Engineering. Yang describes the research as part of collaborative work on a new type of memristive device.

Memristive devices are electrical resistance switches that can alter their resistance based on the history of applied voltage and current. These devices can store and process information and offer several key performance characteristics that exceed conventional integrated circuit technology.
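
To make that "history of applied voltage and current" idea concrete, here is a toy simulation using the textbook linear-drift memristor model (the model HP Labs used to describe its 2008 device), not the diffusive device in the paper above; all parameter values are illustrative:

```python
import numpy as np

# Toy linear-drift memristor model.  It shows the defining property quoted in
# the text: resistance depends on the history of applied voltage and current.
R_ON, R_OFF = 100.0, 16e3   # ohms: fully doped / undoped film resistance
MU, D = 1e-14, 1e-9         # dopant mobility (m^2/(V*s)), film thickness (m)
dt = 1e-5                   # s: simulation time step
w = 0.1                     # normalized internal state in [0, 1]

def step(v, w):
    """Advance the internal state one time step under applied voltage v."""
    m = R_ON * w + R_OFF * (1.0 - w)      # current resistance
    i = v / m                             # Ohm's law
    w = w + dt * MU * R_ON / D**2 * i     # linear drift of the doped region
    return min(max(w, 0.0), 1.0), i

t = np.arange(0, 0.02, dt)
trace = []
for v in np.sin(2 * np.pi * 50 * t):      # one period of a 50 Hz drive
    w, i = step(v, w)
    trace.append((v, i))

# Plotting the (v, i) pairs would show a pinched hysteresis loop: the same
# voltage yields different currents depending on what came before.
```

The diffusive device described in the paper adds spontaneous relaxation on top of this kind of history dependence, which is what makes it synapse-like.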

“Memristors have become a leading candidate to enable neuromorphic computing by reproducing the functions in biological synapses and neurons in a neural network system, while providing advantages in energy and size,” the researchers say.

Neuromorphic computing—meaning microprocessors configured more like human brains than like traditional computer chips—is one of the most promising transformative computing technologies currently under intensive study. Xia says, “This work opens a new avenue of neuromorphic computing hardware based on memristors.”

They say that most previous work in this field has been unable to implement diffusive dynamics in memristors without resorting to the large, standard circuit elements found in integrated circuits commonly used in microprocessors, microcontrollers, static random access memory and other digital logic circuits.

The researchers say they proposed and demonstrated a bio-inspired solution to the diffusive dynamics that is fundamentally different from the standard technology for integrated circuits while sharing great similarities with synapses. They say, “Specifically, we developed a diffusive-type memristor where diffusion of atoms offers similar dynamics and the needed time-scales as its bio-counterpart, leading to a more faithful emulation of actual synapses, i.e., a true synaptic emulator.”

The researchers say, “The results here provide an encouraging pathway toward synaptic emulation using diffusive memristors for neuromorphic computing.”

Here’s a link to and a citation for the paper,

Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing by Zhongrui Wang, Saumil Joshi, Sergey E. Savel’ev, Hao Jiang, Rivu Midya, Peng Lin, Miao Hu, Ning Ge, John Paul Strachan, Zhiyong Li, Qing Wu, Mark Barnell, Geng-Lin Li, Huolin L. Xin, R. Stanley Williams [emphasis mine], Qiangfei Xia, & J. Joshua Yang. Nature Materials (2016) doi:10.1038/nmat4756 Published online 26 September 2016

This paper is behind a paywall.

I’ve emphasized R. Stanley Williams’ name as he was the lead researcher on the HP Labs team that, in 2008, proved Leon Chua’s 1971 theory about the memristor and demonstrated engineering control of the device. (Bernard Widrow predicted and proved the existence of something he termed a ‘memistor’ in the 1960s. Chua arrived at his ‘memristor’ theory independently.)

Austin Silver in a Sept. 29, 2016 posting on The Human OS blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves into this latest memristor research (Note: Links have been removed),

In research published in Nature Materials on 26 September [2016], Yang and his team mimicked a crucial underlying component of how synaptic connections get stronger or weaker: the flow of calcium.

The movement of calcium into or out of the neuronal membrane, neuroscientists have found, directly affects the connection. Chemical processes move the calcium in and out, triggering a long-term change in the synapses’ strength. Research published in 2015 in ACS Nano Letters and Advanced Functional Materials found that some types of memristors can simulate some of the calcium behavior, but not all of it.

In the new research, Yang combined two types of memristors in series to create an artificial synapse. The hybrid device more closely mimics biological synapse behavior—the calcium flow in particular, Yang says.

The new memristor used, called a diffusive memristor because atoms in the resistive material move even without an applied voltage when the device is in the high-resistance state, was a dielectric film sandwiched between Pt [platinum] or Au [gold] electrodes. The film contained Ag [silver] nanoparticles, which would play the role of calcium in the experiments.

By tracking the movement of the silver nanoparticles inside the diffusive memristor, the researchers noticed a striking similarity to how calcium functions in biological systems.

A voltage pulse to the hybrid device drove silver into the gap between the diffusive memristor’s two electrodes, creating a filament bridge. After the pulse died away, the filament started to break and the silver moved back; resistance increased.

Like the case with calcium, a force made silver go in and a force made silver go out.
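
That pulse-then-relax behavior can be sketched with a toy two-time-constant model; the time constants and conductance limits below are made-up illustrative numbers, not the paper's measurements:

```python
import math

# Sketch of the diffusive memristor dynamics described above: a voltage pulse
# grows a silver filament (conductance rises); once the pulse ends, the Ag
# diffuses back and the filament dissolves (conductance relaxes).
TAU_RISE, TAU_DECAY = 1e-4, 5e-4   # s: filament growth / dissolution (assumed)
G_MIN, G_MAX = 1e-9, 1e-4          # siemens: off / on conductance (assumed)
PULSE_END = 1e-3                   # s: the voltage pulse lasts 1 ms

def conductance(t):
    if t <= PULSE_END:  # during the pulse: the filament bridges the gap
        return G_MIN + (G_MAX - G_MIN) * (1 - math.exp(-t / TAU_RISE))
    # after the pulse: spontaneous relaxation back toward the insulating state
    g_end = conductance(PULSE_END)
    return G_MIN + (g_end - G_MIN) * math.exp(-(t - PULSE_END) / TAU_DECAY)

during = conductance(5e-4)   # mid-pulse: high conductance (low resistance)
after = conductance(4e-3)    # 3 ms after the pulse: resistance has recovered
```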

To complete the artificial synapse, the researchers connected the diffusive memristor in series to another type of memristor that had been studied before.

When presented with a sequence of voltage pulses with particular timing, the artificial synapse showed the kind of long-term strengthening behavior a real synapse would, according to the researchers. “We think it is sort of a real emulation, rather than simulation because they have the physical similarity,” Yang says.
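
A crude way to see why pulse timing produces long-term strengthening in such a hybrid: if the next pulse arrives before the short-lived filament has relaxed, its effect compounds. The sketch below is a generic rate-sensitive plasticity toy, not the authors' device model; every number is invented for illustration:

```python
import math

# If a pulse arrives before the short-term (diffusive) trace has decayed,
# its contribution compounds into a larger lasting weight change.
TAU = 1e-3          # s: relaxation time of the short-term element (assumed)
INCREMENT = 0.1     # long-term weight gain per pulse, scaled by residual trace

def run(pulse_times):
    weight, trace, last = 1.0, 0.0, None
    for t in pulse_times:
        if last is not None:
            trace *= math.exp(-(t - last) / TAU)  # residual filament activity
        trace += 1.0                              # the pulse re-excites it
        weight += INCREMENT * trace               # compounds when closely spaced
        last = t
    return weight

fast = run([i * 2e-4 for i in range(10)])   # 10 pulses, 0.2 ms apart
slow = run([i * 2e-2 for i in range(10)])   # 10 pulses, 20 ms apart
# fast > slow: closely timed pulses strengthen the synapse more.
```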

I was glad to find some additional technical detail about this new memristor and to find the Human OS blog, which is new to me and according to its home page is a “biomedical blog, featuring the wearable sensors, big data analytics, and implanted devices that enable new ventures in personalized medicine.”

US white paper on neuromorphic computing (or the nanotechnology-inspired Grand Challenge for future computing)

The US has embarked on a number of what are called “Grand Challenges.” I first came across the concept when reading about the Bill and Melinda Gates (of Microsoft fame) Foundation. I gather these challenges are intended to provide funding for research that advances bold visions.

There is the US National Strategic Computing Initiative established on July 29, 2015 and its first anniversary results were announced one year to the day later. Within that initiative a nanotechnology-inspired Grand Challenge for Future Computing was issued and, according to a July 29, 2016 news item on Nanowerk, a white paper on the topic has been issued (Note: A link has been removed),

Today [July 29, 2016], Federal agencies participating in the National Nanotechnology Initiative (NNI) released a white paper (pdf) describing the collective Federal vision for the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing.

The grand challenge, announced on October 20, 2015, is to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” The white paper describes the technical priorities shared by the agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the research and development (R&D) needed to achieve key technical goals. By coordinating and collaborating across multiple levels of government, industry, academia, and nonprofit organizations, the nanotechnology and computer science communities can look beyond the decades-old approach to computing based on the von Neumann architecture and chart a new path that will continue the rapid pace of innovation beyond the next decade.

A July 29, 2016 US National Nanotechnology Coordination Office news release, which originated the news item, further and succinctly describes the contents of the paper,

“Materials and devices for computing have been and will continue to be a key application domain in the field of nanotechnology. As evident by the R&D topics highlighted in the white paper, this challenge will require the convergence of nanotechnology, neuroscience, and computer science to create a whole new paradigm for low-power computing with revolutionary, brain-like capabilities,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office. …

The white paper was produced as a collaboration by technical staff at the Department of Energy, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, and the Intelligence Community. …

The white paper titled “A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge” is 15 pp. and it offers tidbits such as this (Note: Footnotes not included),

A new materials base may be needed for future electronic hardware. While most of today’s electronics use silicon, this approach is unsustainable if billions of disposable and short-lived sensor nodes are needed for the coming Internet-of-Things (IoT). To what extent can the materials base for the implementation of future information technology (IT) components and systems support sustainability through recycling and bio-degradability? More sustainable materials, such as compostable or biodegradable systems (polymers, paper, etc.) that can be recycled or reused, may play an important role. The potential role for such alternative materials in the fabrication of integrated systems needs to be explored as well. [p. 5]

The basic architecture of computers today is essentially the same as those built in the 1940s—the von Neumann architecture—with separate compute, high-speed memory, and high-density storage components that are electronically interconnected. However, it is well known that continued performance increases using this architecture are not feasible in the long term, with power density constraints being one of the fundamental roadblocks.7 Further advances in the current approach using multiple cores, chip multiprocessors, and associated architectures are plagued by challenges in software and programming models. Thus, research and development is required in radically new and different computing architectures involving processors, memory, input-output devices, and how they behave and are interconnected. [p. 7]

Neuroscience research suggests that the brain is a complex, high-performance computing system with low energy consumption and incredible parallelism. A highly plastic and flexible organ, the human brain is able to grow new neurons, synapses, and connections to cope with an ever-changing environment. Energy efficiency, growth, and flexibility occur at all scales, from molecular to cellular, and allow the brain, from early to late stage, to never stop learning and to act with proactive intelligence in both familiar and novel situations. Understanding how these mechanisms work and cooperate within and across scales has the potential to offer tremendous technical insights and novel engineering frameworks for materials, devices, and systems seeking to perform efficient and autonomous computing. This research focus area is the most synergistic with the national BRAIN Initiative. However, unlike the BRAIN Initiative, where the goal is to map the network connectivity of the brain, the objective here is to understand the nature, methods, and mechanisms for computation, and how the brain performs some of its tasks. Even within this broad paradigm, one can loosely distinguish between neuromorphic computing and artificial neural network (ANN) approaches. The goal of neuromorphic computing is oriented towards a hardware approach to reverse engineering the computational architecture of the brain. On the other hand, ANNs include algorithmic approaches arising from machine learning, which in turn could leverage advancements and understanding in neuroscience as well as novel cognitive, mathematical, and statistical techniques. Indeed, the ultimate intelligent systems may well be the result of merging existing ANN (e.g., deep learning) and bio-inspired techniques. [p. 8]

As government documents go, this is quite readable.

For anyone interested in learning more about the future federal plans for computing in the US, there is a July 29, 2016 posting on the White House blog celebrating the first year of the US National Strategic Computing Initiative Strategic Plan (29 pp. PDF; awkward but that is the title).

Memory material with functions resembling synapses and neurons in the brain

This work comes from the University of Twente’s MESA+ Institute for Nanotechnology according to a July 8, 2016 news item on ScienceDaily,

Our brain does not work like a typical computer memory storing just ones and zeroes: thanks to a much larger variation in memory states, it can calculate faster while consuming less energy. Scientists at the MESA+ Institute for Nanotechnology of the University of Twente (The Netherlands) have now developed a ferro-electric material with a memory function resembling synapses and neurons in the brain, resulting in a multistate memory. …

A July 8, 2016 University of Twente press release, which originated the news item, provides more technical detail,

The material that could be the basic building block for ‘brain-inspired computing’ is lead-zirconium-titanate (PZT): a sandwich of materials with several attractive properties. One of them is that it is ferro-electric: you can switch it to a desired state, and this state remains stable after the electric field is gone. This is called polarization: it leads to a fast memory function that is non-volatile. Combined with processor chips, a computer could be designed that starts much faster, for example. The UT scientists have now added a thin layer of zinc oxide, 25 nanometers thick, to the PZT. They discovered that switching from one state to another not only happens from ‘zero’ to ‘one’ and vice versa. It is possible to control smaller areas within the crystal: will they be polarized (‘flip’) or not?

In a PZT layer without zinc oxide (ZnO) there are basically two memory states. With an added nanolayer of ZnO, every state in between is possible as well.

Multistate

By using variable writing times in those smaller areas, many states can be stored anywhere between zero and one. This resembles the way synapses and neurons ‘weigh’ signals in our brain. Multistate memories, coupled to transistors, could drastically improve the speed of pattern recognition, for example: our brain performs these kinds of tasks consuming only a fraction of the energy a computer system needs. Looking at the graphs, the writing times seem quite long compared to today’s processor speeds, but it is possible to write many memories in parallel. The function of the brain has already been mimicked in software such as neural networks, but in that case conventional digital hardware is still a limitation. The new material is a first step towards electronic hardware with a brain-like memory. Finding solutions for combining PZT with semiconductors, or even developing new kinds of semiconductors for this, is one of the next steps.
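
The variable-write-time idea can be sketched with a toy saturating-switching model; the time constant here is an assumption for illustration, not a measured PZT/ZnO value:

```python
import math

# Longer write pulses flip a larger fraction of ferroelectric domains, so the
# stored (non-volatile) polarization can sit anywhere between 'zero' and 'one'.
TAU_WRITE = 1e-6   # s: characteristic domain-switching time (assumed)

def write(pulse_width):
    """Fraction of domains polarized after a write pulse of the given width."""
    return 1.0 - math.exp(-pulse_width / TAU_WRITE)

# Distinct write times leave distinct intermediate states:
states = [write(width) for width in (0.2e-6, 1e-6, 5e-6)]
# states increases monotonically between 0 and 1 -- a multistate memory
# rather than a binary one.
```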

Here’s a link to and a citation for the paper,

Multistability in Bistable Ferroelectric Materials toward Adaptive Applications by Anirban Ghosh, Gertjan Koster, and Guus Rijnders. Advanced Functional Materials DOI: 10.1002/adfm.201601353 Version of Record online: 4 JUL 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Calming a synapse (part of a neuron) with graphene flakes

As we continue to colonize our own brains, there’s more news of graphene and neurons (see my Feb. 1, 2016 post featuring research from the same Italian team). A May 10, 2016 news item on ScienceDaily highlights work that could prove useful in treating epilepsy,

Innovative graphene technology to buffer the activity of synapses: this is the idea behind a recently published study in the journal ACS Nano coordinated by the International School for Advanced Studies in Trieste (SISSA) and the University of Trieste. In particular, the study showed how effective graphene oxide flakes are at interfering with excitatory synapses, an effect that could prove useful in new treatments for diseases like epilepsy.

I guess the press release took a while to make its way through translation; here’s more from the April 10, 2016 SISSA (International School for Advanced Studies) press release (also on EurekAlert),

The laboratory of SISSA’s Laura Ballerini, in collaboration with the University of Trieste, the University of Manchester and the University of Castilla-La Mancha, has discovered a new approach to modulating synapses. This methodology could be useful for treating diseases in which electrical nerve activity is altered. Ballerini and Maurizio Prato (University of Trieste) are the principal investigators of the project within the European flagship on graphene, a far-reaching 10-year international collaboration (one billion euros in funding) that studies innovative uses of the material.

Traditional treatments for neurological diseases generally include drugs that act on the brain or neurosurgery. Today, however, graphene technology is showing promise for these types of applications, and it is receiving increased attention from the scientific community. The method studied by Ballerini and colleagues uses “graphene nano-ribbons” (flakes), which buffer the activity of synapses simply by being present.

“We administered aqueous solutions of graphene flakes to cultured neurons in ‘chronic’ exposure conditions, repeating the operation every day for a week. Analyzing functional neuronal electrical activity, we then traced the effect on synapses” says Rossana Rauti, SISSA researcher and first author of the study.

In the experiments, the size of the flakes varied (10 microns or 80 nanometers), as did the type of graphene: in one condition graphene was used, in another, graphene oxide. “The ‘buffering’ effect on synaptic activity happens only with smaller flakes of graphene oxide and not in other conditions,” says Ballerini. “The effect, in the system we tested, is selective for the excitatory synapses, while it is absent in inhibitory ones.”

A Matter of Size

What is the origin of this selectivity? “We know that in principle graphene does not interact chemically with synapses in a significant way; its effect is likely due to its mere presence,” explains SISSA researcher and one of the study’s authors, Denis Scaini. “We do not yet have direct evidence, but our hypothesis is that there is a link with the sub-cellular organization of the synaptic space.”

A synapse is a contact point between one neuron and another where the nervous electrical signal “jumps” between a pre- and post-synaptic unit. [emphasis mine] There is a small gap or discontinuity where the electrical signal is “translated” into a neurotransmitter, released by the pre-synaptic terminal into the extracellular space and reabsorbed by the post-synaptic side, to be translated again into an electrical signal. Access to this space varies depending on the type of synapse: “For the excitatory synapses, the structure’s organization allows higher exposure for the graphene flakes interaction, unlike inhibitory synapses, which are less physically accessible in this experimental model,” says Scaini.

Another clue that distance and size could be crucial in the process is found in the observation that graphene performs its function only in the oxidized form. “Normal graphene looks like a stretched and stiff sheet, while graphene oxide appears crumpled, possibly favoring its interface with the synaptic space,” adds Rauti.

Administering graphene flake solutions leaves the neurons alive and intact. For this reason the team thinks they could be used in biomedical applications for treating certain diseases. “We may imagine targeting a drug by exploiting the flakes’ apparent selectivity for synapses, thus directly targeting the basic functional unit of neurons,” concludes Ballerini.

That’s a nice description of neurons, synapses, and neurotransmitters.

Here’s a link to and a citation for the paper,

Graphene Oxide Nanosheets Reshape Synaptic Function in Cultured Brain Networks by Rossana Rauti, Neus Lozano, Veronica León, Denis Scaini, Mattia Musto, Ilaria Rago, Francesco P. Ulloa Severino, Alessandra Fabbro, Loredana Casalis, Ester Vázquez, Kostas Kostarelos, Maurizio Prato, and Laura Ballerini. ACS Nano, 2016, 10 (4), pp 4459–4471
DOI: 10.1021/acsnano.6b00130 Publication Date (Web): March 31, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Memristor-based electronic synapses for neural networks

Caption: Neuron connections in biological neural networks. Credit: MIPT press office

Russian scientists have recently published a paper about neural networks and electronic synapses based on ‘thin film’ memristors according to an April 19, 2016 news item on Nanowerk,

A team of scientists from the Moscow Institute of Physics and Technology (MIPT) have created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.

An April 20, 2016 MIPT press release (also on EurekAlert), which originated the news item (the date inconsistency likely due to timezone differences) explains the connection between thin films and memristors,

The group of researchers from MIPT have made HfO2-based memristors measuring just 40×40 nm2. The nanostructures they built exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similar to biological neural networks.

Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.

“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similar to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.

The press release offers a description of biological synapses and their relationship to learning and memory,

A synapse is a point of connection between neurons, the main function of which is to transmit a signal (a spike, a particular type of signal; see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective, both in terms of speed and energy consumption, in solving a large range of tasks, such as image / voice recognition, etc.

Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.

From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.

The researchers have provided an illustration of a biological synapse,

Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office

Now, the press release ties the memristor information together with the biological synapse information to describe the new work at the MIPT,

As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.

There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.

“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.

The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.

The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the value of the connection between neurons on the relative time taken for them to be “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.

To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to that observed in living synapses (see fig. 3).
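
For reference, the canonical spike-timing-dependent plasticity dependence that fig. 3 is compared against is usually written as a bi-exponential window. Here is a sketch with generic textbook constants, not the values measured in this paper:

```python
import math

# The canonical STDP window: the change in synaptic weight (here, memristor
# conductance) as a function of the timing between pre- and post-synaptic
# spikes.  Amplitudes and time constants are generic textbook values.
A_PLUS, A_MINUS = 0.1, 0.12       # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms: decay of the timing window

def weight_change(dt_ms):
    """dt_ms = t_post - t_pre.  Pre-before-post potentiates (LTP);
    post-before-pre depresses (LTD)."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

ltp = weight_change(10)    # pre fires 10 ms before post: positive change
ltd = weight_change(-10)   # post fires 10 ms before pre: negative change
```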

Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office

These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.

“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.

Here’s a link to and a citation for the paper,

Crossbar Nanoscale HfO2-Based Electronic Synapses by Yury Matveyev, Roman Kirtaev, Alena Fetisova, Sergey Zakharchenko, Dmitry Negrov and Andrey Zenkevich. Nanoscale Research Letters 2016, 11:147 DOI: 10.1186/s11671-016-1360-6

Published: 15 March 2016

This is an open access paper.

Is it time to invest in a ‘brain chip’ company?

This story takes a few twists and turns. First, ‘brain chips’ as they’re sometimes called would allow, theoretically, computers to learn and function like human brains. (Note: There’s another type of ‘brain chip’ which could be implanted in human brains to help deal with diseases such as Parkinson’s and Alzheimer’s. *Today’s [June 26, 2015] earlier posting about an artificial neuron points at some of the work being done in this area.*)

Returning to the ‘brain chip’ at hand. Second, there’s a company called BrainChip, which has one patent and another pending for, yes, a ‘brain chip’.

The company, BrainChip, founded in Australia and now headquartered in California’s Silicon Valley, recently sparked some investor interest in Australia. From an April 7, 2015 article by Timna Jacks for the Australian Financial Review,

Former mining stock Aziana Limited has whetted Australian investors’ appetite for science fiction, with its share price jumping 125 per cent since it announced it was acquiring a US-based tech company called BrainChip, which promises artificial intelligence through a microchip that replicates the neural system of the human brain.

Shares in the company closed at 9¢ before the Easter long weekend, having been priced at just 4¢ when the backdoor listing of BrainChip was announced to the market on March 18.

Creator of the patented digital chip, Peter Van Der Made, told The Australian Financial Review the technology has the capacity to learn autonomously, due to its composition of 10,000 biomimic neurons, which, through a process known as synaptic time-dependent plasticity, can form memories and associations in the same way as a biological brain. He said it works 5000 times faster and uses a thousandth of the power of the fastest computers available today.

Mr Van Der Made is inviting technology partners to license the technology for their own chips and products, and is donating the technology to university laboratories in the US for research.

The Netherlands-born Australian, now based in southern California, was inspired to create the brain-like chip in 2004, after working at the IBM Internet Security Systems for two years, where he was chief scientist for behaviour analysis security systems. …

A June 23, 2015 article by Tony Malkovic on phys.org provides a few more details about BrainChip and about the deal,

Mr Van der Made and the company, also called BrainChip, are now based in Silicon Valley in California and he returned to Perth last month as part of the company’s recent merger and listing on the Australian Stock Exchange.

He says BrainChip has the ability to learn autonomously, evolve and associate information and respond to stimuli like a brain.

Mr Van der Made says the company’s chip technology is more than 5,000 times faster than other technologies, yet uses only 1/1,000th of the power.

“It’s a hardware only solution, there is no software to slow things down,” he says.

“It doesn’t execute instructions, it learns and applies what it has learnt to new information.

“BrainChip is on the road to position itself at the forefront of artificial intelligence,” he says.

“We have a clear advantage, at least 10 years, over anybody else in the market, that includes IBM.”

BrainChip is aiming at the global semiconductor market, targeting almost anything that involves a microprocessor.

You can find out more about the company, BrainChip here. The site does have a little more information about the technology,

Spiking Neuron Adaptive Processor (SNAP)

BrainChip’s inventor, Peter van der Made, has created an exciting new Spiking Neural Networking technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is developed as a digital design containing a configurable “sea of biomimic neurons”.

The technology is fast, completely digital, and consumes very low power, making it feasible to integrate large networks into portable battery-operated products, something that has never been possible before.

BrainChip neurons autonomously learn through a process known as STDP (Synaptic Time Dependent Plasticity). BrainChip’s fully digital neurons process input spikes directly in hardware. Sensory neurons convert physical stimuli into spikes. Learning occurs when the input is intense or repeating through feedback, and this directly corresponds to the way the brain learns.
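The STDP rule mentioned above can be sketched in a few lines of Python. This is a generic, textbook-style pair-based model with illustrative time constants and learning rates; BrainChip’s actual hardware implementation is not public, so none of these parameter values come from the company.

```python
import math

def stdp_update(weight, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre in milliseconds.
    If the presynaptic spike precedes the postsynaptic spike (dt > 0),
    the synapse is strengthened; otherwise it is weakened.
    The change decays exponentially with the spike-timing gap."""
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)   # potentiation
    else:
        dw = -a_minus * math.exp(dt / tau)  # depression
    return min(max(weight + dw, 0.0), 1.0)  # clamp weight to [0, 1]

w = 0.5
w = stdp_update(w, dt=5.0)    # pre before post: weight grows
w = stdp_update(w, dt=-5.0)   # post before pre: weight shrinks
```

The key property, captured by the `exp` decay, is that near-coincident spikes produce the largest weight changes, which is how repeated, correlated input forms an association.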

Computing Artificial Neural Networks (ANNs)

The brain consists of specialized nerve cells that communicate with one another. Each such nerve cell is called a neuron. The inputs are memory nodes called synapses. When the neuron associates information, it produces a ‘spike’ or a ‘spike train’. Each spike is a pulse that triggers a value in the next synapse. Synapses store values, similar to the way a computer stores numbers. In combination, these values determine the function of the neural network. Synapses acquire values through learning.

In Artificial Neural Networks (ANNs) this complex function is generally simplified to a static summation-and-compare function, which severely limits computational power. BrainChip has redefined how neural networks work, replicating the behaviour of the brain. BrainChip’s artificial neurons are completely digital and biologically realistic, resulting in increased computational power, high speed and extremely low power consumption.
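The neuron-and-synapse computation described above (synapses store values, spikes add those values into the neuron, and the neuron fires on crossing a threshold) maps onto a toy integrate-and-fire model. This is a standard textbook sketch, not BrainChip’s design; the threshold and leak values are assumptions chosen for illustration.

```python
class SpikingNeuron:
    """Toy leaky integrate-and-fire neuron: synapses store weights,
    each input spike adds its synaptic weight to the membrane
    potential, and the neuron emits a spike when the potential
    crosses a threshold (then resets)."""
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = list(weights)  # synaptic values
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak  # fraction of potential retained each step

    def step(self, input_spikes):
        self.potential *= self.leak  # passive decay
        for i, spiked in enumerate(input_spikes):
            if spiked:
                self.potential += self.weights[i]
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

n = SpikingNeuron([0.6, 0.5])
n.step([True, False])   # one input alone stays below threshold
n.step([False, True])   # residual potential plus second input fires
```

Unlike the static summation-and-compare function of a conventional ANN, the output here depends on the timing history of inputs, since the leaky potential carries state between steps.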

The Problem with Artificial Neural Networks

Standard ANNs, running on computer hardware, are processed sequentially; the processor runs a program that defines the neural network. This consumes considerable time, and because the neurons are processed one after another, the delays add up, resulting in a significant linear decline in network performance with size.

BrainChip neurons are all mapped in parallel, so the performance of the network does not depend on its size, providing a clear speed advantage. Learning also takes place in parallel within each synapse, making STDP learning very fast.

A hardware solution

BrainChip’s digital neural technology is the only custom hardware solution that is capable of STDP learning. The hardware requires no coding and has no software as it evolves learning through experience and user direction.

The BrainChip neuron is unique in that it is completely digital, behaves asynchronously like an analog neuron, and has a higher level of biological realism. It is more sophisticated than software neural models and is many orders of magnitude faster. The BrainChip neuron consists entirely of binary logic gates with no traditional CPU core. Hence, there are no ‘programming’ steps; learning and training take the place of programming and coding, like a child learning a task for the first time.

Software ‘neurons’, to compensate for limited processing power, are simplified to a point where they do not resemble any of the features of a biological neuron. This is due to the sequential nature of computers, whereby all data has to pass through a central processor in chunks of 16, 32 or 64 bits. In contrast, the brain’s network is parallel and processes the equivalent of millions of data bits simultaneously.

A significantly faster technology

Performing emulation in digital hardware has distinct advantages over software. As software is processed sequentially, one instruction at a time, software neural networks perform more slowly as they grow in size. Parallel hardware does not have this problem and maintains the same speed no matter how large the network is. Another advantage of hardware is that it is more power-efficient by several orders of magnitude.

The speed of the BrainChip device is unparalleled in the industry.

For large neural networks a GPU (Graphics Processing Unit) is ~70 times faster than the Intel i7 executing a similar size neural network. The BrainChip neural network is faster still and takes far fewer CPU (Central Processing Unit) cycles, with just a little communication overhead, which means that the CPU is available for other tasks. The BrainChip network also responds much faster than a software network accelerating the performance of the entire system.

The BrainChip network is completely parallel, with no sequential dependencies. This means that the network does not slow down with increasing size.

Endorsed by the neuroscience community

A number of the world’s pre-eminent neuroscientists have endorsed the technology and have agreed to jointly develop projects.

BrainChip has the potential to become the de facto standard for all autonomous learning technology and computer products.

Patented

BrainChip’s autonomous learning technology patent was granted on the 21st September 2008 (Patent number US 8,250,011 “Autonomous learning dynamic artificial neural computing device and brain inspired system”). BrainChip is the only company in the world to have achieved autonomous learning in a network of Digital Neurons without any software.

A prototype Spiking Neuron Adaptive Processor was designed as a ‘proof of concept’ chip.

The first tests were completed at the end of 2007 and this design was used as the foundation for the US patent application which was filed in 2008. BrainChip has also applied for a continuation-in-part patent filed in 2012, the “Method and System for creating Dynamic Neural Function Libraries”, US Patent Application 13/461,800 which is pending.

Van der Made doesn’t seem to have published any papers on this work and the description of the technology provided on the website is frustratingly vague. There are many acronyms for processes but no mention of what this hardware might be. For example, is it based on a memristor or some kind of atomic ionic switch or something else altogether?

It would be interesting to find out more but, presumably, van der Made wishes to withhold details. Many companies follow the same strategy while pursuing what they view as a business advantage.

* Artificial neuron link added June 26, 2015 at 1017 hours PST.

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focussing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing the topic up to date, April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system  (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrated a new single-element nanoscale device, based on successfully commercialized phase-change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single-element electronic synapse capable of both modulating the time constant and realizing the different forms of synaptic plasticity, while consuming picojoule-level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).
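Berger’s two-sentence definition (conductance modulated by charge, resistance programmed and then retained) can be turned into a minimal simulation. The sketch below is in the spirit of the HP linear-drift memristor model; `r_on`, `r_off` and the drift constant `k` are illustrative values I have chosen, not measurements from any real device.

```python
class Memristor:
    """Charge-controlled memristor: resistance interpolates between
    r_on and r_off according to an internal state x in [0, 1], and x
    drifts in proportion to the charge (current * time) that has
    flowed through the device."""
    def __init__(self, r_on=100.0, r_off=16000.0, k=1e4):
        self.r_on, self.r_off, self.k = r_on, r_off, k
        self.x = 0.5  # internal state

    def resistance(self):
        return self.x * self.r_on + (1.0 - self.x) * self.r_off

    def apply(self, voltage, dt):
        """Apply a voltage pulse of duration dt; returns the current."""
        current = voltage / self.resistance()
        # State drift is proportional to the charge q = current * dt,
        # clamped so x stays physical.
        self.x = min(max(self.x + self.k * current * dt, 0.0), 1.0)
        return current

m = Memristor()
m.apply(1.0, 1e-3)   # a positive pulse nudges the resistance down
m.apply(0.0, 1e-3)   # with no bias, the programmed resistance persists
```

The last line is the “memory” half of memristor: with zero applied voltage no charge flows, so the state, and therefore the programmed resistance, stays where the previous pulses left it.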

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings or alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate-based computing within the framework of Turing. The decision-making protocol is not a logical reduction of a decision but rather a projection of frequency fractal operations in a real space; it is an engineering perspective on Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer, (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic gate based computing within the framework of Turing, our decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.
2) We do not need to write any software, the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously, the system holds an astronomically large number of ‘if’ arguments and its associative ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode, not conventional antenna-receiver model, since fractal based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological study to develop our own Human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed exclusively on biological brains at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in the context of a presentation in Amsterdam, Netherlands.

How to use a memristor to create an artificial brain

Dr. Andy Thomas of Bielefeld University’s (Germany) Faculty of Physics has developed a ‘blueprint’ for an artificial brain based on memristors. From the Feb. 26, 2013, news item on phys.org,

Scientists have long been dreaming about building a computer that would work like a brain. This is because a brain is far more energy-saving than a computer: it can learn by itself, and it doesn’t need any programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. Thomas and his colleagues proved that they could do this a year ago. They constructed a memristor that is capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will be presenting his results at the beginning of March in the print edition of the Journal of Physics D: Applied Physics.

The Feb. 26, 2013 University of Bielefeld news release, which originated the news item, describes why memristors are the foundation for Thomas’s proposed artificial brain,

Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.

Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.

Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and research findings from biology and physics, his article is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve like) computer is to function. Such principles are that memristors, just like synapses, have to ‘note’ earlier impulses, and that neurons react to an impulse only when it passes a certain threshold.

‘… a memristor can store information more precisely than the bits on which previous computer processors have been based,’ says Thomas. Both a memristor and a bit work with electrical impulses. However, a bit does not allow any fine adjustment – it can only work with ‘on’ and ‘off’. In contrast, a memristor can raise or lower its resistance continuously. ‘This is how memristors deliver a basis for the gradual learning and forgetting of an artificial brain,’ explains Thomas.

A nanocomponent that is capable of learning: The Bielefeld memristor built into a chip here is 600 times thinner than a human hair. [ downloaded from http://ekvv.uni-bielefeld.de/blog/uninews/entry/blueprint_for_an_artificial_brain]


Here’s a citation for and link to the paper (from the university news release),

Andy Thomas, ‘Memristor-based neural networks’, Journal of Physics D: Applied Physics, http://dx.doi.org/10.1088/0022-3727/46/9/093001, released online on 5 February 2013, published in print on 6 March 2013.

This paper is freely available until March 5, 2013, as IOP Science (publisher of the Journal of Physics D: Applied Physics) makes its papers freely available (with some provisos) for the first 30 days after online publication. From the Access Options page for Memristor-based neural networks,

As a service to the community, IOP is pleased to make papers in its journals freely available for 30 days from date of online publication – but only fair use of the content is permitted.

Under fair use, IOP content may only be used by individuals for the sole purpose of their own private study or research. Such individuals may access, download, store, search and print hard copies of the text. Copying should be limited to making single printed or electronic copies.

Other use is not considered fair use. In particular, use by persons other than for the purpose of their own private study or research is not fair use. Nor is altering, recompiling, reselling, systematic or programmatic copying, redistributing or republishing. Regular/systematic downloading of content or the downloading of a substantial proportion of the content is not fair use either.

Getting back to the memristor, I’ve been writing about it for some years; it was most recently mentioned here in a Feb. 7, 2013 posting, and in a Dec. 24, 2012 posting I mentioned nanoionic nanodevices, also described as resembling synapses.

Memristors and dogs

Researchers in Germany have managed to recreate Pavlov’s classic experiment with dogs and feeding bells, using an electronic circuit and teaching it to respond to a stimulus just as the dogs learned to respond. From the May 8, 2012 news item on Science Daily,

The bell rings and the dog starts drooling. Such a reaction was part of studies performed by Ivan Pavlov, a famous Russian psychologist and physiologist and winner of the Nobel Prize for Physiology and Medicine in 1904. His experiment, nowadays known as “Pavlov’s Dog,” has ever since been considered a milestone for implicit learning processes. By using specific electronic components, scientists from the Technical Faculty and the Memory Research group at Kiel University, together with the Forschungszentrum Jülich, were able to mimic the behavior of Pavlov’s dog.

I found this image on the May 8, 2012 news release webpage at the University of Kiel (Germany) website,

The experiment called “Pavlov’s Dog” shows that acoustic stimulations can cause physical reactions. Scientists of Kiel University redesigned this mental learning process. Source: Kohlstedt

Also from the May 8, 2012 news release on the University of Kiel website,

“We used memristive devices in order to mimic the associative behaviour of Pavlov’s dog in form of an electronic circuit”, explains Professor Hermann Kohlstedt, head of the working group Nanoelectronics at the University of Kiel.

Memristors are a class of electronic circuit elements which have only been available to scientists in adequate quality for a few years. They exhibit a memory characteristic in the form of hysteretic current-voltage curves consisting of high- and low-resistance branches. Depending on the prior charge flow through the device, these resistances can vary. Scientists are trying to use this memory effect to create networks that are similar to neuronal connections between synapses. “In the long term, our goal is to copy the synaptic plasticity onto electronic circuits. We might even be able to recreate cognitive skills electronically”, says Kohlstedt. The collaborating scientific working groups in Kiel and Jülich have taken a small step toward this goal.

The project set-up consisted of the following: two electrical impulses were linked via a memristive device to a comparator. The two pulses represent the food and the bell in Pavlov’s experiment. A comparator is a device that compares two voltages or currents and generates an output when a given level has been reached. In this case, it produces the output signal (representing saliva) when the threshold value is reached. In addition, the memristive element also has a threshold voltage that is defined by physical and chemical mechanisms in the nano-electronic device. Below this threshold value the memristive device behaves like any ordinary linear resistor. However, when the threshold value is exceeded, a hysteretic (changed) current-voltage characteristic will appear.

“During the experimental investigation, the food for the dog (electrical impulse 1) resulted in an output signal of the comparator, which could be defined as salivation. Unlike impulse 1, the ring of the bell (electrical impulse 2) was set in such a way that the comparator’s output stayed unaffected – meaning no salivation”, describes Dr. Martin Ziegler, scientist at Kiel University and first author of the publication. After applying both impulses simultaneously to the memristive device, the threshold value was exceeded. The working group had activated the memristive memory function. Multiple repetitions led to an associative learning process within the circuit – similar to Pavlov’s dogs. “From this moment on, we had only to apply electrical impulse 2 (bell) and the comparator generated an output signal, equivalent to salivation”, says Ziegler, who is very pleased with these results. Electrical impulse 1 (food) triggers the same reaction as it did before the learning. Hence, the electric circuit shows a behaviour that is termed classical conditioning in the field of psychology. Beyond that, the scientists were able to prove that the electrical circuit is able to unlearn a particular behaviour if both impulses are no longer applied simultaneously.
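The set-up Ziegler describes can be caricatured in software: two input pulses, a memristive weight on the bell path, and a comparator with a fixed threshold. All voltages, thresholds and learning increments below are invented for illustration only; the actual Kiel/Jülich experiment is an analog hardware circuit, not code.

```python
class PavlovCircuit:
    """Toy model of the associative-learning circuit: 'food' drives
    the comparator directly, while 'bell' is routed through a
    memristive element. When both pulses arrive together, their sum
    exceeds the device threshold and the memristive conductance grows
    (the association is learned); the bell arriving alone slowly
    decays the conductance (unlearning)."""
    def __init__(self):
        self.conductance = 0.4           # memristive state in [0, 1]
        self.device_threshold = 1.5      # combined voltage needed to switch
        self.comparator_threshold = 0.6  # output ('salivation') trigger

    def step(self, food, bell):
        v_food = 1.0 if food else 0.0
        v_bell = 1.0 if bell else 0.0
        if v_food + v_bell > self.device_threshold:
            self.conductance = min(self.conductance + 0.5, 1.0)   # learn
        elif bell and not food:
            self.conductance = max(self.conductance - 0.02, 0.0)  # unlearn
        # Comparator fires when the summed signal crosses its threshold.
        return v_food + v_bell * self.conductance >= self.comparator_threshold

c = PavlovCircuit()
c.step(food=False, bell=True)  # bell alone: no salivation yet
c.step(food=True, bell=False)  # food alone: salivation, as before learning
c.step(food=True, bell=True)   # conditioning: both pulses together
c.step(food=False, bell=True)  # bell alone now triggers salivation
```

Repeatedly presenting the bell without food shrinks the conductance again, mirroring the unlearning behaviour the Kiel group reports.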

My most recent posting (and I have many) on memristors is from April 19, 2012, where I mentioned an artificial synapse developed with memristors at the University of Michigan and also noted that HP Labs has claimed it will be releasing ‘memristor-based’ products in 2013.

The May 8, 2012 news item on Science Daily includes the full citation for the team’s paper and a link to it (the paper is behind a paywall).