Tag Archives: neuromorphic engineering

Computer chips derived in a Darwinian environment

Courtesy: University of Twente

If that ‘computer chip’ looks like a brain to you, good, since that’s what the image is intended to illustrate, assuming I’ve correctly understood the Sept. 21, 2015 news item on Nanowerk (Note: A link has been removed),

Researchers of the MESA+ Institute for Nanotechnology and the CTIT Institute for ICT Research at the University of Twente in The Netherlands have demonstrated working electronic circuits that have been produced in a radically new way, using methods that resemble Darwinian evolution. The size of these circuits is comparable to the size of their conventional counterparts, but they are much closer to natural networks like the human brain. The findings promise a new generation of powerful, energy-efficient electronics, and have been published in the leading British journal Nature Nanotechnology (“Evolution of a Designless Nanoparticle Network into Reconfigurable Boolean Logic”).

A Sept. 21, 2015 University of Twente press release, which originated the news item, explains why and how they have decided to mimic nature to produce computer chips,

One of the greatest successes of the 20th century has been the development of digital computers. During the last decades these computers have become more and more powerful by integrating ever smaller components on silicon chips. However, it is becoming increasingly hard and extremely expensive to continue this miniaturisation. Current transistors consist of only a handful of atoms. It is a major challenge to produce chips in which the millions of transistors have the same characteristics, and thus to make the chips operate properly. Another drawback is that their energy consumption is reaching unacceptable levels. It is obvious that one has to look for alternative directions, and it is interesting to see what we can learn from nature. Natural evolution has led to powerful ‘computers’ like the human brain, which can solve complex problems in an energy-efficient way. Nature exploits complex networks that can execute many tasks in parallel.

Moving away from designed circuits

The approach of the researchers at the University of Twente is based on methods that resemble those found in Nature. They have used networks of gold nanoparticles for the execution of essential computational tasks. Contrary to conventional electronics, they have moved away from designed circuits. By using ‘designless’ systems, costly design mistakes are avoided. The computational power of their networks is enabled by applying artificial evolution. This evolution takes less than an hour, rather than millions of years. By applying electrical signals, one and the same network can be configured into 16 different logical gates. The evolutionary approach works around – or can even take advantage of – possible material defects that can be fatal in conventional electronics.
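
The press release doesn’t spell out how the ‘artificial evolution’ works in practice, but the general recipe resembles a genetic algorithm: treat the network as a black box, encode the control voltages as a genome, and keep mutating and selecting until the measured input/output behaviour matches a target truth table. Here is a minimal, purely illustrative Python sketch of that loop; the network_output surrogate, the number of control voltages and all parameter values are my own inventions standing in for the physical device and its measurement setup.

```python
import random

random.seed(1)

# Stand-in for the physical nanoparticle network: the output is a thresholded
# combination of the two logic inputs (a, b), with coefficients set by the
# control voltages through a fixed random mixing matrix. This is NOT a model
# of the real device; it only gives the evolutionary loop something to evolve.
MIX = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(4)]

def network_output(a, b, controls):
    c = [sum(m * v for m, v in zip(row, controls)) for row in MIX]
    s = c[0] + c[1] * a + c[2] * b + c[3] * a * b
    return 1 if s > 0 else 0

TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # XOR truth table

def fitness(controls):
    """Number of truth-table rows the configured network gets right."""
    return sum(network_output(a, b, controls) == out
               for (a, b), out in TARGET.items())

def evolve(pop_size=30, generations=300):
    population = [[random.uniform(-1, 1) for _ in range(6)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        parents = population[:pop_size // 2]
        # mutate each parent to refill the population
        children = [[g + random.gauss(0, 0.2) for g in p] for p in parents]
        population = parents + children
    return population[0]

best = evolve()
print("fitness:", fitness(best), "of", len(TARGET))
print("truth table:", {inp: network_output(*inp, best) for inp in TARGET})
```

On the real chip the fitness evaluation is a physical measurement rather than a function call, which is exactly what lets the approach work around, or exploit, material defects.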

Powerful and energy-efficient

It is the first time that scientists have succeeded in this way in realizing robust electronics with dimensions that can compete with commercial technology. According to prof. Wilfred van der Wiel, the realized circuits currently still have limited computing power. “But with this research we have delivered proof of principle: demonstrated that our approach works in practice. By scaling up the system, real added value will be produced in the future. Take for example the efforts to recognize patterns, such as with face recognition. This is very difficult for a regular computer, while humans and possibly also our circuits can do this much better.”  Another important advantage may be that this type of circuitry uses much less energy, both in the production, and during use. The researchers anticipate a wide range of applications, for example in portable electronics and in the medical world.

Here’s a link to and a citation for the paper,

Evolution of a designless nanoparticle network into reconfigurable Boolean logic by S. K. Bose, C. P. Lawrence, Z. Liu, K. S. Makarenko, R. M. J. van Damme, H. J. Broersma, & W. G. van der Wiel. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.207 Published online 21 September 2015

This paper is behind a paywall.

Final comment: this research, especially with the reference to facial recognition, reminds me of memristors and neuromorphic engineering. I have written many times on this topic and you should be able to find most of the material by using ‘memristor’ as your search term in the blog search engine. For the mildly curious, here are links to two recent memristor articles: Knowm (sounds like gnome?) A memristor company with a commercially available product in a Sept. 10, 2015 posting and Memristor, memristor, you are popular in a May 15, 2015 posting.

Knowm (sounds like gnome?) A memristor company with a commercially available product

German garden gnome
Date 11 August 2006 (original upload date)
Source Transferred from en.wikipedia to Commons.
Author Colibri1968 at English Wikipedia

I thought the ‘gnome’/‘knowm’ homonym or, more precisely, homophone, might be an amusing way to lead into yet another memristor story on this blog. A Sept. 3, 2015 news item on Azonano features a ‘memristor-based’ company/organization, Knowm,

Knowm Inc., a start-up pioneering next-generation advanced computing architectures and technology, today announced they are the first to develop and make commercially-available memristors with bi-directional incremental learning capability.

The device was developed through research from Boise State University’s [Idaho, US] Dr. Kris Campbell, and this new data unequivocally confirms Knowm’s memristors are capable of bi-directional incremental learning. This has been previously deemed impossible in filamentary devices by Knowm’s competitors, including IBM [emphasis mine], despite significant investment in materials, research and development. With this advancement, Knowm delivers the first commercial memristors that can adjust resistance in incremental steps in both directions rather than only one direction with an all-or-nothing ‘erase’. This advancement opens the gateway to extremely efficient and powerful machine learning and artificial intelligence applications.
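
To make the distinction concrete: for learning hardware, what matters is being able to nudge a device’s conductance up and down in small steps, the way a synaptic weight is adjusted, rather than only being able to step it up and then wipe it in one go. Here’s a toy Python contrast between the two behaviours; the step size and conductance bounds are illustrative numbers of my own, not Knowm’s (or anyone else’s) device parameters.

```python
# Toy contrast between the two memristor behaviours described above:
# an incremental device nudges its conductance up or down a little with each
# pulse, while an 'all-or-nothing' device can only step up or reset fully.
# Step sizes and conductance bounds are illustrative assumptions only.

G_MIN, G_MAX, STEP = 0.1, 1.0, 0.05

def incremental_update(g, pulse):
    """pulse = +1 (potentiate) or -1 (depress); small step either way."""
    return min(G_MAX, max(G_MIN, g + STEP * pulse))

def all_or_nothing_update(g, pulse):
    """Can only increase in steps; a negative pulse erases back to G_MIN."""
    return min(G_MAX, g + STEP) if pulse > 0 else G_MIN

pulses = [+1, +1, -1, +1, -1, -1, +1]
g_inc = g_bin = 0.5
for p in pulses:
    g_inc = incremental_update(g_inc, p)
    g_bin = all_or_nothing_update(g_bin, p)
    print(f"pulse {p:+d}: incremental={g_inc:.2f}  all-or-nothing={g_bin:.2f}")
```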

A Sept. 2, 2015 Knowm news release (also on MarketWatch), which originated the news item, provides more details,

“Having commercially-available memristors with bi-directional voltage-dependent incremental capability is a huge step forward for the field of machine learning and, particularly, AHaH Computing,” said Alex Nugent, CEO and co-founder of Knowm. “We have been dreaming about this device and developing the theory for how to apply them to best maximize their potential for more than a decade, but the lack of capability confirmation had been holding us back. This data is truly a monumental technical milestone and it will serve as a springboard to catapult Knowm and AHaH Computing forward.”

Memristors with the bi-directional incremental resistance change property are the foundation for developing learning hardware such as Knowm Inc.’s recently announced Thermodynamic RAM (kT-RAM) and help realize the full potential of AHaH Computing. The availability of kT-RAM will have the largest impact in fields that require higher computational power for machine learning tasks like autonomous robotics, big-data analysis and intelligent Internet assistants. kT-RAM radically increases the efficiency of synaptic integration and adaptation operations by reducing them to physically adaptive ‘analog’ memristor-based circuits. Synaptic integration and adaptation are the core operations behind tasks such as pattern recognition and inference. Knowm Inc. is the first company in the world to bring this technology to market.

Knowm is ushering in the next phase of computing with the first general-purpose neuromemristive processor specification. Earlier this year the company announced the commercial availability of the first products in support of the kT-RAM technology stack. These include the sale of discrete memristor chips, a Back End of Line (BEOL) CMOS+memristor service, the SENSE and Application Servers and their first application named “Knowm Anomaly”, the first application built based on the theory of AHaH Computing and kT-RAM architecture. Knowm also simultaneously announced the company’s visionary developer program for organizations and individual developers. This includes the Knowm API, which serves as development hardware and training resources for co-developing the Knowm technology stack.

Knowm certainly has big ambitions. I’m a little surprised they mentioned IBM rather than HP Labs, which is where researchers first claimed to find evidence of the existence of memristors in 2008 (the story is noted in my Nanotech Mysteries wiki here). As I understand it, HP Labs is working busily (having missed a few deadlines) on developing a commercial product using memristors.

For the curious, my latest informal roundup of memristor stories is in a May 15, 2015 posting.

Getting back to Knowm and big ambitions, here’s Alex Nugent, Knowm CEO (Chief Executive Officer) and co-founder talking about the company and its technology,

Researchers at Karolinska Institute (Sweden) build an artificial neuron

Unlike my post earlier today (June 26, 2015) about BrainChip, this is not about neuromorphic engineering (artificial brain), although I imagine this new research from the Karolinska Institute (Institutet) will be of some interest to that community. This research was done in the interest of developing therapeutic interventions for brain diseases. One aspect of this news item/press release I find particularly interesting is the insistence that “no living parts” were used to create the artificial neuron.

A June 24, 2015 news item on ScienceDaily describes what the artificial neuron can do,

Scientists have managed to build a fully functional neuron by using organic bioelectronics. This artificial neuron contain [sic] no ‘living’ parts, but is capable of mimicking the function of a human nerve cell and communicate in the same way as our own neurons do. [emphasis mine]

A June 24, 2015 Karolinska Institute press release (also on EurekAlert), which originated the news item, describes how neurons communicate in the brain, standard techniques for stimulating neuronal cells, and the scientists’ work on a technique to improve stimulation,

Neurons are isolated from each other and communicate with the help of chemical signals, commonly called neurotransmitters or signal substances. Inside a neuron, these chemical signals are converted to an electrical action potential, which travels along the axon of the neuron until it reaches the end. Here at the synapse, the electrical signal is converted to the release of chemical signals, which via diffusion can relay the signal to the next nerve cell.

To date, the primary technique for neuronal stimulation in human cells is based on electrical stimulation. However, scientists at the Swedish Medical Nanoscience Centre (SMNC) at Karolinska Institutet, in collaboration with colleagues at Linköping University, have now created an organic bioelectronic device that is capable of receiving chemical signals, which it can then relay to human cells.

“Our artificial neuron is made of conductive polymers and it functions like a human neuron,” says lead investigator Agneta Richter-Dahlfors, professor of cellular microbiology. “The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal. This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored.”

The research team hope that their innovation, presented in the journal Biosensors & Bioelectronics, will improve treatments for neurological disorders which currently rely on traditional electrical stimulation. The new technique makes it possible to stimulate neurons based on specific chemical signals received from different parts of the body. In the future, this may help physicians to bypass damaged nerve cells and restore neural function.

“Next, we would like to miniaturize this device to enable implantation into the human body,” says Agneta Richter-Dahlfors. “We foresee that in the future, by adding the concept of wireless communication, the biosensor could be placed in one part of the body, and trigger release of neurotransmitters at distant locations. Using such auto-regulated sensing and delivery, or possibly a remote control, new and exciting opportunities for future research and treatment of neurological disorders can be envisaged.”

Here’s a link to and a citation for the paper,

An organic electronic biomimetic neuron enables auto-regulated neuromodulation by Daniel T. Simon, Karin C. Larsson, David Nilsson, Gustav Burström, Dagmar Galter, Magnus Berggren, and Agneta Richter-Dahlfors. Biosensors and Bioelectronics Volume 71, 15 September 2015, Pages 359–364. doi:10.1016/j.bios.2015.04.058

This paper is behind a paywall.

As for anyone (other than myself) who may be curious about exactly what they did use to create an artificial neuron (given that there are no “living parts”), there’s the paper’s abstract,

Current therapies for neurological disorders are based on traditional medication and electric stimulation. Here, we present an organic electronic biomimetic neuron, with the capacity to precisely intervene with the underlying malfunctioning signalling pathway using endogenous substances. The fundamental function of neurons, defined as chemical-to-electrical-to-chemical signal transduction, is achieved by connecting enzyme-based amperometric biosensors and organic electronic ion pumps. Selective biosensors transduce chemical signals into an electric current, which regulates electrophoretic delivery of chemical substances without necessitating liquid flow. Biosensors detected neurotransmitters in physiologically relevant ranges of 5–80 µM, showing linear response above 20 µM with approx. 0.1 nA/µM slope. When exceeding defined threshold concentrations, biosensor output signals, connected via custom hardware/software, activated local or distant neurotransmitter delivery from the organic electronic ion pump. Changes of 20 µM glutamate or acetylcholine triggered diffusive delivery of acetylcholine, which activated cells via receptor-mediated signalling. This was observed in real-time by single-cell ratiometric Ca2+ imaging. The results demonstrate the potential of the organic electronic biomimetic neuron in therapies involving long-range neuronal signalling by mimicking the function of projection neurons. Alternatively, conversion of glutamate-induced descending neuromuscular signals into acetylcholine-mediated muscular activation signals may be obtained, applicable for bridging injured sites and active prosthetics.
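
Reading the abstract, the signal chain is easy to sketch: concentration in, current out (roughly 0.1 nA per µM above about 20 µM, per the paper’s figures), and a delivery command when the change crosses a threshold. The little Python sketch below is only my paraphrase of that chain; the trigger rule and all the control logic are assumptions, not the authors’ custom hardware/software.

```python
# Sketch of the chemical-to-electrical-to-chemical chain described in the
# abstract: a biosensor converts neurotransmitter concentration into a current
# (roughly 0.1 nA per µM above ~20 µM, per the paper's figures), and when the
# current change exceeds a set threshold, the ion pump is told to deliver
# acetylcholine. The threshold value and control logic here are illustrative
# assumptions, not the authors' hardware/software.

SLOPE_NA_PER_UM = 0.1     # ~0.1 nA/µM linear response reported above ~20 µM
LINEAR_ONSET_UM = 20.0
TRIGGER_DELTA_UM = 20.0   # paper: ~20 µM changes triggered delivery

def biosensor_current_nA(concentration_uM):
    """Chemical-to-electrical: amperometric biosensor output."""
    return max(0.0, concentration_uM - LINEAR_ONSET_UM) * SLOPE_NA_PER_UM

def controller(baseline_uM, reading_uM):
    """Electrical-to-chemical: decide whether the ion pump should deliver."""
    delta_nA = biosensor_current_nA(reading_uM) - biosensor_current_nA(baseline_uM)
    return delta_nA >= TRIGGER_DELTA_UM * SLOPE_NA_PER_UM

for reading in (25, 35, 50, 70):
    fired = controller(baseline_uM=25, reading_uM=reading)
    print(f"glutamate {reading} µM -> deliver acetylcholine: {fired}")
```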

While it’s true neither is a ‘living part,’ I believe both enzymes and organic electronic ion pumps can be found in biological organisms. The insistence on ‘nonliving’ in the press release suggests that scientists in Europe, if nowhere else, are still quite concerned about any hint that they are working on genetically modified organisms (GMO). It’s ironic when you consider that people blithely use enzyme-based cleaning and beauty products, but one can appreciate the scientists’ caution.

Is it time to invest in a ‘brain chip’ company?

This story takes a few twists and turns. First, ‘brain chips’ as they’re sometimes called would allow, theoretically, computers to learn and function like human brains. (Note: There’s another type of ‘brain chip’ which could be implanted in human brains to help deal with diseases such as Parkinson’s and Alzheimer’s. Today’s [June 26, 2015] earlier posting about an artificial neuron points at some of the work being done in this area.)

Returning to the ‘brain chip’ at hand. Second, there’s a company called BrainChip, which has one patent and another pending for, yes, a ‘brain chip’.

The company, BrainChip, founded in Australia and now headquartered in California’s Silicon Valley, recently sparked some investor interest in Australia. From an April 7, 2015 article by Timna Jacks for the Australian Financial Review,

Former mining stock Aziana Limited has whet Australian investors’ appetite for science fiction, with its share price jumping 125 per cent since it announced it was acquiring a US-based tech company called BrainChip, which promises artificial intelligence through a microchip that replicates the neural system of the human brain.

Shares in the company closed at 9¢ before the Easter long weekend, having been priced at just 4¢ when the backdoor listing of BrainChip was announced to the market on March 18.

Creator of the patented digital chip, Peter Van Der Made told The Australian Financial Review the technology has the capacity to learn autonomously, due to its composition of 10,000 biomimic neurons, which, through a process known as synaptic time-dependent plasticity, can form memories and associations in the same way as a biological brain. He said it works 5000 times faster and uses a thousandth of the power of the fastest computers available today.

Mr Van Der Made is inviting technology partners to license the technology for their own chips and products, and is donating the technology to university laboratories in the US for research.

The Netherlands-born Australian, now based in southern California, was inspired to create the brain-like chip in 2004, after working at the IBM Internet Security Systems for two years, where he was chief scientist for behaviour analysis security systems. …

A June 23, 2015 article by Tony Malkovic on phys.org provides a few more details about BrainChip and about the deal,

Mr Van der Made and the company, also called BrainChip, are now based in Silicon Valley in California and he returned to Perth last month as part of the company’s recent merger and listing on the Australian Stock Exchange.

He says BrainChip has the ability to learn autonomously, evolve and associate information and respond to stimuli like a brain.

Mr Van der Made says the company’s chip technology is more than 5,000 times faster than other technologies, yet uses only 1/1,000th of the power.

“It’s a hardware only solution, there is no software to slow things down,” he says.

“It doesn’t execute instructions, it learns and supplies what it has learnt to new information.

“BrainChip is on the road to position itself at the forefront of artificial intelligence,” he says.

“We have a clear advantage, at least 10 years, over anybody else in the market, that includes IBM.”

BrainChip is aiming at the global semiconductor market involving almost anything that involves a microprocessor.

You can find out more about the company, BrainChip, here. The site does have a little more information about the technology,

Spiking Neuron Adaptive Processor (SNAP)

BrainChip’s inventor, Peter van der Made, has created an exciting new Spiking Neural Networking technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is developed as a digital design containing a configurable “sea of biomimic neurons”.

The technology is fast, completely digital, and consumes very low power, making it feasible to integrate large networks into portable battery-operated products, something that has never been possible before.

BrainChip neurons autonomously learn through a process known as STDP (Synaptic Time Dependent Plasticity). BrainChip’s fully digital neurons process input spikes directly in hardware. Sensory neurons convert physical stimuli into spikes. Learning occurs when the input is intense or repeating through feedback, and this is directly correlated to the way the brain learns.
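
BrainChip doesn’t publish its actual learning rule, but STDP (more usually expanded as spike-timing-dependent plasticity) has a standard textbook form: strengthen a synapse when the presynaptic spike arrives just before the postsynaptic one, weaken it when the order is reversed, with the size of the change decaying exponentially with the time gap. A minimal sketch, with standard illustrative constants rather than anything from BrainChip:

```python
import math

# Textbook spike-timing-dependent plasticity (STDP) rule, as a rough sketch of
# what 'learning in the synapse' means here; BrainChip does not publish its
# actual rule, so the constants below are standard illustrative values only.

A_PLUS, A_MINUS = 0.05, 0.055   # learning rates for potentiation / depression
TAU = 20.0                      # time constant in ms

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post: strengthen (causal pairing)
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:    # post before pre: weaken (anti-causal pairing)
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
for t_pre, t_post in [(0, 5), (0, 15), (30, 22), (40, 41)]:
    w += stdp_delta_w(t_pre, t_post)
    print(f"pre={t_pre} ms, post={t_post} ms -> w={w:.3f}")
```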

Computing Artificial Neural Networks (ANNs)

The brain consists of specialized nerve cells that communicate with one another. Each such nerve cell is called a neuron. The inputs are memory nodes called synapses. When the neuron associates information, it produces a ‘spike’ or a ‘spike train’. Each spike is a pulse that triggers a value in the next synapse. Synapses store values, similar to the way a computer stores numbers. In combination, these values determine the function of the neural network. Synapses acquire values through learning.

In Artificial Neural Networks (ANNs) this complex function is generally simplified to a static summation and compare function, which severely limits computational power. BrainChip has redefined how neural networks work, replicating the behaviour of the brain. BrainChip’s artificial neurons are completely digital and biologically realistic, resulting in increased computational power, high speed and extremely low power consumption.
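
For contrast, the ‘static summation and compare function’ being criticized here is simply the classic artificial neuron: multiply inputs by stored weights, add them up, compare to a threshold. A tiny sketch (weights and threshold are arbitrary illustrative values):

```python
# The 'static summation and compare' neuron the text contrasts with spiking
# hardware: multiply inputs by stored synaptic weights, sum, compare to a
# threshold. Weights and threshold here are arbitrary illustrative values.

def static_neuron(inputs, weights, threshold=1.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

print(static_neuron([1, 0, 1], [0.6, 0.9, 0.7]))   # 1.3 > 1.0 -> fires (1)
print(static_neuron([0, 1, 0], [0.6, 0.9, 0.7]))   # 0.9 <= 1.0 -> silent (0)
```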

The Problem with Artificial Neural Networks

Standard ANNs, running on computer hardware, are processed sequentially; the processor runs a program that defines the neural network. This consumes considerable time and, because these neurons are processed sequentially, all this delayed time adds up, resulting in a significant linear decline in network performance with size.

BrainChip neurons are all mapped in parallel. Therefore the performance of the network is not dependent on the size of the network, providing a clear speed advantage. Because there is no decline in performance with network size, and learning also takes place in parallel within each synapse, STDP learning is very fast.

A hardware solution

BrainChip’s digital neural technology is the only custom hardware solution that is capable of STDP learning. The hardware requires no coding and has no software as it evolves learning through experience and user direction.

The BrainChip neuron is unique in that it is completely digital, behaves asynchronously like an analog neuron, and has a higher level of biological realism. It is more sophisticated than software neural models and is many orders of magnitude faster. The BrainChip neuron consists entirely of binary logic gates with no traditional CPU core. Hence, there are no ‘programming’ steps. Learning and training take the place of programming and coding, like a child learning a task for the first time.

Software ‘neurons’, to compensate for limited processing power, are simplified to a point where they do not resemble any of the features of a biological neuron. This is due to the sequential nature of computers, whereby all data has to pass through a central processor in chunks of 16, 32 or 64 bits. In contrast, the brain’s network is parallel and processes the equivalent of millions of data bits simultaneously.

A significantly faster technology

Performing emulation in digital hardware has distinct advantages over software. As software is processed sequentially, one instruction at a time, Software Neural Networks perform slower with increasing size. Parallel hardware does not have this problem and maintains the same speed no matter how large the network is. Another advantage of hardware is that it is more power efficient by several orders of magnitude.

The speed of the BrainChip device is unparalleled in the industry.

For large neural networks a GPU (Graphics Processing Unit) is ~70 times faster than the Intel i7 executing a similar size neural network. The BrainChip neural network is faster still and takes far fewer CPU (Central Processing Unit) cycles, with just a little communication overhead, which means that the CPU is available for other tasks. The BrainChip network also responds much faster than a software network accelerating the performance of the entire system.

The BrainChip network is completely parallel, with no sequential dependencies. This means that the network does not slow down with increasing size.

Endorsed by the neuroscience community

A number of the world’s pre-eminent neuroscientists have endorsed the technology and are agreeing to joint development projects.

BrainChip has the potential to become the de facto standard for all autonomous learning technology and computer products.


BrainChip’s autonomous learning technology patent was granted on the 21st September 2008 (Patent number US 8,250,011 “Autonomous learning dynamic artificial neural computing device and brain inspired system”). BrainChip is the only company in the world to have achieved autonomous learning in a network of Digital Neurons without any software.

A prototype Spiking Neuron Adaptive Processor was designed as a ‘proof of concept’ chip.

The first tests were completed at the end of 2007 and this design was used as the foundation for the US patent application which was filed in 2008. BrainChip has also applied for a continuation-in-part patent filed in 2012, the “Method and System for creating Dynamic Neural Function Libraries”, US Patent Application 13/461,800 which is pending.

Van der Made doesn’t seem to have published any papers on this work and the description of the technology provided on the website is frustratingly vague. There are many acronyms for processes but no mention of what this hardware might be. For example, is it based on a memristor or some kind of atomic ionic switch or something else altogether?

It would be interesting to find out more but, presumably, van der Made wishes to withhold details. There are many companies following the same strategy while pursuing what they view as a business advantage.

A more complex memristor: from two terminals to three for brain-like computing

Researchers have developed a more complex memristor device than has previously been the case, according to an April 6, 2015 Northwestern University news release (also on EurekAlert),

Researchers are always searching for improved technologies, but the most efficient computer possible already exists. It can learn and adapt without needing to be programmed or updated. It has nearly limitless memory, is difficult to crash, and works at extremely fast speeds. It’s not a Mac or a PC; it’s the human brain. And scientists around the world want to mimic its abilities.

Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons.

“Computers are very impressive in many ways, but they’re not equal to the mind,” said Mark Hersam, the Bette and Neison Harris Chair in Teaching Excellence in Northwestern University’s McCormick School of Engineering. “Neurons can achieve very complicated computation with very low power consumption compared to a digital computer.”

A team of Northwestern researchers, including Hersam, has accomplished a new step forward in electronics that could bring brain-like computing closer to reality. The team’s work advances memory resistors, or “memristors,” which are resistors in a circuit that “remember” how much current has flowed through them.

“Memristors could be used as a memory element in an integrated circuit or computer,” Hersam said. “Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if you lose power.”

Current computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable. But there’s a problem: memristors are two-terminal electronic devices, which can only control one voltage channel. Hersam wanted to transform it into a three-terminal device, allowing it to be used in more complex electronic circuits and systems.
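
The phrase “remember how much current has flowed through them” is the essence of the two-terminal memristor: its resistance depends on an internal state that integrates the current. The sketch below is the idealized linear-drift picture often used to introduce memristors, not a model of the MoS2 grain-boundary device (and it says nothing about the third, gate, terminal that is the point of the Northwestern work); all values are illustrative.

```python
# Minimal state-variable sketch of what it means for a memristor to 'remember'
# how much current has flowed: the resistance depends on an internal state x
# that integrates the current. This is the idealized linear-drift picture, not
# a model of the MoS2 grain-boundary device in the paper; all values are
# illustrative.

R_ON, R_OFF = 100.0, 16_000.0   # ohms
K = 1e6                         # state-change rate (per coulomb), arbitrary
DT = 1e-3                       # time step, seconds

def simulate(voltages):
    x = 0.1                     # internal state, 0 (high R) .. 1 (low R)
    for v in voltages:
        r = R_OFF - x * (R_OFF - R_ON)
        i = v / r
        x = min(1.0, max(0.0, x + K * i * DT))   # state integrates the charge
        yield v, r

# A positive pulse train lowers the resistance; the device keeps that state
# even while the applied voltage is zero, then negative pulses raise it again.
for v, r in simulate([1.0] * 5 + [0.0] * 3 + [-1.0] * 5):
    print(f"V={v:+.1f}  R={r:8.0f} ohm")
```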

The memristor is of some interest to a number of other parties, prominent amongst them the University of Michigan’s Professor Wei Lu and HP (Hewlett Packard) Labs, both of whom are mentioned in one of my more recent memristor pieces, a June 26, 2014 post.

Getting back to Northwestern,

Hersam and his team met this challenge by using single-layer molybdenum disulfide (MoS2), an atomically thin, two-dimensional nanomaterial semiconductor. Much like the way fibers are arranged in wood, atoms are arranged in a certain direction–called “grains”–within a material. The sheet of MoS2 that Hersam used has a well-defined grain boundary, which is the interface where two different grains come together.

“Because the atoms are not in the same orientation, there are unsatisfied chemical bonds at that interface,” Hersam explained. “These grain boundaries influence the flow of current, so they can serve as a means of tuning resistance.”

When a large electric field is applied, the grain boundary literally moves, causing a change in resistance. By using MoS2 with this grain boundary defect instead of the typical metal-oxide-metal memristor structure, the team presented a novel three-terminal memristive device that is widely tunable with a gate electrode.

“With a memristor that can be tuned with a third electrode, we have the possibility to realize a function you could not previously achieve,” Hersam said. “A three-terminal memristor has been proposed as a means of realizing brain-like computing. We are now actively exploring this possibility in the laboratory.”

Here’s a link to and a citation for the paper,

Gate-tunable memristive phenomena mediated by grain boundaries in single-layer MoS2 by Vinod K. Sangwan, Deep Jariwala, In Soo Kim, Kan-Sheng Chen, Tobin J. Marks, Lincoln J. Lauhon, & Mark C. Hersam. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.56 Published online 06 April 2015

This paper is behind a paywall but there is a free preview available through ReadCube Access.

Dexter Johnson has written about this latest memristor development in an April 9, 2015 posting on his Nanoclast blog (on the IEEE [Institute for Electrical and Electronics Engineers] website) where he notes this (Note: A link has been removed),

The memristor seems to generate fairly polarized debate, especially here on this website in the comments on stories covering the technology. The controversy seems to fall along the lines that the device that HP Labs’ Stan Williams and Greg Snider developed back in 2008 doesn’t exactly line up with the original theory of the memristor proposed by Leon Chua back in 1971.

It seems the ‘debate’ has evolved from issues about how the memristor is categorized. I wonder if there’s still discussion about whether or not HP Labs is attempting to develop a patent thicket of sorts.

Brain-like computing with optical fibres

Researchers from Singapore and the United Kingdom are exploring an optical fibre approach to brain-like computing (aka neuromorphic computing) as opposed to approaches featuring a memristor or other devices such as a nanoionic device that I’ve written about previously. A March 10, 2015 news item on Nanowerk describes this new approach,

Computers that function like the human brain could soon become a reality thanks to new research using optical fibres made of speciality glass.

Researchers from the Optoelectronics Research Centre (ORC) at the University of Southampton, UK, and Centre for Disruptive Photonic Technologies (CDPT) at the Nanyang Technological University (NTU), Singapore, have demonstrated how neural networks and synapses in the brain can be reproduced, with optical pulses as information carriers, using special fibres made from glasses that are sensitive to light, known as chalcogenides.

“The project, funded under Singapore’s Agency for Science, Technology and Research (A*STAR) Advanced Optics in Engineering programme, was conducted within The Photonics Institute (TPI), a recently established dual institute between NTU and the ORC.”

A March 10, 2015 University of Southampton press release (also on EurekAlert), which originated the news item, describes the nature of the problem that the scientists are trying to address (Note: A link has been removed),

Co-author Professor Dan Hewak from the ORC, says: “Since the dawn of the computer age, scientists have sought ways to mimic the behaviour of the human brain, replacing neurons and our nervous system with electronic switches and memory. Now instead of electrons, light and optical fibres also show promise in achieving a brain-like computer. The cognitive functionality of central neurons underlies the adaptable nature and information processing capability of our brains.”

In the last decade, neuromorphic computing research has advanced software and electronic hardware that mimic brain functions and signal protocols, aimed at improving the efficiency and adaptability of conventional computers.

However, compared to our biological systems, today’s computers are more than a million times less efficient. Simulating five seconds of brain activity takes 500 seconds and needs 1.4 MW of power, compared to the small number of calories burned by the human brain.
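
That “more than a million times less efficient” figure follows directly from the numbers quoted, as a quick back-of-the-envelope check shows (the 20 W figure for the brain is the commonly cited estimate):

```python
# Quick check of the efficiency gap quoted above.
sim_energy = 1.4e6 * 500            # 1.4 MW for 500 s of simulated activity, in joules
brain_energy = 20 * 5               # ~20 W brain doing the real thing for 5 s, in joules
print(sim_energy / brain_energy)    # 7,000,000 -> "more than a million times"
```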

Using conventional fibre drawing techniques, microfibers can be produced from chalcogenide (glasses based on sulphur) that possess a variety of broadband photoinduced effects, which allow the fibres to be switched on and off. This optical switching, or light switching light, can be exploited for a variety of next-generation computing applications capable of processing vast amounts of data in a much more energy-efficient manner.

Co-author Dr Behrad Gholipour explains: “By going back to biological systems for inspiration and using mass-manufacturable photonic platforms, such as chalcogenide fibres, we can start to improve the speed and efficiency of conventional computing architectures, while introducing adaptability and learning into the next generation of devices.”

By exploiting the material properties of the chalcogenide fibres, the team led by Professor Cesare Soci at NTU have demonstrated a range of optical equivalents of brain functions. These include holding a neural resting state and simulating the changes in electrical activity in a nerve cell as it is stimulated. In the proposed optical version of this brain function, the changing properties of the glass act as the varying electrical activity in a nerve cell, and light provides the stimulus to change these properties. This enables switching of a light signal, which is the equivalent to a nerve cell firing.
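
The neural behaviour being reproduced here, a stable resting state plus a threshold-crossing ‘firing’ event when stimulated, is the textbook leaky integrate-and-fire picture. The sketch below describes that biological template in Python, not the chalcogenide fibre itself; all constants are standard illustrative values.

```python
# The neural behaviour the fibre is emulating, a resting state plus
# stimulus-driven firing, captured by the textbook leaky integrate-and-fire
# model. This describes the biological template, not the chalcogenide device
# itself; all constants are standard illustrative values.

V_REST, V_THRESHOLD, V_RESET = -70.0, -54.0, -70.0   # mV
TAU_M, DT = 20.0, 1.0                                # ms

def lif(input_current):
    """Yield membrane potential over time; emit 'spike' when threshold is hit."""
    v = V_REST
    for i in input_current:
        v += DT / TAU_M * (-(v - V_REST) + i)   # leak toward rest + drive
        if v >= V_THRESHOLD:
            yield "spike"
            v = V_RESET
        else:
            yield round(v, 1)

# No stimulus: the 'neuron' holds its resting state; a strong enough stimulus
# drives it over threshold, the analogue of the light-triggered switch.
print(list(lif([0.0] * 5 + [30.0] * 20)))
```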

The research paves the way for scalable brain-like computing systems that enable ‘photonic neurons’ with ultrafast signal transmission speeds, higher bandwidth and lower power consumption than their biological and electronic counterparts.

Professor Cesare Soci said: “This work implies that ‘cognitive’ photonic devices and networks can be effectively used to develop non-Boolean computing and decision-making paradigms that mimic brain functionalities and signal protocols, to overcome bandwidth and power bottlenecks of traditional data processing.”

Here’s a link to and a citation for the paper,

Amorphous Metal-Sulphide Microfibers Enable Photonic Synapses for Brain-Like Computing by Behrad Gholipour, Paul Bastock, Chris Craig, Khouler Khan, Dan Hewak, and Cesare Soci. Advanced Optical Materials DOI: 10.1002/adom.201400472
Article first published online: 15 JAN 2015

This article is behind a paywall.

For anyone interested in memristors and nanoionic devices, here are a few posts (from this blog) to get you started:

Memristors, memcapacitors, and meminductors for faster computers (June 30, 2014)

This second one offers more details and links to previous pieces,

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories (June 25, 2014)

This post is more of a survey including memristors, nanoionic devices, ‘brain jelly’, and more,

Brain-on-a-chip 2014 survey/overview (April 7, 2014)

One comment: this brain-on-a-chip is not to be confused with ‘organs-on-a-chip’ projects, which are attempting to simulate human organs (including the brain) so chemicals and drugs can be tested.

Memristors, memcapacitors, and meminductors for faster computers

While some call memristors a fourth fundamental component alongside resistors, capacitors, and inductors (as mentioned in my June 26, 2014 posting which featured an update of sorts on memristors [scroll down about 80% of the way]), others view memristors as members of an emerging periodic table of circuit elements (as per my April 7, 2010 posting).

It seems scientist Fabio Traversa and his colleagues fall into the ‘periodic table of circuit elements’ camp. From Traversa’s June 27, 2014 posting on nanotechweb.org,

Memristors, memcapacitors and meminductors may retain information even without a power source. Several applications of these devices have already been proposed, yet arguably one of the most appealing is ‘memcomputing’ – a brain-inspired computing paradigm utilizing the ability of emergent nanoscale devices to store and process information on the same physical platform.

A multidisciplinary team of researchers from the Autonomous University of Barcelona in Spain, the University of California San Diego and the University of South Carolina in the US, and the Polytechnic of Turin in Italy, suggest a realization of “memcomputing” based on nanoscale memcapacitors. They propose and analyse a major advancement in using memcapacitive systems (capacitors with memory), as central elements for Very Large Scale Integration (VLSI) circuits capable of storing and processing information on the same physical platform. They name this architecture Dynamic Computing Random Access Memory (DCRAM).

Using the standard configuration of a Dynamic Random Access Memory (DRAM) where the capacitors have been substituted with solid-state based memcapacitive systems, they show the possibility of performing WRITE, READ and polymorphic logic operations by only applying modulated voltage pulses to the memory cells. Being based on memcapacitors, the DCRAM expends very little energy per operation. It is a realistic memcomputing machine that overcomes the von Neumann bottleneck and clearly exhibits intrinsic parallelism and functional polymorphism.
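
I can’t reproduce the paper’s memcapacitive physics, but the basic idea of a ‘capacitor with memory’ used as a DRAM-style cell can be caricatured: the capacitance has two stable values, a large voltage pulse writes by flipping between them, and a small pulse reads the state back from the charge it draws without disturbing it. The thresholds, capacitance values and read rule below are inventions of mine for illustration only, not the authors’ DCRAM scheme.

```python
# Toy picture of a 'capacitor with memory': the cell's capacitance has two
# stable values and is switched by large voltage pulses (WRITE), while a small
# pulse reads the state out non-destructively from the charge it draws.
# This is only a conceptual stand-in for the memcapacitive DCRAM cell in the
# paper; thresholds and capacitance values are invented for illustration.

C_LOW, C_HIGH = 1.0, 3.0      # arbitrary capacitance units
WRITE_THRESHOLD = 1.0         # |pulse| above this flips the internal state
READ_PULSE = 0.2

class MemcapacitiveCell:
    def __init__(self):
        self.c = C_LOW

    def pulse(self, v):
        """Apply a voltage pulse; large pulses write, any pulse returns charge."""
        if v >= WRITE_THRESHOLD:
            self.c = C_HIGH
        elif v <= -WRITE_THRESHOLD:
            self.c = C_LOW
        return self.c * v          # charge drawn by the pulse

    def read(self):
        """Small pulse: infer the stored bit from the charge, state unchanged."""
        return 1 if self.pulse(READ_PULSE) > READ_PULSE * (C_LOW + C_HIGH) / 2 else 0

cell = MemcapacitiveCell()
print(cell.read())        # 0: starts in the low-capacitance state
cell.pulse(+1.5)          # WRITE a 1
print(cell.read())        # 1
cell.pulse(-1.5)          # WRITE a 0
print(cell.read())        # 0
```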

Here’s a link to and a citation for the paper,

Dynamic computing random access memory by F L Traversa, F Bonani, Y V Pershin, and M Di Ventra. Nanotechnology Volume 25 Number 28  doi:10.1088/0957-4484/25/28/285201 Published 27 June 2014

This paper is behind a paywall.

Brains, prostheses, nanotechnology, and human enhancement: summary (part five of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’ which brings together a number of developments in the worlds of neuroscience, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’.

Now for the summary. Ranging from research meant to divulge more about how the brain operates in hopes of healing conditions such as Parkinson’s and Alzheimer’s diseases, to utilizing public engagement exercises (first developed for nanotechnology) for public education and acceptance of brain research, to the development of prostheses for the nervous system such as the Walk Again robotic suit for individuals with paraplegia (and, I expect, quadriplegia [aka tetraplegia] in the future), brain research is huge in terms of its impact socially and economically across the globe.

Until now, I have not included information about neuromorphic engineering (creating computers with the processing capabilities of human brains). My May 16, 2014 posting (Wacky oxide, biological synchronicity, and human brainlike computing) features one of the latest developments, along with this paragraph providing links to overview materials of the field,

As noted earlier, there are other approaches to creating an artificial brain, i.e., neuromorphic engineering. My April 7, 2014 posting is the most recent synopsis posted here; it includes excerpts from a Nanowerk Spotlight article overview along with a mention of the ‘brain jelly’ approach and a discussion of my somewhat extensive coverage of memristors and a mention of work on nanoionic devices. There is also a published roadmap to neuromorphic engineering featuring both analog and digital devices, mentioned in my April 18, 2014 posting.

There is an international brain (artificial and organic) enterprise underway. Meanwhile, work on understanding the brain will lead to new therapies and, inevitably, attempts to enhance intelligence. There are already drugs and magic potions (e.g. oxygenated water in Mental clarity, stamina, endurance — is it in the bottle? Celebrity athletes tout the benefits of oxygenated water, but scientists have their doubts, a May 16, 2014 article by Pamela Fayerman for the Vancouver Sun). A June 19, 2009 posting featured Jamais Cascio’s speculations about augmenting intelligence in an Atlantic magazine article.

While researchers such as Miguel Nicolelis work on exoskeletons (externally worn robotic suits) controlled by the wearer’s thoughts, giving individuals with paraplegia the ability to walk, researchers from one of Germany’s Fraunhofer Institutes reveal a different technology for achieving the same ends. From a May 16, 2014 news item on Nanowerk,

People with severe injuries to their spinal cord currently have no prospect of recovery and remain confined to their wheelchairs. Now, all that could change with a new treatment that stimulates the spinal cord using electric impulses. The hope is that the technique will help paraplegic patients learn to walk again. From June 3 – 5 [2014], Fraunhofer researchers will be at the Sensor + Test measurement fair in Nürnberg to showcase the implantable microelectrode sensors they have developed in the course of pre-clinical development work (Hall 12, Booth 12-537).

A May 14, 2014 Fraunhofer Institute news release, which originated the news item, provides more details about this technology along with an image of the implantable microelectrode sensors,

The implantable microelectrode sensors are flexible and wafer-thin. © Fraunhofer IMM

Now a consortium of European research institutions and companies wants to get affected patients quite literally back on their feet. In the EU’s [European Union’s] NEUWalk project, which has been awarded funding of some nine million euros, researchers are working on a new method of treatment designed to restore motor function in patients who have suffered severe injuries to their spinal cord. The technique relies on electrically stimulating the nerve pathways in the spinal cord. “In the injured area, the nerve cells have been damaged to such an extent that they no longer receive usable information from the brain, so the stimulation needs to be delivered beneath that,” explains Dr. Peter Detemple, head of department at the Fraunhofer Institute for Chemical Technology’s Mainz branch (IMM) and NEUWalk project coordinator. To do this, Detemple and his team are developing flexible, wafer-thin microelectrodes that are implanted within the spinal canal on the spinal cord. These multichannel electrode arrays stimulate the nerve pathways with electric impulses that are generated by the accompanying microprocessor-controlled neurostimulator. “The various electrodes of the array are located around the nerve roots responsible for locomotion. By delivering a series of pulses, we can trigger those nerve roots in the correct order to provoke sequences of movements and support the motor function,” says Detemple.

Researchers from the consortium have already successfully conducted tests on rats in which the spinal cord had not been completely severed. As well as stimulating the spinal cord, the rats were given a combination of medicine and rehabilitation training. Afterwards the animals were able not only to walk but also to run, climb stairs and surmount obstacles. “We were able to trigger specific movements by delivering certain sequences of pulses to the various electrodes implanted on the spinal cord,” says Detemple. The research scientist and his team believe that the same approach could help people to walk again, too. “We hope that we will be able to transfer the results of our animal testing to people. Of course, people who have suffered injuries to their spinal cord will still be limited when it comes to sport or walking long distances. The first priority is to give them a certain level of independence so that they can move around their apartment and look after themselves, for instance, or walk for short distances without requiring assistance,” says Detemple.

Researchers from the NEUWalk project intend to try out their system on two patients this summer. In this case, the patients are not completely paraplegic, which means there is still some limited communication between the brain and the legs. The scientists are currently working on tailored implants for the intervention. “However, even if both trials are a success, it will still be a few years before the system is ready for the general market. First, the method has to undergo clinical studies and demonstrate its effectiveness among a wider group of patients,” says Detemple.

Patients with Parkinson’s disease could also benefit from the neural prostheses. The most well-known symptoms of the disease are trembling, extreme muscle tremors and a short, [emphasis mine] stooped gait that has a profound effect on patients’ mobility. Until now this neurodegenerative disorder has mostly been treated with dopamine agonists – drugs that chemically imitate the effects of dopamine but that often lead to severe side effects when taken over a longer period of time. Once the disease has reached an advanced stage, doctors often turn to deep brain stimulation. This involves a complex operation to implant electrodes in specific parts of the brain so that the nerve cells in the region can be stimulated or suppressed as required. In the NEUWalk project, researchers are working on electric spinal cord stimulation – an altogether less dangerous intervention that should however ease the symptoms of Parkinson’s disease just as effectively. “Initial animal testing has yielded some very promising results,” says Detemple.

(For anyone interested in the NEUWalk project, you can find more here.) Note the reference to Parkinson’s in the context of work designed for people with paraplegia. Brain research and prosthetics (specifically neuroprosthetics or neural prosthetics) are interconnected. As for the nanotechnology connection, in its role as an enabling technology it has provided some of the tools that make these efforts possible. It has also made some of the work in neuromorphic engineering (attempts to create an artificial brain that mimics the human brain) possible. It is a given that research on the human brain will inform efforts in neuromorphic engineering and that attempts will be made to create prostheses for the brain (cyborg brain) and other enhancements.

One final comment: I’m not so sure that transferring approaches and techniques developed to gain public acceptance of nanotechnology is necessarily going to be effective. (Harthorn seemed to be suggesting in her presentation to the Presidential Commission for the Study of Bioethical Issues that these ‘nano’ approaches could be adopted. Other researchers [Caulfield with the genome and Racine with previous neuroscience efforts] also suggested their experience could be transferred. While some of that is likely true, it should be noted that some self-interest may be involved, as brain research is likely to be a fresh source of funding for social science researchers with experience in nanotechnology and genomics who may be finding their usual funding sources less generous than previously.)

The likelihood there will be a substantive public panic over brain research is higher than it ever was for a nanotechnology panic (I am speaking with the benefit of hindsight re: nano panics). Everyone understands the word, ‘brain’, far fewer understand the word ‘nanotechnology’ which means that the level of interest is lower and people are less likely to get disturbed by an obscure technology. (The GMO panic gained serious traction with the ‘Frankenfood’ branding and when it fused rather unexpectedly with another research story,  stem cell research. In the UK, one can also add the panic over ‘mad cow’ disease or Creutzfeldt-Jakob disease (CJD), as it’s also known, to the mix. It was the GMO and other assorted panics which provided the impetus for much of the public engagement funding for nanotechnology.)

All one has to do in this instance is start discussions about changing someone’s brain and cyborgs and these researchers may find they have a much more volatile situation on their hands. As well, everyone (the general public and civil society groups/activists, not just the social science and science researchers) involved in the nanotechnology public engagement exercises has learned from the experience. In the meantime, pop culture concerns itself with zombies and we all know what they like to eat.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part two: BRAIN and ethics in the US with some Canucks (not the hockey team) participating (May 19, 2014)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and, according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain – seem technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”
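
As an aside, and this is strictly my own toy sketch rather than anything from Hasler’s tool chain, the ‘field-programmable’ idea behind the FPAA can be illustrated in a few lines of Python: a fixed set of computational blocks plus a ‘switch state’ that decides how they are chained together, so reprogramming means rewriting the switch state instead of fabricating a new chip (all of the names below are hypothetical),

```python
# Toy illustration of "field-programmable" routing (hypothetical names, not
# Hasler's FPAA tool chain): fixed computational blocks plus a switch state
# loaded at run time that decides how signals flow between them.

def amplify(x, gain=2.0):          # stand-in for an analog gain stage
    return gain * x

def attenuate(x, factor=0.5):      # stand-in for a passive divider
    return factor * x

def rectify(x):                    # stand-in for a diode-like nonlinearity
    return max(0.0, x)

BLOCKS = {"amp": amplify, "att": attenuate, "rect": rectify}

class ToyFPAA:
    """A chain of blocks selected by 'configuration bits' loaded at run time."""
    def __init__(self):
        self.route = []            # ordered list of block names (the "switch state")

    def program(self, route):
        # Reconfiguration = rewriting the switch state, not refabricating hardware.
        self.route = list(route)

    def run(self, signal):
        for name in self.route:
            signal = BLOCKS[name](signal)
        return signal

chip = ToyFPAA()
chip.program(["amp", "rect"])          # one "application"
print(chip.run(-1.5), chip.run(1.5))   # 0.0 3.0
chip.program(["att", "amp", "amp"])    # same hardware, new configuration
print(chip.run(1.0))                   # 2.0
```

A real FPAA routes continuous analog signals through floating-gate switches rather than calling Python functions, but the programming model (same silicon, new configuration) is the point.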

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr. Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; Note: you do need some tolerance for ‘not knowing’) of the problems facing neuromorphic engineering, along with suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies do, and does not take advantage of the capabilities of IC technology that are currently visible, it will not be successful.
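
The arithmetic in the first excerpt is easy to check for yourself. Here’s a quick back-of-the-envelope verification in Python (my own sketch, using only the figures quoted above),

```python
# Back-of-the-envelope check of the roadmap's figures; all the inputs come
# from the excerpt above, nothing else is assumed.

neurons       = 1e12              # quoted neuron count
brain_power_w = 20.0              # quoted average brain power, in watts
total_mac_s   = 100_000 * 1e15    # 100,000 PMAC = 1e20 multiply-accumulates per second

power_per_neuron_w = brain_power_w / neurons   # 2e-11 W = 20 pW, as quoted
mac_per_neuron_s   = total_mac_s / neurons     # ~1e8 MAC/s for a 1000-synapse neuron
efficiency_mac_w   = total_mac_s / brain_power_w

print(f"{power_per_neuron_w * 1e12:.0f} pW per neuron")       # 20 pW
print(f"{mac_per_neuron_s:.0e} MAC/s per neuron")             # 1e+08
print(f"{efficiency_mac_w / 1e15:.0f} PMAC/W")                # 5000 PMAC/W
print(f"{efficiency_mac_w * 1e-6 / 1e12:.0f} TMAC per microwatt")  # 5 TMAC/uW
```

The 20 pW, 5000 PMAC/W and 5 TMAC/μW figures all come straight out, which is reassuring given how startling they look next to a conventional processor.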

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties in getting to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering; the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.
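
For anyone wondering why dense two-dimensional synapse arrays keep coming up, the standard argument is that a resistive crossbar performs a vector-matrix multiply essentially for free: each column current is the sum of the input voltages weighted by the cross-point conductances (Ohm’s law plus Kirchhoff’s current law). Here’s a minimal numerical sketch of that idea; it’s mine, not tied to the Michigan devices or to anything in the roadmap,

```python
import numpy as np

# Toy resistive crossbar: rows carry input voltages, every cross-point holds a
# conductance (the "synaptic weight"), and each column current is the weighted
# sum I_j = sum_i V_i * G_ij, i.e. an analog multiply-accumulate per column.

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(30, 30))  # conductances in siemens; 30 x 30 as in the Kim et al. array
V = rng.uniform(0.0, 0.3, size=30)          # row read voltages in volts

I_columns = V @ G                           # Kirchhoff summation along each column
print(I_columns.shape)                      # (30,) -- one weighted sum per column, no clocked MACs needed
```

That’s the appeal: the multiply and the accumulate happen in the physics of the array, in parallel, rather than in sequenced digital arithmetic.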

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how the technology developed for these large systems will impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which, although possible for large users, would not be commonly found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into the commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which has included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality while requiring ultra-low power levels. Further, we see in companies like Audience some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems, and that progress should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at the power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
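
The 10 TMAC/mW efficiency figure quoted above invites a quick sanity check. This is my own rough estimate, not the authors’: I’m assuming each neuron needs roughly 10^8 MAC/s, which is what the 100,000 PMAC for 10^12 neurons figure earlier in the roadmap implies, and I’m ignoring I/O, memory and everything else off the compute path,

```python
# Rough compute-power estimate at the roadmap's quoted efficiency.
# Assumption (mine): ~1e8 MAC/s per neuron, implied by the earlier
# 100,000 PMAC for 1e12 neurons figure; I/O and memory are ignored.

efficiency_mac_per_w = 10e12 * 1000    # 10 TMAC/mW = 1e16 MAC/s per watt
mac_per_neuron_per_s = 1e8

for neurons in (3e7, 1e9, 1e11):       # multi-chip range quoted above, plus a brain-ish count
    watts = neurons * mac_per_neuron_per_s / efficiency_mac_per_w
    print(f"{neurons:.0e} neurons -> ~{watts:g} W of compute")
# 3e+07 -> ~0.3 W, 1e+09 -> ~10 W, 1e+11 -> ~1000 W
```

Under those assumptions even the larger multi-chip systems stay within a power budget that digital-only hardware would find hard to match, which is the roadmap’s recurring argument.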

I have a casual observation to make. While the authors of the roadmap came to this conclusion, "This study concludes that useful neural computation machines based on biological principles at the size of the human brain seem technically within our grasp," they're also leaving themselves some wiggle room, because the truth is no one knows whether copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent postings on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’: has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focussing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing it up to date, April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].
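
Since PyNN comes up above as one of FACETS’ lasting outputs, here’s roughly what a simulator-independent model looks like. This is a minimal sketch that assumes PyNN 0.8+ with the NEST backend installed (swap the import for pyNN.neuron or pyNN.brian2 and the model code stays the same); the exact API can vary between versions,

```python
import pyNN.nest as sim   # backend choice; the rest of the script is backend-agnostic

sim.setup(timestep=0.1)   # simulation timestep in ms

noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))  # 20 Hz Poisson spike sources
cells = sim.Population(50, sim.IF_cond_exp())                   # leaky integrate-and-fire neurons

# Connect sources to neurons with 10% probability and fixed synaptic weights.
sim.Projection(noise, cells,
               sim.FixedProbabilityConnector(p_connect=0.1),
               sim.StaticSynapse(weight=0.01, delay=1.0))

cells.record('spikes')
sim.run(500.0)            # run 500 ms of biological time

spiketrains = cells.get_data().segments[0].spiketrains
print(sum(len(st) for st in spiketrains), "spikes recorded")
sim.end()
```

The simulator-independence is the whole point: the same description can, in principle, drive a software simulator or neuromorphic hardware such as the BrainScaleS system mentioned above.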

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrated a new single-element nanoscale device, based on the successfully commercialized phase change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single-element electronic synapse capable of both modulating the time constant and realizing the different synaptic plasticity forms, while consuming picojoule-level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).
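
The ‘plasticity’ these devices emulate usually means some form of spike-timing-dependent plasticity (STDP): a synapse strengthens when its input spike arrives just before the output neuron fires and weakens when it arrives just after. Here’s a minimal software sketch of the generic, textbook pair-based STDP rule; it is not the measured behaviour of the NOMFET or of the Stanford device, just the shape of the rule they aim to reproduce,

```python
import math

# Generic pair-based STDP rule: the weight change depends on the time
# difference between a presynaptic and a postsynaptic spike (dt = t_post - t_pre).

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes (arbitrary)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(dt_ms):
    """Return the weight change for one pre/post spike pair."""
    if dt_ms > 0:     # pre fired before post -> potentiate
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:   # pre fired after post -> depress
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

w = 0.5
for dt in (+5.0, +5.0, -15.0):            # two causal pairings, one anti-causal
    w = min(1.0, max(0.0, w + stdp_dw(dt)))
print(round(w, 4))                         # weight drifts up, then partially back down
```

The hardware versions implement roughly this curve in device physics (charge trapping, phase change, ionic motion) rather than in arithmetic, which is where the hoped-for energy savings come from.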

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).
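
If, like me, you want a concrete sense of how a memristor ‘remembers’, the simplest picture is the idealized linear ion-drift model from the 2008 HP work: an internal state variable integrates the charge that has flowed through the device, the resistance depends on that state, and the state stays put when the drive is removed. Here’s a rough numerical sketch (idealized; the parameter values are arbitrary and real devices are messier and non-linear),

```python
# Idealized linear ion-drift memristor (after Strukov et al., 2008): the state
# x in [0, 1] integrates current, the resistance interpolates between R_ON and
# R_OFF, and when the voltage is removed the state -- and hence the programmed
# resistance -- stays where it was. That persistence is the "memory".

R_ON, R_OFF = 100.0, 16_000.0     # ohms
K = 1e4                           # state-change rate constant (arbitrary units)
DT = 1e-4                         # seconds per simulation step

def simulate(voltage_steps, x=0.5):
    for v in voltage_steps:
        r = R_ON * x + R_OFF * (1.0 - x)        # current resistance
        i = v / r                               # Ohm's law
        x = min(1.0, max(0.0, x + K * i * DT))  # state integrates the charge
    return x, R_ON * x + R_OFF * (1.0 - x)

x, r = simulate([1.0] * 200)       # apply +1 V for 20 ms: resistance drops
print(round(x, 3), round(r))
x, r = simulate([0.0] * 200, x=x)  # remove the drive: state and resistance persist
print(round(x, 3), round(r))
```

Used as a synapse, the programmed conductance is the stored weight, and reading it with small voltages barely disturbs the state, which is why memristor crossbars keep showing up in these discussions.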

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings; alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate-based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decisions but rather a projection of frequency-fractal operations in a real space; it is an engineering perspective on Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic-gate-based computing within the framework of Turing; our decision-making protocol is not a logical reduction of decisions but rather a projection of frequency-fractal operations in a real space, an engineering perspective on Gödel’s incompleteness theorem.
2) We do not need to write any software; the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously; the system holds an astronomically large number of ‘if’ arguments and their associated ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode rather than the conventional antenna-receiver model; since fractal-based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological studies to develop our own human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in the context of a presentation in Amsterdam, Netherlands.