Tag Archives: neuromorphic engineering

A more complex memristor: from two terminals to three for brain-like computing

Researchers have developed a more complex memristor device than has previously been the case, according to an April 6, 2015 Northwestern University news release (also on EurekAlert),

Researchers are always searching for improved technologies, but the most efficient computer possible already exists. It can learn and adapt without needing to be programmed or updated. It has nearly limitless memory, is difficult to crash, and works at extremely fast speeds. It’s not a Mac or a PC; it’s the human brain. And scientists around the world want to mimic its abilities.

Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons.

“Computers are very impressive in many ways, but they’re not equal to the mind,” said Mark Hersam, the Bette and Neison Harris Chair in Teaching Excellence in Northwestern University’s McCormick School of Engineering. “Neurons can achieve very complicated computation with very low power consumption compared to a digital computer.”

A team of Northwestern researchers, including Hersam, has accomplished a new step forward in electronics that could bring brain-like computing closer to reality. The team’s work advances memory resistors, or “memristors,” which are resistors in a circuit that “remember” how much current has flowed through them.

“Memristors could be used as a memory element in an integrated circuit or computer,” Hersam said. “Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if you lose power.”

Current computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable. But there’s a problem: memristors are two-terminal electronic devices, which can only control one voltage channel. Hersam wanted to transform it into a three-terminal device, allowing it to be used in more complex electronic circuits and systems.
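That ‘remembers how much current has flowed through them’ behaviour is easy to illustrate with the linear ion-drift model HP Labs used to describe its 2008 device. Here’s a minimal Python sketch of that generic model (not the Northwestern device; the parameter values are arbitrary, illustrative assumptions),

```python
# Toy memristor based on the HP Labs linear ion-drift model (2008).
# Parameter values are illustrative assumptions, not measurements of any real device.

R_ON, R_OFF = 100.0, 16e3    # resistance when fully doped / fully undoped (ohms)
D = 10e-9                    # device thickness (m)
MU = 1e-14                   # dopant mobility (m^2 V^-1 s^-1)

class Memristor:
    def __init__(self, w=0.5 * D):
        self.w = w                       # doped-region width: the "memory" state

    def resistance(self):
        x = self.w / D
        return R_ON * x + R_OFF * (1.0 - x)

    def apply_current(self, i, dt):
        # The state moves in proportion to the charge q = i*dt that flows through,
        # which is exactly the "remembers how much current has flowed" behaviour.
        self.w = min(max(self.w + MU * R_ON / D * i * dt, 0.0), D)

m = Memristor()
print(f"before pulse: {m.resistance():8.1f} ohms")
for _ in range(2000):                    # 20 ms positive current pulse
    m.apply_current(1e-3, 1e-5)
print(f"after pulse:  {m.resistance():8.1f} ohms")
for _ in range(2000):                    # drive removed: no current, no further change
    m.apply_current(0.0, 1e-5)
print(f"power off:    {m.resistance():8.1f} ohms   (state is retained)")
```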

The memristor is of interest to a number of other parties, prominent amongst them the University of Michigan's Professor Wei Lu and HP (Hewlett Packard) Labs, both of whom are mentioned in one of my more recent memristor pieces, a June 26, 2014 post.

Getting back to Northwestern,

Hersam and his team met this challenge by using single-layer molybdenum disulfide (MoS2), an atomically thin, two-dimensional nanomaterial semiconductor. Much like the way fibers are arranged in wood, atoms are arranged in a certain direction–called “grains”–within a material. The sheet of MoS2 that Hersam used has a well-defined grain boundary, which is the interface where two different grains come together.

“Because the atoms are not in the same orientation, there are unsatisfied chemical bonds at that interface,” Hersam explained. “These grain boundaries influence the flow of current, so they can serve as a means of tuning resistance.”

When a large electric field is applied, the grain boundary literally moves, causing a change in resistance. By using MoS2 with this grain boundary defect instead of the typical metal-oxide-metal memristor structure, the team presented a novel three-terminal memristive device that is widely tunable with a gate electrode.

“With a memristor that can be tuned with a third electrode, we have the possibility to realize a function you could not previously achieve,” Hersam said. “A three-terminal memristor has been proposed as a means of realizing brain-like computing. We are now actively exploring this possibility in the laboratory.”
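One crude way to picture what the third terminal adds: in a two-terminal memristor the resistance is set only by the drive history, whereas here a gate voltage can also shift the accessible resistance range. The sketch below is a purely schematic toy along those lines; it does not model the grain-boundary physics, and the exponential gate dependence and all numbers are assumptions made for illustration only,

```python
# Schematic three-terminal memristive element: the drain-source resistance depends on
# (a) a memristive state set by past current flow, and (b) a gate voltage that shifts
# the accessible resistance window. An illustrative toy, not a model of the MoS2 device;
# the exponential gate dependence and all values are assumptions.
import math

class GatedMemristor:
    def __init__(self, r_min=1e3, r_max=1e6, x=0.5):
        self.r_min, self.r_max, self.x = r_min, r_max, x   # x in [0, 1] is the memory state

    def resistance(self, v_gate):
        # Gate voltage scales the whole window (more negative gate -> more resistive).
        gate_factor = math.exp(-v_gate / 2.0)              # assumed 2 V characteristic scale
        r = self.r_min * self.x + self.r_max * (1.0 - self.x)
        return r * gate_factor

    def write(self, charge):
        # Drain-source charge flow nudges the memory state, as in a two-terminal memristor.
        self.x = min(max(self.x + 0.1 * charge, 0.0), 1.0)

m = GatedMemristor()
for vg in (-2.0, 0.0, 2.0):
    print(f"Vg = {vg:+.0f} V -> R = {m.resistance(vg):10.0f} ohms")
m.write(+3.0)                                              # program the state with a drive pulse
print(f"after programming, Vg = 0 V -> R = {m.resistance(0.0):10.0f} ohms")
```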

Here’s a link to and a citation for the paper,

Gate-tunable memristive phenomena mediated by grain boundaries in single-layer MoS2 by Vinod K. Sangwan, Deep Jariwala, In Soo Kim, Kan-Sheng Chen, Tobin J. Marks, Lincoln J. Lauhon, & Mark C. Hersam. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.56 Published online 06 April 2015

This paper is behind a paywall but a free preview is available through ReadCube Access.

Dexter Johnson has written about this latest memristor development in an April 9, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) where he notes this (Note: A link has been removed),

The memristor seems to generate fairly polarized debate, especially here on this website in the comments on stories covering the technology. The controversy seems to fall along the lines that the device that HP Labs’ Stan Williams and Greg Snider developed back in 2008 doesn’t exactly line up with the original theory of the memristor proposed by Leon Chua back in 1971.

It seems the ‘debate’ has evolved from issues about how the memristor is categorized. I wonder if there’s still discussion about whether or not HP Labs is attempting to develop a patent thicket of sorts.

Brain-like computing with optical fibres

Researchers from Singapore and the United Kingdom are exploring an optical fibre approach to brain-like computing (aka neuromorphic computing) as opposed to approaches featuring a memristor or other devices such as a nanoionic device that I’ve written about previously. A March 10, 2015 news item on Nanowerk describes this new approach,

Computers that function like the human brain could soon become a reality thanks to new research using optical fibres made of speciality glass.

Researchers from the Optoelectronics Research Centre (ORC) at the University of Southampton, UK, and Centre for Disruptive Photonic Technologies (CDPT) at the Nanyang Technological University (NTU), Singapore, have demonstrated how neural networks and synapses in the brain can be reproduced, with optical pulses as information carriers, using special fibres made from glasses that are sensitive to light, known as chalcogenides.

“The project, funded under Singapore’s Agency for Science, Technology and Research (A*STAR) Advanced Optics in Engineering programme, was conducted within The Photonics Institute (TPI), a recently established dual institute between NTU and the ORC.”

A March 10, 2015 University of Southampton press release (also on EurekAlert), which originated the news item, describes the nature of the problem that the scientists are trying to address (Note: A link has been removed),

Co-author Professor Dan Hewak from the ORC, says: “Since the dawn of the computer age, scientists have sought ways to mimic the behaviour of the human brain, replacing neurons and our nervous system with electronic switches and memory. Now instead of electrons, light and optical fibres also show promise in achieving a brain-like computer. The cognitive functionality of central neurons underlies the adaptable nature and information processing capability of our brains.”

In the last decade, neuromorphic computing research has advanced software and electronic hardware that mimic brain functions and signal protocols, aimed at improving the efficiency and adaptability of conventional computers.

However, compared to our biological systems, today’s computers are more than a million times less efficient. Simulating five seconds of brain activity takes 500 seconds and needs 1.4 MW of power, compared to the small number of calories burned by the human brain.

Using conventional fibre drawing techniques, microfibers can be produced from chalcogenide (glasses based on sulphur) that possess a variety of broadband photoinduced effects, which allow the fibres to be switched on and off. This optical switching, or light switching light, can be exploited for a variety of next generation computing applications capable of processing vast amounts of data in a much more energy-efficient manner.

Co-author Dr Behrad Gholipour explains: “By going back to biological systems for inspiration and using mass-manufacturable photonic platforms, such as chalcogenide fibres, we can start to improve the speed and efficiency of conventional computing architectures, while introducing adaptability and learning into the next generation of devices.”

By exploiting the material properties of the chalcogenide fibres, the team led by Professor Cesare Soci at NTU have demonstrated a range of optical equivalents of brain functions. These include holding a neural resting state and simulating the changes in electrical activity in a nerve cell as it is stimulated. In the proposed optical version of this brain function, the changing properties of the glass act as the varying electrical activity in a nerve cell, and light provides the stimulus to change these properties. This enables switching of a light signal, which is the equivalent to a nerve cell firing.

The research paves the way for scalable brain-like computing systems that enable ‘photonic neurons’ with ultrafast signal transmission speeds, higher bandwidth and lower power consumption than their biological and electronic counterparts.

Professor Cesare Soci said: “This work implies that ‘cognitive’ photonic devices and networks can be effectively used to develop non-Boolean computing and decision-making paradigms that mimic brain functionalities and signal protocols, to overcome bandwidth and power bottlenecks of traditional data processing.”
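For a rough sense of the ‘more than a million times less efficient’ figure quoted above, the numbers can be checked directly: 500 seconds at 1.4 MW versus five seconds of a brain running at roughly 20 W (the brain-power figure quoted later in this archive; using it here is my only added assumption),

```python
# Back-of-envelope check of the efficiency comparison quoted above.
# The ~20 W brain figure is an assumption taken from the roadmap discussed later in this archive.
simulation_energy = 500 * 1.4e6      # 500 s at 1.4 MW, in joules
brain_energy      = 5 * 20.0         # 5 s of brain activity at ~20 W, in joules

print(f"simulation: {simulation_energy:.2e} J")                # 7.00e+08 J
print(f"brain:      {brain_energy:.2e} J")                     # 1.00e+02 J
print(f"ratio:      {simulation_energy / brain_energy:.1e}x")  # ~7e+06, i.e. millions of times
```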

Here’s a link to and a citation for the paper,

Amorphous Metal-Sulphide Microfibers Enable Photonic Synapses for Brain-Like Computing by Behrad Gholipour, Paul Bastock, Chris Craig, Khouler Khan, Dan Hewak, and Cesare Soci. Advanced Optical Materials DOI: 10.1002/adom.201400472
Article first published online: 15 JAN 2015


This article is behind a paywall.

For anyone interested in memristors and nanoionic devices, here are a few posts (from this blog) to get you started:

Memristors, memcapacitors, and meminductors for faster computers (June 30, 2014)

This second one offers more details and links to previous pieces,

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories (June 25, 2014)

This post is more of a survey including memristors, nanoionic devices, 'brain jelly', and more,

Brain-on-a-chip 2014 survey/overview (April 7, 2014)

One comment: this brain-on-a-chip is not to be confused with 'organs-on-a-chip' projects, which attempt to simulate human organs (including the brain) so chemicals and drugs can be tested.

Memristors, memcapacitors, and meminductors for faster computers

While some call memristors a fourth fundamental component alongside resistors, capacitors, and inductors (as mentioned in my June 26, 2014 posting which featured an update of sorts on memristors [scroll down about 80% of the way]), others view memristors as members of an emerging periodic table of circuit elements (as per my April 7, 2010 posting).

It seems scientist Fabio Traversa and his colleagues fall into the 'periodic table of circuit elements' camp. From Traversa's June 27, 2014 posting on nanotechweb.org,

Memristors, memcapacitors and meminductors may retain information even without a power source. Several applications of these devices have already been proposed, yet arguably one of the most appealing is ‘memcomputing’ – a brain-inspired computing paradigm utilizing the ability of emergent nanoscale devices to store and process information on the same physical platform.

A multidisciplinary team of researchers from the Autonomous University of Barcelona in Spain, the University of California San Diego and the University of South Carolina in the US, and the Polytechnic of Turin in Italy, suggest a realization of “memcomputing” based on nanoscale memcapacitors. They propose and analyse a major advancement in using memcapacitive systems (capacitors with memory), as central elements for Very Large Scale Integration (VLSI) circuits capable of storing and processing information on the same physical platform. They name this architecture Dynamic Computing Random Access Memory (DCRAM).

Using the standard configuration of a Dynamic Random Access Memory (DRAM) where the capacitors have been substituted with solid-state based memcapacitive systems, they show the possibility of performing WRITE, READ and polymorphic logic operations by only applying modulated voltage pulses to the memory cells. Being based on memcapacitors, the DCRAM expends very little energy per operation. It is a realistic memcomputing machine that overcomes the von Neumann bottleneck and clearly exhibits intrinsic parallelism and functional polymorphism.
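To make the memcapacitor idea a little more concrete, here is a toy model of a single memory cell whose capacitance depends on an internal state: a large voltage pulse writes the state and a small pulse reads the stored charge without disturbing it. This is only a sketch of the general memcapacitor concept (it covers WRITE and READ, not the polymorphic logic operations), and the thresholds and values are assumptions rather than the paper's DCRAM design,

```python
# Toy memcapacitive memory cell: capacitance depends on an internal binary state.
# Thresholds and capacitance values are illustrative assumptions, not the paper's DCRAM circuit.

C_LOW, C_HIGH = 1e-15, 5e-15     # two capacitance levels (farads)
V_WRITE = 1.0                    # pulses above this magnitude switch the state (volts)

class MemcapacitorCell:
    def __init__(self):
        self.state = 0           # 0 -> C_LOW, 1 -> C_HIGH

    def capacitance(self):
        return C_HIGH if self.state else C_LOW

    def pulse(self, v):
        """Apply a voltage pulse and return the charge it moves (the READ signal)."""
        if v >= V_WRITE:         # large positive pulse: WRITE a 1
            self.state = 1
        elif v <= -V_WRITE:      # large negative pulse: WRITE a 0
            self.state = 0
        return self.capacitance() * v   # q = C(state) * V

cell = MemcapacitorCell()
cell.pulse(+1.5)                              # WRITE 1
q1 = cell.pulse(0.2)                          # READ with a small, non-destructive pulse
cell.pulse(-1.5)                              # WRITE 0
q0 = cell.pulse(0.2)
print(f"read charge for a stored 1: {q1:.1e} C")   # larger charge -> logic 1
print(f"read charge for a stored 0: {q0:.1e} C")
```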

Here’s a link to and a citation for the paper,

Dynamic computing random access memory by F L Traversa, F Bonani, Y V Pershin, and M Di Ventra. Nanotechnology Volume 25 Number 28  doi:10.1088/0957-4484/25/28/285201 Published 27 June 2014

This paper is behind a paywall.

Brains, prostheses, nanotechnology, and human enhancement: summary (part five of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’ which brings together a number of developments in the worlds of neuroscience, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’.

Now for the summary. Ranging from research meant to divulge more about how the brain operates in hopes of healing conditions such as Parkinson's and Alzheimer's diseases, to utilizing public engagement exercises (first developed for nanotechnology) for public education and acceptance of brain research, to the development of prostheses for the nervous system such as the Walk Again robotic suit for individuals with paraplegia (and, I expect, quadriplegia [aka tetraplegia] in the future), brain research is huge in terms of its impact socially and economically across the globe.

Until now, I have not included information about neuromorphic engineering (creating computers with the processing capabilities of human brains). My May 16, 2014 posting (Wacky oxide, biological synchronicity, and human brainlike computing) features one of the latest developments along with this paragraph providing links to overview materials of the field,

As noted earlier, there are other approaches to creating an artificial brain, i.e., neuromorphic engineering. My April 7, 2014 posting is the most recent synopsis posted here; it includes excerpts from a Nanowerk Spotlight article overview along with a mention of the ‘brain jelly’ approach and a discussion of my somewhat extensive coverage of memristors and a mention of work on nanoionic devices. There is also a published roadmap to neuromorphic engineering featuring both analog and digital devices, mentioned in my April 18, 2014 posting.

There is an international brain (artificial and organic) enterprise underway. Meanwhile, work understanding the brain will lead to new therapies and, inevitably, attempts to enhance intelligence. There are already drugs and magic potions (e.g. oxygenated water in Mental clarity, stamina, endurance — is it in the bottle? Celebrity athletes tout the benefits of oxygenated water, but scientists have their doubts, a May 16, 2014 article by Pamela Fayerman for the Vancouver Sun). A June 19, 2009 posting featured Jamais Cascio's speculations about augmenting intelligence in an Atlantic magazine article.

While researchers such as Miguel Nicolelis work on exoskeletons (externally worn robotic suits) controlled by the wearer's thoughts and giving individuals with paraplegia the ability to walk, researchers from one of Germany's Fraunhofer Institutes reveal a different technology for achieving the same ends. From a May 16, 2014 news item on Nanowerk,

People with severe injuries to their spinal cord currently have no prospect of recovery and remain confined to their wheelchairs. Now, all that could change with a new treatment that stimulates the spinal cord using electric impulses. The hope is that the technique will help paraplegic patients learn to walk again. From June 3 – 5 [2014], Fraunhofer researchers will be at the Sensor + Test measurement fair in Nürnberg to showcase the implantable microelectrode sensors they have developed in the course of pre-clinical development work (Hall 12, Booth 12-537).

A May 14, 2014 Fraunhofer Institute news release, which originated the news item, provides more details about this technology along with an image of the implantable microelectrode sensors,

The implantable microelectrode sensors are flexible and wafer-thin. © Fraunhofer IMM


Now a consortium of European research institutions and companies want to get affected patients quite literally back on their feet. In the EU’s [European Union’s] NEUWalk project, which has been awarded funding of some nine million euros, researchers are working on a new method of treatment designed to restore motor function in patients who have suffered severe injuries to their spinal cord. The technique relies on electrically stimulating the nerve pathways in the spinal cord. “In the injured area, the nerve cells have been damaged to such an extent that they no longer receive usable information from the brain, so the stimulation needs to be delivered beneath that,” explains Dr. Peter Detemple, head of department at the Fraunhofer Institute for Chemical Technology’s Mainz branch (IMM) and NEUWalk project coordinator. To do this, Detemple and his team are developing flexible, wafer-thin microelectrodes that are implanted within the spinal canal on the spinal cord. These multichannel electrode arrays stimulate the nerve pathways with electric impulses that are generated by the accompanying microprocessor-controlled neurostimulator. “The various electrodes of the array are located around the nerve roots responsible for locomotion. By delivering a series of pulses, we can trigger those nerve roots in the correct order to provoke motion sequences and support the motor function,” says Detemple.

Researchers from the consortium have already successfully conducted tests on rats in which the spinal cord had not been completely severed. As well as stimulating the spinal cord, the rats were given a combination of medicine and rehabilitation training. Afterwards the animals were able not only to walk but also to run, climb stairs and surmount obstacles. “We were able to trigger specific movements by delivering certain sequences of pulses to the various electrodes implanted on the spinal cord,” says Detemple. The research scientist and his team believe that the same approach could help people to walk again, too. “We hope that we will be able to transfer the results of our animal testing to people. Of course, people who have suffered injuries to their spinal cord will still be limited when it comes to sport or walking long distances. The first priority is to give them a certain level of independence so that they can move around their apartment and look after themselves, for instance, or walk for short distances without requiring assistance,” says Detemple.

Researchers from the NEUWalk project intend to try out their system on two patients this summer. In this case, the patients are not completely paraplegic, which means there is still some limited communication between the brain and the legs. The scientists are currently working on tailored implants for the intervention. “However, even if both trials are a success, it will still be a few years before the system is ready for the general market. First, the method has to undergo clinical studies and demonstrate its effectiveness among a wider group of patients,” says Detemple.

Patients with Parkinson’s disease could also benefit from the neural prostheses. The most well-known symptoms of the disease are trembling, extreme muscle tremors and a short, [emphasis mine] stooped gait that has a profound effect on patients’ mobility. Until now this neurodegenerative disorder has mostly been treated with dopamine agonists – drugs that chemically imitate the effects of dopamine but that often lead to severe side effects when taken over a longer period of time. Once the disease has reached an advanced stage, doctors often turn to deep brain stimulation. This involves a complex operation to implant electrodes in specific parts of the brain so that the nerve cells in the region can be stimulated or suppressed as required. In the NEUWalk project, researchers are working on electric spinal cord stimulation – an altogether less dangerous intervention that should however ease the symptoms of Parkinson’s disease just as effectively. “Initial animal testing has yielded some very promising results,” says Detemple.

(For anyone interested in the NEUWalk project, you can find more here.) Note the reference to Parkinson's in the context of work designed for people with paraplegia. Brain research and prosthetics (specifically neuroprosthetics or neural prosthetics) are interconnected. As for the nanotechnology connection, in its role as an enabling technology it has provided some of the tools that make these efforts possible. It has also made some of the work in neuromorphic engineering (attempts to create an artificial brain that mimics the human brain) possible. It is a given that research on the human brain will inform efforts in neuromorphic engineering and that attempts will be made to create prostheses for the brain (cyborg brain) and other enhancements.

One final comment: I'm not so sure that transferring approaches and techniques developed to gain public acceptance of nanotechnology is necessarily going to be effective. (Harthorn seemed to be suggesting in her presentation to the Presidential Commission for the Study of Bioethical Issues that these 'nano' approaches could be adopted. Other researchers [Caulfield with the genome and Racine with previous neuroscience efforts] also suggested their experience could be transferred. While some of that is likely true, it should be noted that some self-interest may be involved as brain research is likely to be a fresh source of funding for social science researchers with experience in nanotechnology and genomics who may be finding their usual funding sources less generous than previously.)

The likelihood there will be a substantive public panic over brain research is higher than it ever was for a nanotechnology panic (I am speaking with the benefit of hindsight re: nano panics). Everyone understands the word 'brain'; far fewer understand the word 'nanotechnology', which means that the level of interest is lower and people are less likely to get disturbed by an obscure technology. (The GMO panic gained serious traction with the 'Frankenfood' branding and when it fused rather unexpectedly with another research story, stem cell research. In the UK, one can also add the panic over 'mad cow' disease or Creutzfeldt-Jakob disease (CJD), as it's also known, to the mix. It was the GMO and other assorted panics which provided the impetus for much of the public engagement funding for nanotechnology.)

All one has to do in this instance is start discussions about changing someone’s brain and cyborgs and these researchers may find they have a much more volatile situation on their hands. As well, everyone (the general public and civil society groups/activists, not just the social science and science researchers) involved in the nanotechnology public engagement exercises has learned from the experience. In the meantime, pop culture concerns itself with zombies and we all know what they like to eat.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part two: BRAIN and ethics in the US with some Canucks (not the hockey team) participating (May 19, 2014)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains *from the Georgia (US) Institute of Technology

While I didn't mention neuromorphic engineering in my April 16, 2014 posting which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie 'possible' (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster's intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”
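Putting the quoted figures side by side shows why the analog route looks attractive: an all-digital machine approximating human cognition is estimated at a hundred thousand watts or more, while the claimed improvement of up to 10,000 times would bring that down to the same order as the brain's roughly 20 W. A quick sketch of that comparison, using only the numbers quoted above,

```python
# Comparison built only from the figures quoted in the news item above.
brain_power_w       = 20.0       # human brain, quoted
digital_estimate_w  = 100_000.0  # "a hundred thousand watts of electricity or more" for a digital equivalent
claimed_improvement = 10_000.0   # configurable analog-digital system, per Hasler

analog_estimate_w = digital_estimate_w / claimed_improvement
print(f"digital estimate: {digital_estimate_w:>9.0f} W")
print(f"analog estimate:  {analog_estimate_w:>9.0f} W")   # ~10 W, same order as the brain
print(f"brain:            {brain_power_w:>9.0f} W")
```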

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler's roadmap and it provides a good and readable overview (even for an amateur like me; Note: you do need some tolerance for 'not knowing') of the state of neuromorphic engineering's problems, and suggestions for overcoming them. Here's a description of a human brain and its power requirements as compared to a computer's (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
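The arithmetic in that first paragraph is worth unpacking since it sets the efficiency target the rest of the roadmap works toward. The sketch below simply re-derives the quoted per-neuron power and overall efficiency from the stated figures (20 W total, 10^12 neurons, 100,000 PMAC for the whole structure); nothing here goes beyond the quoted numbers,

```python
# Re-deriving the per-neuron power and computational efficiency quoted from the roadmap.
total_power_w = 20.0          # whole-brain average power
n_neurons     = 1e12          # number of neurons assumed in the passage
total_pmac    = 100_000.0     # PMAC needed to duplicate the neural structure (1 PMAC = 1e15 MAC/s)

per_neuron_w = total_power_w / n_neurons
efficiency_pmac_per_w = total_pmac / total_power_w
efficiency_tmac_per_uw = efficiency_pmac_per_w * 1e3 / 1e6    # PMAC/W -> TMAC/uW

print(f"per-neuron power:   {per_neuron_w * 1e12:.0f} pW")            # 20 pW
print(f"efficiency:         {efficiency_pmac_per_w:.0f} PMAC/W")      # 5000 PMAC/W
print(f"                  = {efficiency_tmac_per_uw:.0f} TMAC/uW")    # 5 TMAC/uW
```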

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering, the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan's work is relatively neutral in tone and that the memristor does not figure substantively in Hasler's roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they're also leaving themselves some wiggle room because the truth is no one knows if copying a human brain with circuits and various devices will lead to 'thinking' as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focussing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing it up to date April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system  (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrates a new single element nanoscale device, based on the successfully commercialized phase change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single element electronic synapse with the capability of both the modulation of the time constant and the realization of the different synaptic plasticity forms while consuming picojoule level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings or alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer, (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic gate based computing within the framework of Turing, our decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.
2) We do not need to write any software, the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously, the system holds an astronomically large number of ‘if’ arguments and its associative ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode, not conventional antenna-receiver model, since fractal based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological study to develop our own Human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who's curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that's because it's focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in the context of a presentation in Amsterdam, Netherlands.

Chaos, brains, and ferroelectrics: “We started to see things that should have been completely impossible …”

Given my interest in neuromorphic (mimicking the human brain) engineering, this work at the US Oak Ridge National Laboratory was guaranteed to catch my attention. From the Nov. 18, 2013 news item on Nanowerk,

Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.

Ferroelectric materials are known for their ability to spontaneously switch polarization when an electric field is applied. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.

“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, …

The Nov. 18, 2013 Oak Ridge National Laboratory news release, which originated the news item, provides more details,

“All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard.  At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.”  [said Ievlev]

After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away — a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.

“Chaotic behavior is generally realized in time, not in space,” he said. ”An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”

Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.

“Memcomputing is basically how the human brain operates: [emphasis mine] Neurons and their connections–synapses–can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”

Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.
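To make that interplay a little more concrete, here is a minimal toy sketch in Python (my own illustration, not the ORNL team’s model) of writing domains one by one along a line, where an existing domain suppresses a new one at short range but encourages it slightly farther away; the radii, weights, and threshold are invented purely for illustration,

# Toy illustration (not the ORNL model): a domain only forms if the summed
# influence of its neighbours is not suppressive. Nearby neighbours suppress,
# slightly more distant ones facilitate, as the news release describes.

def interaction(d, suppress_radius=1.0, facilitate_radius=2.0):
    """Bias on domain formation from a neighbour at distance d (arbitrary units)."""
    if d < suppress_radius:
        return -1.0   # a nearby domain suppresses switching
    elif d < facilitate_radius:
        return 0.5    # slightly farther away, it helps
    return 0.0        # too far away to matter

def write_domains(spacing, n_sites=20, threshold=0.0):
    """Attempt to write a row of domains with a given tip spacing."""
    written = []
    for i in range(n_sites):
        x = i * spacing
        bias = sum(interaction(abs(x - xw)) for xw in written)
        if bias >= threshold:          # the domain only forms if not suppressed
            written.append(x)
    return ''.join('1' if i * spacing in written else '0' for i in range(n_sites))

for spacing in (2.5, 1.5, 0.6):
    print(f"spacing {spacing}: {write_domains(spacing)}")
# Wide spacing writes every domain; dense spacing produces the alternating,
# checkerboard-like pattern described above.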

The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.

“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.

For anyone who’s interested in exploring this particular approach to mimicking the human brain, here’s a citation for and a link to the researchers’ paper,

Intermittency, quasiperiodicity and chaos in probe-induced ferroelectric domain switching by A. V. Ievlev, S. Jesse, A. N. Morozovska, E. Strelcov, E. A. Eliseev, Y. V. Pershin, A. Kumar, V. Ya. Shur, & S. V. Kalinin. Nature Physics (2013) doi:10.1038/nphys2796 Published online 17 November 2013

This paper is behind a paywall although it is possible to preview it for free via ReadCube Access.

How to use a memristor to create an artificial brain

Dr. Andy Thomas of Bielefeld University’s (Germany) Faculty of Physics has developed a ‘blueprint’ for an artificial brain based on memristors. From the Feb. 26, 2013, news item on phys.org,

Scientists have long been dreaming about building a computer that would work like a brain. This is because a brain is far more energy-saving than a computer, it can learn by itself, and it doesn’t need any programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. Thomas and his colleagues proved that they could do this a year ago. They constructed a memristor that is capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will be presenting his results at the beginning of March in the print edition of the Journal of Physics D: Applied Physics.

The Feb. 26, 2013 University of Bielefeld news release, which originated the news item, describes why memristors are the foundation for Thomas’s proposed artificial brain,

Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.

Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.

Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and research findings from biology and physics, his article is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve like) computer is to function. Such principles are that memristors, just like synapses, have to ‘note’ earlier impulses, and that neurons react to an impulse only when it passes a certain threshold.

‘… a memristor can store information more precisely than the bits on which previous computer processors have been based,’ says Thomas. Both a memristor and a bit work with electrical impulses. However, a bit does not allow any fine adjustment – it can only work with ‘on’ and ‘off’. In contrast, a memristor can raise or lower its resistance continuously. ‘This is how memristors deliver a basis for the gradual learning and forgetting of an artificial brain,’ explains Thomas.
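As a rough illustration of that contrast between a binary bit and a continuously adjustable memristor, here is a toy model in Python (my own sketch, not taken from Thomas’s paper) in which the conductance rises in proportion to the strength and duration of each applied pulse and relaxes slowly when the device is left alone, a crude analogue of gradual learning and forgetting; all constants are made up,

# Toy memristor-as-synapse (constants invented for illustration): the
# conductance changes continuously with the history of applied pulses,
# unlike a bit, which can only be on or off.

class ToyMemristor:
    def __init__(self, g_min=0.0, g_max=1.0, decay=0.01):
        self.g = g_min                       # conductance acts as an analog weight
        self.g_min, self.g_max = g_min, g_max
        self.decay = decay                   # slow relaxation models "forgetting"

    def pulse(self, voltage, duration):
        """Strengthen in proportion to pulse strength and length (clipped to a range)."""
        self.g = max(self.g_min, min(self.g_max, self.g + 0.1 * voltage * duration))

    def rest(self, steps=1):
        """Forgetting: conductance relaxes back toward its minimum over time."""
        for _ in range(steps):
            self.g -= self.decay * (self.g - self.g_min)

m = ToyMemristor()
for _ in range(5):
    m.pulse(voltage=1.0, duration=0.5)       # repeated use strengthens the "synapse"
print(f"after training: g = {m.g:.2f}")
m.rest(steps=50)                             # left alone, it gradually forgets
print(f"after resting:  g = {m.g:.2f}")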

A nanocomponent that is capable of learning: The Bielefeld memristor built into a chip here is 600 times thinner than a human hair. [downloaded from http://ekvv.uni-bielefeld.de/blog/uninews/entry/blueprint_for_an_artificial_brain]

Here’s a citation for and link to the paper (from the university news release),

Andy Thomas, ‘Memristor-based neural networks’, Journal of Physics D: Applied Physics, http://dx.doi.org/10.1088/0022-3727/46/9/093001, released online on 5 February 2013, published in print on 6 March 2013.

This paper is freely available until March 5, 2013, as IOP Science (publisher of the Journal of Physics D: Applied Physics) makes its papers freely available (with some provisos) for the first 30 days after online publication. From the Access Options page for Memristor-based neural networks,

As a service to the community, IOP is pleased to make papers in its journals freely available for 30 days from date of online publication – but only fair use of the content is permitted.

Under fair use, IOP content may only be used by individuals for the sole purpose of their own private study or research. Such individuals may access, download, store, search and print hard copies of the text. Copying should be limited to making single printed or electronic copies.

Other use is not considered fair use. In particular, use by persons other than for the purpose of their own private study or research is not fair use. Nor is altering, recompiling, reselling, systematic or programmatic copying, redistributing or republishing. Regular/systematic downloading of content or the downloading of a substantial proportion of the content is not fair use either.

Getting back to the memristor, I’ve been writing about it for some years; it was most recently mentioned here in a Feb. 7, 2013 posting, and in a Dec. 24, 2012 posting I mentioned nanoionic nanodevices, which have also been described as resembling synapses.

Synaptic electronics

There’s been a lot on this blog about the memristor, which is being developed at HP Labs, at the University of Michigan, and elsewhere, and significantly less about other approaches to creating nanodevices with neuromorphic properties, such as this work by researchers in Japan and the US. The Dec. 20, 2012 news item on ScienceDaily notes,

Researchers in Japan and the US propose a nanoionic device with a range of neuromorphic and electrical multifunctions that may allow the fabrication of on-demand configurable circuits, analog memories and digital-neural fused networks in one device architecture.

… Now Rui Yang, Kazuya Terabe and colleagues at the National Institute for Materials Science in Japan and the University of California, Los Angeles, in the US have developed two- and three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions.

The originating Dec. 20, 2012 news release from Japan’s International Center for Materials Nanoarchitectonics draws a parallel between the device’s properties and neural behaviour, explains the ‘why’ of the process, and mentions what applications the researchers believe could be developed,

The researchers draw similarities between the device properties — volatile and non-volatile states and the current fading process following positive voltage pulses — with models for neural behaviour —that is, short- and long-term memory and forgetting processes. They explain the behaviour as the result of oxygen ions migrating within the device in response to the voltage sweeps. Accumulation of the oxygen ions at the electrode leads to Schottky-like potential barriers and the resulting changes in resistance and rectifying characteristics. The stable bipolar switching behaviour at the Pt/WO3-x interface is attributed to the formation of the electric conductive filament and oxygen absorbability of the Pt electrode.

As the researchers conclude, “These capabilities open a new avenue for circuits, analog memories, and artificially fused digital neural networks using on-demand programming by input pulse polarity, magnitude, and repetition history.”
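For readers who find the short-term/long-term memory analogy easier to follow in code, here is a cartoon sketch in Python (my own illustration, not a model of oxygen-ion migration in the actual WO3-x devices), in which isolated pulses produce only a volatile, quickly fading change while a rapid burst of pulses converts part of that change into a persistent, non-volatile component,

# Cartoon of volatile (short-term) versus non-volatile (long-term) memory:
# single pulses fade away, but a burst of pulses is partly retained.
# All numbers are invented for illustration.

def run(pulse_times, t_end=100, tau=5.0, transfer_threshold=0.5):
    volatile, nonvolatile = 0.0, 0.0
    for t in range(t_end):
        if t in pulse_times:
            volatile += 0.3                  # each pulse adds a volatile change
        if volatile > transfer_threshold:    # enough accumulation becomes permanent
            nonvolatile += 0.1
        volatile *= (1 - 1 / tau)            # the volatile part decays (forgetting)
    return volatile + nonvolatile            # total conductance change at the end

print(f"isolated pulses, final change: {run({10, 40, 70}):.2f}")        # mostly forgotten
print(f"burst of pulses, final change: {run(set(range(10, 20))):.2f}")  # retained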

For those who wish to delve more deeply, here’s the citation (from the ScienceDaily news item),

Rui Yang, Kazuya Terabe, Guangqiang Liu, Tohru Tsuruoka, Tsuyoshi Hasegawa, James K. Gimzewski, Masakazu Aono. On-Demand Nanodevice with Electrical and Neuromorphic Multifunction Realized by Local Ion Migration. ACS Nano, 2012; 6 (11): 9515 DOI: 10.1021/nn302510e

The news release does not state explicitly why this would be considered an on-demand device. The article is behind a paywall.

There was a recent attempt to mimic brain processing not based in nanoelectronics but on mimicking brain activity by creating virtual neurons. A Canadian team at the University of Waterloo led by Chris Eliasmith created a sensation with SPAUN (Semantic Pointer Architecture Unified Network) in late Nov. 2012 (mentioned in my Nov. 29, 2012 posting).

University of Waterloo researchers use 2.5M (virtual) neurons to simulate a brain

I hinted at some related work at the University of Waterloo earlier this week in my Nov. 26, 2012 posting (Existential risk) about a proposed centre at the University of Cambridge which would be tasked with examining possible risks associated with ‘ultra intelligent machines’. Today, Science (magazine) published an article about SPAUN (Semantic Pointer Architecture Unified Network) [behind a paywall] and its ability to solve simple arithmetic and perform other tasks as well.

Ed Yong, writing for Nature magazine (Simulated brain scores top test marks, Nov. 29, 2012), offers this description,

Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5, in legible if messy writing.

This is an unremarkable feat for a human, but Spaun is actually a simulated brain. It contains 2.5 million virtual neurons — many fewer than the 86 billion in the average human head, but enough to recognize lists of numbers, do simple arithmetic and solve reasoning problems.

Here’s a video demonstration, from the University of Waterloo’s Nengo Neural Simulator home page,

The University of Waterloo’s Nov. 29, 2012 news release offers more technical detail,

… The model captures biological details of each neuron, including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate. Spaun uses this network of neurons to process visual images in order to control an arm that draws Spaun’s answers to perceptual, cognitive and motor tasks. …

“This is the first model that begins to get at how our brains can perform a wide variety of tasks in a flexible manner—how the brain coordinates the flow of information between different areas to exhibit complex behaviour,” said Professor Chris Eliasmith, Director of the Centre for Theoretical Neuroscience at Waterloo. He is Canada Research Chair in Theoretical Neuroscience, and professor in Waterloo’s Department of Philosophy and Department of Systems Design Engineering.

Unlike other large brain models, Spaun can perform several tasks. Researchers can show patterns of digits and letters to the model’s eye, which it then processes, causing it to write its responses to any of eight tasks. And, just like the human brain, it can shift from task to task, recognizing an object one moment and memorizing a list of numbers the next. [emphasis mine] Because of its biological underpinnings, Spaun can also be used to understand how changes to the brain affect changes to behaviour.

“In related work, we have shown how the loss of neurons with aging leads to decreased performance on cognitive tests,” said Eliasmith. “More generally, we can test our hypotheses about how the brain works, resulting in a better understanding of the effects of drugs or damage to the brain.”

In addition, the model provides new insights into the sorts of algorithms that might be useful for improving machine intelligence. [emphasis mine] For instance, it suggests new methods for controlling the flow of information through a large system attempting to solve challenging cognitive tasks.
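For anyone wanting a feel for the kind of simulated spiking unit a model like Spaun is assembled from, here is a minimal leaky integrate-and-fire neuron written in plain Python; it is a generic textbook neuron with made-up parameters, not code from Spaun or the Nengo simulator,

# A generic leaky integrate-and-fire (LIF) neuron, the textbook spiking unit;
# parameters are illustrative and unrelated to Spaun's actual configuration.

def lif_spike_times(input_current, dt=0.001, tau_rc=0.02, tau_ref=0.002,
                    v_threshold=1.0):
    """Simulate one LIF neuron; return the times (in seconds) at which it spikes."""
    v, refractory, spikes = 0.0, 0.0, []
    for step, current in enumerate(input_current):
        if refractory > 0:                   # neuron stays silent right after a spike
            refractory -= dt
            continue
        v += dt * (current - v) / tau_rc     # leaky integration of the input current
        if v >= v_threshold:                 # crossing threshold emits a spike
            spikes.append(step * dt)
            v = 0.0
            refractory = tau_ref
    return spikes

weak = lif_spike_times([1.1] * 1000)         # input just above threshold: few spikes
strong = lif_spike_times([3.0] * 1000)       # stronger drive: many more spikes
print(f"weak input:   {len(weak)} spikes in 1 s")
print(f"strong input: {len(strong)} spikes in 1 s")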

Laura Sanders’ Nov. 29, 2012 article for Science News suggests that there is some controversy as to whether or not SPAUN does resemble a human brain,

… Henry Markram, who leads a different project to reconstruct the human brain called the Blue Brain, questions whether Spaun really captures human brain behavior. Because Spaun’s design ignores some important neural properties, it’s unlikely to reveal anything about the brain’s mechanics, says Markram, of the Swiss Federal Institute of Technology in Lausanne. “It is not a brain model.”

Personally, I have a little difficulty seeing lines of code as ever being able to truly simulate brain activity. I think the notion of moving to something simpler (using fewer neurons as the Eliasmith team does) is a move in the right direction but I’m still more interested in devices such as the memristor and the electrochemical atomic switch and their potential.

Blue Brain Project

Memristor and artificial synapses in my April 19, 2012 posting

Atomic or electrochemical atomic switches and neuromorphic engineering briefly mentioned (scroll 1/2 way down) in my Oct. 17, 2011 posting.

ETA Dec. 19, 2012: There was an AMA (ask me anything) session on Reddit with the SPAUN team in early December; if you’re interested, you can still access the questions and answers,

We are the computational neuroscientists behind the world’s largest functional brain model