Tag Archives: artificial brains

Brainy and brainier: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only choose one of them to be updated at each step based on the neuronal activity.”
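To make that scheme a little more concrete, here is a minimal software sketch of my own (not the team’s code, and with invented device counts, noise levels and conductance ranges): a synapse is represented by several noisy memristive ‘devices’, its effective weight is the sum of their conductances, and each update is written to only one device, chosen by a simple rotating counter.

```python
import numpy as np

class MultiMemristiveSynapse:
    """Toy model of one synapse built from several noisy memristive devices.

    The effective synaptic weight is the sum of the device conductances;
    each weight update is written to a single device chosen by a rotating
    counter. All numbers here are invented for illustration.
    """

    def __init__(self, n_devices=4, g_min=0.0, g_max=1.0, noise=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.g = np.full(n_devices, g_min)   # conductance of each device
        self.g_min, self.g_max = g_min, g_max
        self.noise = noise                   # programming is non-deterministic
        self.counter = 0                     # picks which device gets programmed

    @property
    def weight(self):
        return float(self.g.sum())

    def update(self, delta):
        """Apply a weight change to ONE device only, with programming noise."""
        i = self.counter % len(self.g)
        self.counter += 1
        actual = delta + self.rng.normal(0.0, self.noise)
        self.g[i] = np.clip(self.g[i] + actual, self.g_min, self.g_max)


syn = MultiMemristiveSynapse()
for _ in range(10):
    syn.update(+0.1)                         # ten potentiation events
print(f"effective weight after 10 updates: {syn.weight:.2f}")
```

Spreading the updates across several imperfect devices is what lets the ensemble behave like one well-behaved analog weight even though each individual device is noisy.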

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also they’ve got a couple of very nice introductory paragraphs which I’m including here, (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games1. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms2,3,4,5. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history6,7,8,9. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.
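One piece worth unpacking from the excerpt above is the ‘compute in place’ idea: if synaptic weights are stored as conductances in a crossbar array, applying input voltages to the rows and reading the currents on the columns performs a matrix-vector multiplication right where the weights sit (Ohm’s law does the multiplications, Kirchhoff’s current law does the sums). A rough numerical sketch, with made-up values:

```python
import numpy as np

# Synaptic weights stored as a conductance matrix G (siemens), one device
# per row/column crossing of the crossbar. Values here are invented.
G = np.array([[1.0e-6, 0.2e-6, 0.8e-6],
              [0.5e-6, 1.5e-6, 0.1e-6]])

# Input activations encoded as read voltages applied to the rows (volts).
v_in = np.array([0.2, 0.1, 0.3])

# Each column current is the sum of (conductance x voltage) contributions:
# i_j = sum_k G[j, k] * v_in[k] -- a matrix-vector product computed where
# the weights are stored, with no separate memory fetch.
i_out = G @ v_in
print(i_out)   # column currents, proportional to the neuron inputs
```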

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach,” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry, Note: Links have been removed,

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. doi: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.
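For anyone wondering what ‘running the same model on NEST and on SpiNNaker’ looks like in practice: both systems can be driven through the simulator-independent PyNN interface, so that, at least in principle, switching the import switches the backend. Here is a minimal, hedged sketch of that workflow; it is a toy network with invented population sizes and parameters, not the full-scale cortical microcircuit from the paper, and it assumes the relevant PyNN backends are installed.

```python
# Swap this import for "import pyNN.spiNNaker as sim" to target SpiNNaker
# hardware instead of the NEST software simulator (assuming the backend
# is installed); the model description below stays the same.
import pyNN.nest as sim

sim.setup(timestep=0.1)  # simulation resolution in ms

# Two small populations of integrate-and-fire neurons plus a Poisson drive.
excitatory = sim.Population(80, sim.IF_curr_exp(), label="excitatory")
inhibitory = sim.Population(20, sim.IF_curr_exp(), label="inhibitory")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=15.0), label="noise")

# Static synapses with arbitrary weights (nA for current-based neurons).
sim.Projection(noise, excitatory, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.5, delay=1.0))
sim.Projection(excitatory, inhibitory, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.3, delay=1.5))

excitatory.record("spikes")
sim.run(1000.0)            # simulate one second of biological time

spike_data = excitatory.get_data("spikes")
sim.end()
```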

Japanese art of flower arranging (Ikebana) leads to brain organoids

Testing brain cells in a petri dish doesn’t necessarily tell you what’s going on in a 3D brain according to Christian Naus, a professor in the department of cellular and physiological sciences at the University of British Columbia (UBC; Canada). A Dec. 5, 2016 UBC news release received via email (also available on EurekAlert) elaborates on Naus’ work,

The ancient Japanese art of flower arranging was the inspiration for a groundbreaking technique to create tiny “artificial brains” that could be used to develop personalized cancer treatments.

The organoids, clusters of thousands of human brain cells, cannot perform a brain’s basic functions, much less generate thought. But they provide a far more authentic model – the first of its kind – for studying how brain tumours grow, and how they can be stopped.

“This puts the tumour within the context of a brain, instead of a flat plastic dish,” said Christian Naus, a professor in the department of cellular and physiological sciences, who conceived the project with a Japanese company that specializes in bioprinting. He shared details about the technique at November’s annual Society for Neuroscience conference in San Diego. “When cells grow in three dimensions instead of two, adhering only to each other and not to plastic, an entirely different set of genes are activated.”

Naus studies glioblastoma, a particularly aggressive brain cancer that usually takes root deep inside the brain, and easily spreads. The standard care is surgery, followed by radiation and/or chemotherapy, but gliomas almost always return because a few malignant cells manage to leave the tumour and invade surrounding brain tissue. From the time of diagnosis, average survival is one year.

The idea for creating a more authentic model of glioblastoma originated when Naus partnered with a Japanese biotechnology company, Cyfuse, that has developed a particular technique for printing human tissues based on the Japanese art of flower arranging known as ikebana. In ikebana, artists use a heavy plate with brass needles sticking up, upon which the stems of flowers are affixed. Cyfuse’s bioprinting technique uses a much smaller plate covered with microneedles.

Working with Naus and research associate Wun Chey Sin, Kaori Harada of Cyfuse skewered small spheres of human neural stem cells on the microneedles. As the stem cells multiplied and differentiated into brain tissue, they merged and formed larger structures known as organoids, about two millimetres to three millimetres in diameter. Although the organoids lack blood vessels, they are small enough to allow oxygen and nutrients to permeate the tissue.

“The cells make their own environment,” said Naus, Canada Research Chair in Gap Junctions and Neurological Disorders. “We’re not doing anything except printing them, and then they self-assemble.”

The team then implanted cancerous glioma cells inside the organoids. Naus found that the gliomas spread into the surrounding normal cells.

Having shown that the tumour invades the surrounding tissue, Naus envisions that such a technique can be used with a patient’s own cells – both their normal brain cells and their cancerous cells – to grow a personalized organoid with a glioma at its core, and then test a variety of possible drugs or combinations of treatment to see if any of them stop the cancer from growing and invading.

“With this method, we can easily and authentically replicate a model of the patient’s brain, or at least some of the conditions under which a tumour grows in that brain,” said Naus. “Then we could feasibly test hundreds of different chemical combinations on that patient’s cells to identify a drug combination that shows the most promising result, offering a personalized therapy for brain cancer patients.”

Presumably this technique would be useful for other organoids (liver, kidney, etc.).

You can find the Cyfuse website here.

Blue Brain Project builds a digital piece of brain

Caption: This is a photo of a virtual brain slice. Credit: Markram et al./Cell 2015


Here’s more *about this virtual brain slice* from an Oct. 8, 2015 Cell (magazine) news release on EurekAlert,

If you want to learn how something works, one strategy is to take it apart and put it back together again [also known as reverse engineering]. For 10 years, a global initiative called the Blue Brain Project–hosted at the Ecole Polytechnique Federale de Lausanne (EPFL)–has been attempting to do this digitally with a section of juvenile rat brain. The project presents a first draft of this reconstruction, which contains over 31,000 neurons, 55 layers of cells, and 207 different neuron subtypes, on October 8 [2015] in Cell.

Heroic efforts are currently being made to define all the different types of neurons in the brain, to measure their electrical firing properties, and to map out the circuits that connect them to one another. These painstaking efforts are giving us a glimpse into the building blocks and logic of brain wiring. However, getting a full, high-resolution picture of all the features and activity of the neurons within a brain region and the circuit-level behaviors of these neurons is a major challenge.

Henry Markram and colleagues have taken an engineering approach to this question by digitally reconstructing a slice of the neocortex, an area of the brain that has benefitted from extensive characterization. Using this wealth of data, they built a virtual brain slice representing the different neuron types present in this region and the key features controlling their firing and, most notably, modeling their connectivity, including nearly 40 million synapses and 2,000 connections between each brain cell type.

“The reconstruction required an enormous number of experiments,” says Markram, of the EPFL. “It paves the way for predicting the location, numbers, and even the amount of ion currents flowing through all 40 million synapses.”

Once the reconstruction was complete, the investigators used powerful supercomputers to simulate the behavior of neurons under different conditions. Remarkably, the researchers found that, by slightly adjusting just one parameter, the level of calcium ions, they could produce broader patterns of circuit-level activity that could not be predicted based on features of the individual neurons. For instance, slow synchronous waves of neuronal activity, which have been observed in the brain during sleep, were triggered in their simulations, suggesting that neural circuits may be able to switch into different “states” that could underlie important behaviors.

“An analogy would be a computer processor that can reconfigure to focus on certain tasks,” Markram says. “The experiments suggest the existence of a spectrum of states, so this raises new types of questions, such as ‘what if you’re stuck in the wrong state?'” For instance, Markram suggests that the findings may open up new avenues for explaining how initiating the fight-or-flight response through the adrenocorticotropic hormone yields tunnel vision and aggression.

The Blue Brain Project researchers plan to continue exploring the state-dependent computational theory while improving the model they’ve built. All of the results to date are now freely available to the scientific community at https://bbp.epfl.ch/nmc-portal.

An Oct. 8, 2015 Hebrew University of Jerusalem press release on the Canadian Friends of the Hebrew University of Jerusalem website provides more detail,

Published by the renowned journal Cell, the paper is the result of a massive effort by 82 scientists and engineers at EPFL and at institutions in Israel, Spain, Hungary, USA, China, Sweden, and the UK. It represents the culmination of 20 years of biological experimentation that generated the core dataset, and 10 years of computational science work that developed the algorithms and built the software ecosystem required to digitally reconstruct and simulate the tissue.

The Hebrew University of Jerusalem’s Prof. Idan Segev, a senior author of the research paper, said: “With the Blue Brain Project, we are creating a digital reconstruction of the brain and using supercomputer simulations of its electrical behavior to reveal a variety of brain states. This allows us to examine brain phenomena within a purely digital environment and conduct experiments previously only possible using biological tissue. The insights we gather from these experiments will help us to understand normal and abnormal brain states, and in the future may have the potential to help us develop new avenues for treating brain disorders.”

Segev, a member of the Hebrew University’s Edmond and Lily Safra Center for Brain Sciences and director of the university’s Department of Neurobiology, sees the paper as building on the pioneering work of the Spanish anatomist Ramon y Cajal from more than 100 years ago: “Ramon y Cajal began drawing every type of neuron in the brain by hand. He even drew in arrows to describe how he thought the information was flowing from one neuron to the next. Today, we are doing what Cajal would be doing with the tools of the day: building a digital representation of the neurons and synapses, and simulating the flow of information between neurons on supercomputers. Furthermore, the digitization of the tissue is open to the community and allows the data and the models to be preserved and reused for future generations.”

While a long way from digitizing the whole brain, the study demonstrates that it is feasible to digitally reconstruct and simulate brain tissue, and most importantly, to reveal novel insights into the brain’s functioning. Simulating the emergent electrical behavior of this virtual tissue on supercomputers reproduced a range of previous observations made in experiments on the brain, validating its biological accuracy and providing new insights into the functioning of the neocortex. This is a first step and a significant contribution to Europe’s Human Brain Project, which Henry Markram founded, and where EPFL is the coordinating partner.

Cell has made a video abstract available (it can be found with the Hebrew University of Jerusalem press release)

Here’s a link to and a citation for the paper,

Reconstruction and Simulation of Neocortical Microcircuitry by Henry Markram, Eilif Muller, Srikanth Ramaswamy, Michael W. Reimann, Marwan Abdellah, Carlos Aguado Sanchez, Anastasia Ailamaki, Lidia Alonso-Nanclares, Nicolas Antille, Selim Arsever, Guy Antoine Atenekeng Kahou, Thomas K. Berger, Ahmet Bilgili, Nenad Buncic, Athanassia Chalimourda, Giuseppe Chindemi, Jean-Denis Courcol, Fabien Delalondre, Vincent Delattre, Shaul Druckmann, Raphael Dumusc, James Dynes, Stefan Eilemann, Eyal Gal, Michael Emiel Gevaert, Jean-Pierre Ghobril, Albert Gidon, Joe W. Graham, Anirudh Gupta, Valentin Haenel, Etay Hay, Thomas Heinis, Juan B. Hernando, Michael Hines, Lida Kanari, Daniel Keller, John Kenyon, Georges Khazen, Yihwa Kim, James G. King, Zoltan Kisvarday, Pramod Kumbhar, Sébastien Lasserre, Jean-Vincent Le Bé, Bruno R.C. Magalhães, Angel Merchán-Pérez, Julie Meystre, Benjamin Roy Morrice, Jeffrey Muller, Alberto Muñoz-Céspedes, Shruti Muralidhar, Keerthan Muthurasa, Daniel Nachbaur, Taylor H. Newton, Max Nolte, Aleksandr Ovcharenko, Juan Palacios, Luis Pastor, Rodrigo Perin, Rajnish Ranjan, Imad Riachi, José-Rodrigo Rodríguez, Juan Luis Riquelme, Christian Rössert, Konstantinos Sfyrakis, Ying Shi, Julian C. Shillcock, Gilad Silberberg, Ricardo Silva, Farhan Tauheed, Martin Telefont, Maria Toledo-Rodriguez, Thomas Tränkler, Werner Van Geit, Jafet Villafranca Díaz, Richard Walker, Yun Wang, Stefano M. Zaninetta, Javier DeFelipe, Sean L. Hill, Idan Segev, Felix Schürmann. Cell, Volume 163, Issue 2, p456–492, 8 October 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.09.029

This paper appears to be open access.

My most substantive description of the Blue Brain Project, previous to this, was in a Jan. 29, 2013 posting featuring the European Union’s (EU) Human Brain project and involvement from countries that are not members.

* I edited a redundant lede (That’s a virtual slice of a rat brain.), moved the second sentence to the lede while adding this:  *about this virtual brain slice* on Oct. 16, 2015 at 0955 hours PST.

Memristor, memristor, you are popular

Regular readers know I have a long-standing interest in memristors and artificial brains. I have three memristor-related pieces of research, published in the last month or so, for this post.

First, there’s some research into nano memory at RMIT University, Australia, and the University of California at Santa Barbara (UC Santa Barbara). From a May 12, 2015 news item on ScienceDaily,

RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell.

Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information.

The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain — which could help unlock successful treatments for common neurological conditions such as Alzheimer’s and Parkinson’s diseases.

A May 11, 2015 RMIT University news release, which originated the news item, reveals more about the researchers’ excitement and about the research,

“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Dr Sharath said.

“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences, and up until now this functionality has not been able to be adequately reproduced with digital technology.”

The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.

The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film – 10,000 times thinner than a human hair.

Dr Hussein Nili, lead author of the study, said: “This new discovery is significant as it allows the multi-state cell to store and process information in the very same way that the brain does.

“Think of an old camera which could only take pictures in black and white. The same analogy applies here, rather than just black and white memories we now have memories in full color with shade, light and texture, it is a major step.”

While these new devices are able to store much more information than conventional digital memories (which store just 0s and 1s), it is their brain-like ability to remember and retain previous information that is exciting.

“We have now introduced controlled faults or defects in the oxide material along with the addition of metallic atoms, which unleashes the full potential of the ‘memristive’ effect – where the memory element’s behaviour is dependent on its past experiences,” Dr Nili said.

Nano-scale memories are precursors to the storage components of the complex artificial intelligence network needed to develop a bionic brain.

Dr Nili said the research had myriad practical applications including the potential for scientists to replicate the human brain outside of the body.

“If you could replicate a brain outside the body, it would minimise ethical issues involved in treating and experimenting on the brain which can lead to better understanding of neurological conditions,” Dr Nili said.

The research, supported by the Australian Research Council, was conducted in collaboration with the University of California Santa Barbara.

Here’s a link to and a citation for this memristive nano device,

Donor-Induced Performance Tuning of Amorphous SrTiO3 Memristive Nanodevices: Multistate Resistive Switching and Mechanical Tunability by  Hussein Nili, Sumeet Walia, Ahmad Esmaielzadeh Kandjani, Rajesh Ramanathan, Philipp Gutruf, Taimur Ahmed, Sivacarendran Balendhran, Vipul Bansal, Dmitri B. Strukov, Omid Kavehei, Madhu Bhaskaran, and Sharath Sriram. Advanced Functional Materials DOI: 10.1002/adfm.201501019 Article first published online: 14 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

The second published piece of memristor-related research comes from a UC Santa Barbara and Stony Brook University (New York state) team but is being publicized by UC Santa Barbara. From a May 11, 2015 news item on Nanowerk (Note: A link has been removed),

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit (Nature, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors”). For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

A May 11, 2015 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, situates this development within the ‘artificial brain’ effort while describing it in more detail (Note: A link has been removed),

“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10^15 (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish certain functions in a fraction of a second what computers would require far more time and energy to perform.

… As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it’s likely you would still be able to read this and derive the same meaning.

In the researchers’ demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the simple images.

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

“And, as more solutions to the technological challenges are proposed the technology will be able to make it to the market sooner,” she said.

Key to this technology is the memristor (a combination of “memory” and “resistor”), an electronic component whose resistance changes depending on the direction of the flow of the electrical charge. Unlike conventional transistors, which rely on the drift and diffusion of electrons and their holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate neural electrical signals.

“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov. The ionic memory mechanism brings several advantages over purely electron-based memories, which makes it very attractive for artificial neural network implementation, he added.

“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”

This is where analog memory trumps digital memory: In order to create the same human brain-type functionality with conventional technology, the resulting device would have to be enormous — loaded with multitudes of transistors that would require far more energy.

“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation.”

To be able to approach functionality of the human brain, however, many more memristors would be required to build more complex neural networks to do the same kinds of things we can do with barely any effort and energy, such as identify different versions of the same thing or infer the presence or identity of an object not based on the object itself but on other things in a scene.

Potential applications already exist for this emerging technology, such as medical imaging, the improvement of navigation systems or even for searches based on images rather than on text. The energy-efficient compact circuitry the researchers are striving to create would also go a long way toward creating the kind of high-performance computers and memory storage devices users will continue to seek long after the proliferation of digital transistors predicted by Moore’s Law becomes too unwieldy for conventional electronics.
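To get a feel for what a network of roughly 100 synapses can do, here is a small software analogue of my own devising (not the researchers’ code): a single-layer perceptron that learns to separate noisy 3×3-pixel versions of ‘z’, ‘v’ and ‘n’. In the actual experiment the weights were memristor conductances adjusted in situ on the chip; here they are simply numbers in an array, and the letter templates, noise model and training rule are all simplified for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized 3x3 pixel templates for the letters z, v and n (1 = dark pixel).
letters = {
    "z": [1, 1, 1,
          0, 1, 0,
          1, 1, 1],
    "v": [1, 0, 1,
          1, 0, 1,
          0, 1, 0],
    "n": [1, 1, 1,
          1, 0, 1,
          1, 0, 1],
}
classes = list(letters)
X = np.array([letters[c] for c in classes], dtype=float) * 2 - 1  # map to +/-1

def noisy(pattern, flips=1):
    """Return a copy of a pattern with a few randomly chosen pixels flipped."""
    p = pattern.copy()
    idx = rng.choice(len(p), size=flips, replace=False)
    p[idx] *= -1
    return p

# One weight per (9 input pixels + bias) x 3 output classes = 30 "synapses".
W = rng.normal(0.0, 0.1, size=(3, 10))

def forward(x):
    return W @ np.append(x, 1.0)            # bias input fixed at 1

# Delta-rule training on noisy examples (the hardware used its own in-situ
# update scheme; this is just the textbook software version).
for _ in range(2000):
    k = rng.integers(3)
    x = noisy(X[k])
    target = -np.ones(3)
    target[k] = 1.0
    out = np.tanh(forward(x))
    W += 0.05 * np.outer(target - out, np.append(x, 1.0))

# Test on freshly noised letters.
correct = sum(int(np.argmax(forward(noisy(X[k]))) == k)
              for k in range(3) for _ in range(100))
print(f"accuracy on noisy letters: {correct / 300:.0%}")
```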

Here’s a link to and a citation for the paper,

Training and operation of an integrated neuromorphic network based on metal-oxide memristors by M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev & D. B. Strukov. Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441

This paper is behind a paywall but a free preview is available through ReadCube Access.

The third and last piece of research, which is from Rice University, hasn’t received any publicity yet, unusual given Rice’s very active communications/media department. Here’s a link to and a citation for their memristor paper,

2D materials: Memristor goes two-dimensional by Jiangtan Yuan & Jun Lou. Nature Nanotechnology 10, 389–390 (2015) doi:10.1038/nnano.2015.94 Published online 07 May 2015

This paper is behind a paywall but a free preview is available through ReadCube Access.

Dexter Johnson has written up the RMIT research (his May 14, 2015 post on the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website). He linked it to research from Mark Hersam’s team at Northwestern University (my April 10, 2015 posting) on creating a three-terminal memristor enabling its use in complex electronics systems. Dexter strongly hints in his headline that these developments could lead to bionic brains.

For those who’d like more memristor information, this June 26, 2014 posting which brings together some developments at the University of Michigan and information about developments in the industrial sector is my suggestion for a starting point. Also, you may want to check out my material on HP Labs, especially prominent in the story due to the company’s 2008 ‘discovery’ of the memristor, described on a page in my Nanotech Mysteries wiki, and the controversy triggered by the company’s terminology (there’s more about the controversy in my April 7, 2010 interview with Forrest H Bennett III).

Brain-like computing with optical fibres

Researchers from Singapore and the United Kingdom are exploring an optical fibre approach to brain-like computing (aka neuromorphic computing) as opposed to approaches featuring a memristor or other devices such as a nanoionic device that I’ve written about previously. A March 10, 2015 news item on Nanowerk describes this new approach,

Computers that function like the human brain could soon become a reality thanks to new research using optical fibres made of speciality glass.

Researchers from the Optoelectronics Research Centre (ORC) at the University of Southampton, UK, and Centre for Disruptive Photonic Technologies (CDPT) at the Nanyang Technological University (NTU), Singapore, have demonstrated how neural networks and synapses in the brain can be reproduced, with optical pulses as information carriers, using special fibres made from glasses that are sensitive to light, known as chalcogenides.

“The project, funded under Singapore’s Agency for Science, Technology and Research (A*STAR) Advanced Optics in Engineering programme, was conducted within The Photonics Institute (TPI), a recently established dual institute between NTU and the ORC.”

A March 10, 2015 University of Southampton press release (also on EurekAlert), which originated the news item, describes the nature of the problem that the scientists are trying address (Note: A link has been removed),

Co-author Professor Dan Hewak from the ORC, says: “Since the dawn of the computer age, scientists have sought ways to mimic the behaviour of the human brain, replacing neurons and our nervous system with electronic switches and memory. Now instead of electrons, light and optical fibres also show promise in achieving a brain-like computer. The cognitive functionality of central neurons underlies the adaptable nature and information processing capability of our brains.”

In the last decade, neuromorphic computing research has advanced software and electronic hardware that mimic brain functions and signal protocols, aimed at improving the efficiency and adaptability of conventional computers.

However, compared to our biological systems, today’s computers are more than a million times less efficient. Simulating five seconds of brain activity takes 500 seconds and needs 1.4 MW of power, compared to the small number of calories burned by the human brain.

Using conventional fibre drawing techniques, microfibers can be produced from chalcogenide (glasses based on sulphur) that possess a variety of broadband photoinduced effects, which allow the fibres to be switched on and off. This optical switching, or light switching light, can be exploited for a variety of next generation computing applications capable of processing vast amounts of data in a much more energy-efficient manner.

Co-author Dr Behrad Gholipour explains: “By going back to biological systems for inspiration and using mass-manufacturable photonic platforms, such as chalcogenide fibres, we can start to improve the speed and efficiency of conventional computing architectures, while introducing adaptability and learning into the next generation of devices.”

By exploiting the material properties of the chalcogenides fibres, the team led by Professor Cesare Soci at NTU have demonstrated a range of optical equivalents of brain functions. These include holding a neural resting state and simulating the changes in electrical activity in a nerve cell as it is stimulated. In the proposed optical version of this brain function, the changing properties of the glass act as the varying electrical activity in a nerve cell, and light provides the stimulus to change these properties. This enables switching of a light signal, which is the equivalent to a nerve cell firing.

The research paves the way for scalable brain-like computing systems that enable ‘photonic neurons’ with ultrafast signal transmission speeds, higher bandwidth and lower power consumption than their biological and electronic counterparts.

Professor Cesare Soci said: “This work implies that ‘cognitive’ photonic devices and networks can be effectively used to develop non-Boolean computing and decision-making paradigms that mimic brain functionalities and signal protocols, to overcome bandwidth and power bottlenecks of traditional data processing.”
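For reference, the electrical behaviour being mimicked optically here is essentially the textbook integrate-and-fire picture of a neuron: a membrane potential that sits at rest, charges up under stimulation, and ‘fires’ once it crosses a threshold. A minimal leaky integrate-and-fire sketch, with generic parameter values and nothing specific to the photonic devices:

```python
import numpy as np

# Textbook leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, integrates an input current, and emits a spike (then resets)
# when it crosses threshold. Parameter values are generic illustrations.
TAU_M = 20.0       # membrane time constant (ms)
V_REST = -65.0     # resting potential (mV)
V_THRESH = -50.0   # spike threshold (mV)
V_RESET = -70.0    # reset potential after a spike (mV)
R_M = 10.0         # membrane resistance (megohms)

dt = 0.1                                            # ms
t = np.arange(0.0, 200.0, dt)
i_in = np.where((t > 50) & (t < 150), 2.0, 0.0)     # 2 nA current step

v = V_REST
spike_times = []
for k, tk in enumerate(t):
    v += (-(v - V_REST) + R_M * i_in[k]) / TAU_M * dt
    if v >= V_THRESH:            # threshold crossing: the cell "fires"
        spike_times.append(tk)
        v = V_RESET

print(f"{len(spike_times)} spikes during the 100 ms stimulus")
```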

Here’s a link to and a citation for the paper,

Amorphous Metal-Sulphide Microfibers Enable Photonic Synapses for Brain-Like Computing by Behrad Gholipour, Paul Bastock, Chris Craig, Khouler Khan, Dan Hewak, and Cesare Soci. Advanced Optical Materials DOI: 10.1002/adom.201400472 Article first published online: 15 JAN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

For anyone interested in memristors and nanoionic devices, here are a few posts (from this blog) to get you started:

Memristors, memcapacitors, and meminductors for faster computers (June 30, 2014)

This second one offers more details and links to previous pieces,

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories (June 25, 2014)

This post is more of a survey including memristors, nanoionic devices, ‘brain jelly, and more,

Brain-on-a-chip 2014 survey/overview (April 7, 2014)

One comment: this brain-on-a-chip is not to be confused with ‘organs-on-a-chip’ projects, which are attempting to simulate human organs (including the brain) so chemicals and drugs can be tested.

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focussing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing it up to date, April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system  (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrated a new single element nanoscale device, based on the successfully commercialized phase change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single element electronic synapse with the capability of both the modulation of the time constant and the realization of the different synaptic plasticity forms while consuming picojoule level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).
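The plasticity rule such synaptic devices are most often asked to emulate is spike-timing-dependent plasticity (STDP): the synapse strengthens when the presynaptic spike arrives shortly before the postsynaptic one and weakens in the opposite order, with an exponential dependence on the timing difference. A minimal sketch of the textbook pair-based rule (amplitudes and time constants are illustrative, not tied to any particular device):

```python
import numpy as np

# Pair-based STDP: weight change as a function of spike-timing difference
# dt = t_post - t_pre. Amplitudes and time constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(dt_ms):
    """Weight update for a single pre/post spike pair."""
    if dt_ms > 0:    # pre fires before post -> potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:            # post fires before pre -> depression
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.4f}")
```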

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).
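For the curious, the behaviour Berger describes is often introduced with the linear ionic-drift model from HP Labs’ 2008 memristor paper: the memristance is a mix of a low and a high resistance, weighted by an internal state variable that moves in proportion to the current that has flowed through the device. A rough numerical sketch follows; the parameter values are generic textbook ones, not those of any specific device.

```python
import numpy as np

# Linear ionic-drift memristor model in the style of Strukov et al. (2008).
R_ON, R_OFF = 100.0, 16e3      # ohms: fully doped / fully undoped resistance
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)

dt = 1e-4                      # time step (s)
t = np.arange(0.0, 1.0, dt)
v = 1.0 * np.sin(2 * np.pi * 1.0 * t)   # slow 1 Hz sinusoidal drive

w = 0.1 * D                    # state variable: width of the doped region
R_trace = np.zeros_like(t)
for k, vk in enumerate(v):
    x = w / D
    R_trace[k] = R_ON * x + R_OFF * (1.0 - x)   # memristance for this state
    i = vk / R_trace[k]
    # The state drifts in proportion to the current, so the resistance
    # depends on the history of charge that has flowed through the device.
    w = np.clip(w + MU_V * (R_ON / D) * i * dt, 0.0, D)

print(f"memristance ranged from {R_trace.min():.0f} to {R_trace.max():.0f} ohms")
```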

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings or alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer, (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic gate based computing within the framework of Turing, our decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.
2) We do not need to write any software, the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously, the system holds an astronomically large number of ‘if’ arguments and its associative ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode, not conventional antenna-receiver model, since fractal based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological study to develop our own Human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in the context of a presentation in Amsterdam, Netherlands.

Whose Electric Brain? the video

After a few fits and starts, the video of my March 15, 2012 presentation to the Canadian Academy of Independent Scholars at Simon Fraser University has been uploaded to Vimeo. Unfortunately the original recording was fuzzy (camera issues) so we (camera operator, director, and editor, Sama Shodjai [samashodjai@gmail.com]) and I rerecorded the presentation and this second version is the one we’ve uploaded.

Whose Electric Brain? (Presentation) from Maryse de la Giroday on Vimeo.

I’ve come across a few errors; at one point, I refer to Buckminster Fuller as Buckminster Fullerene, and I state that the opening image visualizes a neuron from someone with Parkinson’s disease when I should have said Huntington’s disease. Perhaps you’ll come across more; please do let me know. If this should become a viral sensation (no doubt feeding a pent-up demand for grey-haired women talking about memristors and brains), it’s important that corrections be added.

Finally, a big thank you to Mark Dwor who provides my introduction at the beginning, the Canadian Academy of Independent Scholars whose grant made the video possible, and Simon Fraser University.

ETA March 29, 2012: This is an updated version of the presentation I was hoping to give at ISEA (International Symposium on Electronic Arts) 2011 in Istanbul. Sadly, I was never able to raise all of the funds I needed for that venture. The funds I raised separately from the CAIS grant are being held until I can find another suitable opportunity to present my work.

Nanocellulose as scaffolding for nerve cells

Swedish scientists have announced success with growing nerve cells using nanocellulose as the scaffolding. From the March 19, 2012 news item on Nanowerk,

Researchers from Chalmers and the University of Gothenburg have shown that nanocellulose stimulates the formation of neural networks. This is the first step toward creating a three-dimensional model of the brain. Such a model could elevate brain research to totally new levels, with regard to Alzheimer’s disease and Parkinson’s disease, for example.

“This has been a great challenge,” says Paul Gatenholm, Professor of Biopolymer Technology at Chalmers. “Until recently the cells were dying after a while, since we weren’t able to get them to adhere to the scaffold. But after many experiments we discovered a method to get them to attach to the scaffold by making it more positively charged. Now we have a stable method for cultivating nerve cells on nanocellulose.”

When the nerve cells finally attached to the scaffold they began to develop and generate contacts with one another, so-called synapses. A neural network of hundreds of cells was produced. The researchers can now use electrical impulses and chemical signal substances to generate nerve impulses that spread through the network in much the same way as they do in the brain. They can also study how nerve cells react with other molecules, such as pharmaceuticals.

I found the original March 19, 2012 press release and an image on the Chalmers University of Technology website,

Nerve cells growing on a three-dimensional nanocellulose scaffold. One of the applications the research group would like to study is destruction of synapses between nerve cells, which is one of the earliest signs of Alzheimer’s disease. Synapses are the connections between nerve cells. In the image, the functioning synapses are yellow and the red spots show where synapses have been destroyed. Illustration: Philip Krantz, Chalmers

This latest research from Gatenholm and his team will be presented at the American Chemical Society annual meeting in San Diego, March 25, 2012.

The research team from Chalmers University and its partners are working on other applications for nanocellulose including one for artificial ears. From the Chalmers University Jan. 22, 2012 press release,

As the first group in the world, researchers from Chalmers will build up body parts using nanocellulose and the body’s own cells. Funding will be from the European network for nanomedicine, EuroNanoMed.

Professor Paul Gatenholm at Chalmers is leading and co-ordinating this European research programme, which will construct an outer ear using nanocellulose and a mixture of the patient’s own cartilage cells and stem cells.

Previously, Paul Gatenholm and his colleagues succeeded, in close co-operation with Sahlgrenska University Hospital, in developing artificial blood vessels using nanocellulose, where small bacteria “spin” the cellulose.

In the new programme , the researchers will build up a three-dimensional nanocellulose network that is an exact copy of the patient’s healthy outer ear and construct an exact mirror image of the ear. It will have sufficient mechanical stability for it to be used as a bioreactor, which means that the patient’s own cartilage and stem cells can be cultivated directly inside the body or on the patient, in this case on the head. [Presumably the patient has one ear that is healthy and the researchers are attempting to repair or replace an unhealthy ear on the other side of the head.]

As for the Swedish perspective on nanocellulose (from the 2010 press release),

Cellulose-based material is of strategic significance to Sweden and materials science is one of Chalmers eight areas of advance. Biopolymers are highly interesting as they are renewable and could be of major significance in the development of future materials.

Further research into using the forest as a resource for new materials is continuing at Chalmers within the new research programme that is being built up with different research groups at Chalmers and Swerea – IVF. The programme is part of the Wallenberg Wood Science Center, which is being run jointly by the Royal Institute of Technology in Stockholm and Chalmers under the leadership of Professor Lars Berglund at the Royal Institute of Technology.

The 2012 press release announcing the work on nerve cells had this about nanocellulose,

Nanocellulose is a material that consists of nanosized cellulose fibers. Typical dimensions are widths of 5 to 20 nanometers and lengths of up to 2,000 nanometers. Nanocellulose can be produced by bacteria that spin a close-meshed structure of cellulose fibers. It can also be isolated from wood pulp through processing in a high-pressure homogenizer.

I last wrote about the Swedes and nanocellulose in a Feb. 15, 2012 posting about recovering it (nanocellulose) from wood-based sludge.

As for anyone interested in the Canadian scene, there is an article by David Manly in the Jan.-Feb. 2012 issue of Canadian Biomass Magazine that focuses largely on economic impacts and value-added products as they pertain to nanocellulose production in Canada. You can also search this blog as I have covered the nanocellulose story in Canada and elsewhere as extensively as I can.