While some call memristors a fourth fundamental component alongside resistors, capacitors, and inductors (as mentioned in my June 26, 2014 posting which featured an update of sorts on memristors [scroll down about 80% of the way]), others view memristors as members of an emerging periodic table of circuit elements (as per my April 7, 2010 posting).
It seems scientist Fabio Traversa and his colleagues fall into the ‘periodic table of circuit elements’ camp. From Traversa’s June 27, 2014 posting on nanotechweb.org,
Memristors, memcapacitors and meminductors may retain information even without a power source. Several applications of these devices have already been proposed, yet arguably one of the most appealing is ‘memcomputing’ – a brain-inspired computing paradigm utilizing the ability of emergent nanoscale devices to store and process information on the same physical platform.
A multidisciplinary team of researchers from the Autonomous University of Barcelona in Spain, the University of California San Diego and the University of South Carolina in the US, and the Polytechnic of Turin in Italy, suggest a realization of “memcomputing” based on nanoscale memcapacitors. They propose and analyse a major advancement in using memcapacitive systems (capacitors with memory), as central elements for Very Large Scale Integration (VLSI) circuits capable of storing and processing information on the same physical platform. They name this architecture Dynamic Computing Random Access Memory (DCRAM).
Using the standard configuration of a Dynamic Random Access Memory (DRAM) where the capacitors have been substituted with solid-state based memcapacitive systems, they show the possibility of performing WRITE, READ and polymorphic logic operations by only applying modulated voltage pulses to the memory cells. Being based on memcapacitors, the DCRAM expends very little energy per operation. It is a realistic memcomputing machine that overcomes the von Neumann bottleneck and clearly exhibits intrinsic parallelism and functional polymorphism.
My May 19, 2014 post, Brain research, ethics, and nanotechnology (part one of five), kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’, which brings together a number of developments in the worlds of neuroscience, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills, which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’.
Now for the summary. Brain research is huge in terms of its social and economic impact across the globe, ranging from research meant to divulge more about how the brain operates in hopes of healing conditions such as Parkinson’s and Alzheimer’s diseases, to utilizing public engagement exercises (first developed for nanotechnology) for public education and acceptance of brain research, to the development of prostheses for the nervous system, such as the Walk Again robotic suit for individuals with paraplegia (and, I expect, quadriplegia [aka tetraplegia] in the future).
Until now, I have not included information about neuromorphic engineering (creating computers with the processing capabilities of human brains). My May 16, 2014 posting (Wacky oxide, biological synchronicity, and human brainlike computing) features one of the latest developments, along with this paragraph providing links to overview materials of the field,
As noted earlier, there are other approaches to creating an artificial brain, i.e., neuromorphic engineering. My April 7, 2014 posting is the most recent synopsis posted here; it includes excerpts from a Nanowerk Spotlight article overview along with a mention of the ‘brain jelly’ approach and a discussion of my somewhat extensive coverage of memristors and a mention of work on nanoionic devices. There is also a published roadmap to neuromorphic engineering featuring both analog and digital devices, mentioned in my April 18, 2014 posting.
While researchers such as Miguel Nicolelis work on exoskeletons (externally worn robotic suits) controlled by the wearer’s thoughts, giving individuals with paraplegia the ability to walk, researchers from one of Germany’s Fraunhofer Institutes have revealed a different technology for achieving the same ends. From a May 16, 2014 news item on Nanowerk,
People with severe injuries to their spinal cord currently have no prospect of recovery and remain confined to their wheelchairs. Now, all that could change with a new treatment that stimulates the spinal cord using electric impulses. The hope is that the technique will help paraplegic patients learn to walk again. From June 3 – 5 [2014], Fraunhofer researchers will be at the Sensor + Test measurement fair in Nürnberg to showcase the implantable microelectrode sensors they have developed in the course of pre-clinical development work (Hall 12, Booth 12-537).
Now a consortium of European research institutions and companies wants to get affected patients quite literally back on their feet. In the EU’s [European Union's] NEUWalk project, which has been awarded funding of some nine million euros, researchers are working on a new method of treatment designed to restore motor function in patients who have suffered severe injuries to their spinal cord. The technique relies on electrically stimulating the nerve pathways in the spinal cord. “In the injured area, the nerve cells have been damaged to such an extent that they no longer receive usable information from the brain, so the stimulation needs to be delivered beneath that,” explains Dr. Peter Detemple, head of department at the Fraunhofer Institute for Chemical Technology’s Mainz branch (IMM) and NEUWalk project coordinator. To do this, Detemple and his team are developing flexible, wafer-thin microelectrodes that are implanted within the spinal canal on the spinal cord. These multichannel electrode arrays stimulate the nerve pathways with electric impulses that are generated by the accompanying microprocessor-controlled neurostimulator. “The various electrodes of the array are located around the nerve roots responsible for locomotion. By delivering a series of pulses, we can trigger those nerve roots in the correct order to provoke sequences of movements and support motor function,” says Detemple.
Researchers from the consortium have already successfully conducted tests on rats in which the spinal cord had not been completely severed. As well as stimulating the spinal cord, the rats were given a combination of medicine and rehabilitation training. Afterwards the animals were able not only to walk but also to run, climb stairs and surmount obstacles. “We were able to trigger specific movements by delivering certain sequences of pulses to the various electrodes implanted on the spinal cord,” says Detemple. The research scientist and his team believe that the same approach could help people to walk again, too. “We hope that we will be able to transfer the results of our animal testing to people. Of course, people who have suffered injuries to their spinal cord will still be limited when it comes to sport or walking long distances. The first priority is to give them a certain level of independence so that they can move around their apartment and look after themselves, for instance, or walk for short distances without requiring assistance,” says Detemple.
Researchers from the NEUWalk project intend to try out their system on two patients this summer. In this case, the patients are not completely paraplegic, which means there is still some limited communication between the brain and the legs. The scientists are currently working on tailored implants for the intervention. “However, even if both trials are a success, it will still be a few years before the system is ready for the general market. First, the method has to undergo clinical studies and demonstrate its effectiveness among a wider group of patients,” says Detemple.
Patients with Parkinson’s disease could also benefit from the neural prostheses. The most well-known symptoms of the disease are trembling, extreme muscle tremors and a short, [emphasis mine] stooped gait that has a profound effect on patients’ mobility. Until now this neurodegenerative disorder has mostly been treated with dopamine agonists – drugs that chemically imitate the effects of dopamine but that often lead to severe side effects when taken over a longer period of time. Once the disease has reached an advanced stage, doctors often turn to deep brain stimulation. This involves a complex operation to implant electrodes in specific parts of the brain so that the nerve cells in the region can be stimulated or suppressed as required. In the NEUWalk project, researchers are working on electric spinal cord stimulation – an altogether less dangerous intervention that should, however, ease the symptoms of Parkinson’s disease just as effectively. “Initial animal testing has yielded some very promising results,” says Detemple.
(For anyone interested in the NEUWalk project, you can find more here.) Note the reference to Parkinson’s in the context of work designed for people with paraplegia. Brain research and prosthetics (specifically neuroprosthetics or neural prosthetics) are interconnected. As for the nanotechnology connection, in its role as an enabling technology it has provided some of the tools that make these efforts possible. It has also made some of the work in neuromorphic engineering (attempts to create an artificial brain that mimics the human brain) possible. It is a given that research on the human brain will inform efforts in neuromorphic engineering and that attempts will be made to create prostheses for the brain (cyborg brain) and other enhancements.
One final comment: I’m not so sure that transferring approaches and techniques developed to gain public acceptance of nanotechnology is necessarily going to be effective. (Harthorn seemed to be suggesting in her presentation to the Presidential Commission for the Study of Bioethical Issues that these ‘nano’ approaches could be adopted. Other researchers [Caulfield with the genome and Racine with previous neuroscience efforts] also suggested their experience could be transferred. While some of that is likely true, it should be noted that some self-interest may be involved, as brain research is likely to be a fresh source of funding for social science researchers with experience in nanotechnology and genomics who may be finding their usual funding sources less generous than previously.)
The likelihood there will be a substantive public panic over brain research is higher than it ever was for a nanotechnology panic (I am speaking with the benefit of hindsight re: nano panics). Everyone understands the word ‘brain’; far fewer understand the word ‘nanotechnology’, which means that the level of interest is lower and people are less likely to get disturbed by an obscure technology. (The GMO panic gained serious traction with the ‘Frankenfood’ branding and when it fused rather unexpectedly with another research story, stem cell research. In the UK, one can also add the panic over ‘mad cow’ disease, or Creutzfeldt-Jakob disease [CJD] as it’s also known, to the mix. It was the GMO and other assorted panics which provided the impetus for much of the public engagement funding for nanotechnology.)
All one has to do in this instance is start discussions about changing someone’s brain and cyborgs and these researchers may find they have a much more volatile situation on their hands. As well, everyone (the general public and civil society groups/activists, not just the social science and science researchers) involved in the nanotechnology public engagement exercises has learned from the experience. In the meantime, pop culture concerns itself with zombies and we all know what they like to eat.
Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series
While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspects of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,
Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they can but if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.
In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computers or artificial intelligence more humanlike is called neuromorphic engineering and, according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,
In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.
A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.
The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.
“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”
An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,
Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.
Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.
In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.
Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”
Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.
For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.
“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”
Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.
Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”
Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”
Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”
This is an open access article (at least, the HTML version is).
I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),
One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.
Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
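The arithmetic in the roadmap excerpt above can be checked with a quick back-of-envelope script; this is just a sketch, and every constant in it comes from the quoted passage (20 W total, 10^12 neurons, 100,000 PMAC to duplicate the neural structure):

```python
# Back-of-envelope check of the roadmap's brain power-efficiency figures.
TOTAL_POWER_W = 20.0      # average power of the human brain (quoted)
NUM_NEURONS = 1e12        # neuron count (quoted)
TOTAL_PMAC = 100_000      # 1 PMAC = 1e15 MAC/s; total compute (quoted)

# Per-neuron power budget: 20 W spread over 1e12 neurons.
power_per_neuron_w = TOTAL_POWER_W / NUM_NEURONS
print(f"per-neuron power: {power_per_neuron_w * 1e12:.0f} pW")   # 20 pW

# Computational efficiency: total compute divided by total power.
efficiency_pmac_per_w = TOTAL_PMAC / TOTAL_POWER_W
print(f"efficiency: {efficiency_pmac_per_w:.0f} PMAC/W")         # 5000 PMAC/W

# Unit conversion: 5000 PMAC/W = 5e18 MAC/s/W = 5e12 MAC/s/uW = 5 TMAC/uW.
tmac_per_uw = efficiency_pmac_per_w * 1e15 / 1e12 * 1e-6
print(f"equivalently: {tmac_per_uw:.0f} TMAC/uW")                # 5 TMAC/uW
```

Running the numbers confirms the paper’s 20 pW per neuron and 5000 PMAC/W (5 TMAC/μW) figures are internally consistent.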
One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),
The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering; the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).
I notice that the reference to the University of Michigan is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.
Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),
Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.
In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.
The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.
One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.
We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.
We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.
In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
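As a quick sanity check on the stacking figures quoted above (10–300 chips covering 3 × 10^7 to 10^9 neurons), the implied neuron count per die is roughly constant at about three million; a sketch, using only the numbers from the excerpt:

```python
# Implied neurons per die, from the chip counts and neuron totals
# quoted in the roadmap excerpt above.
chips_low, neurons_low = 10, 3e7      # smaller multi-chip system
chips_high, neurons_high = 300, 1e9   # larger stacked system

per_die_low = neurons_low / chips_low     # 3.0 million neurons per die
per_die_high = neurons_high / chips_high  # ~3.3 million neurons per die
print(f"{per_die_low:.2e} to {per_die_high:.2e} neurons per die")
```

Both ends of the quoted range work out to roughly the same per-die figure, which suggests the authors are scaling a fixed die design rather than varying it with system size.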
I have a casual observation to make: while the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room, because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.
For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.
One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?
Michael Berger has written another of his Nanowerk Spotlight articles, this one focusing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing it up to date, April 2014 style.
It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),
Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.
The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.
Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.
Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.
Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].
Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),
Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.
Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,
Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).
One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.
In 2011, a team at Stanford University demonstrated a new single-element nanoscale device, based on the successfully commercialized phase-change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single-element electronic synapse capable of both modulating the time constant and realizing the different synaptic plasticity forms while consuming picojoule-level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).
Berger does mention memristors but not in any great detail in this article,
Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).
One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).
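For readers who like to see the idea in code, the charge-controlled behaviour described above can be sketched with the linear ion drift model from the original HP Labs memristor paper (Strukov et al., 2008). This is a minimal illustrative simulation with made-up parameter values; it is not code from any of the projects Berger discusses.

```python
# Minimal simulation of the linear ion drift memristor model
# (Strukov et al., 2008). Parameter values are illustrative only.

R_ON, R_OFF = 100.0, 16000.0   # ohms: fully doped / undoped resistance
D = 10e-9                      # m: device thickness
MU_V = 1e-14                   # m^2 s^-1 V^-1: ion mobility (illustrative)

def simulate(voltage, dt=1e-6, w0=0.5):
    """Return the memristance trace for a voltage waveform.

    w is the normalized doped-region width (0..1); it integrates the
    charge that has flowed through the device, which is what gives the
    memristor its 'memory'.
    """
    w = w0
    trace = []
    for v in voltage:
        R = R_ON * w + R_OFF * (1.0 - w)   # series combination
        i = v / R
        # linear ion drift: dw/dt is proportional to the current
        w += MU_V * R_ON / D**2 * i * dt
        w = min(max(w, 0.0), 1.0)          # hard boundaries
        trace.append(R)
    return trace

# A positive pulse lowers the resistance; the new value persists at v = 0.
rs = simulate([1.0] * 2000 + [0.0] * 2000)
assert rs[1999] < rs[0]        # resistance modulated by charge flow...
assert rs[-1] == rs[2000]      # ...and retained once the bias is removed
```

The second assertion is the “memory” in memristor: with the bias removed, no current flows, the internal state variable freezes, and the programmed resistance remains stored.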
Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,
A completely different – and revolutionary – human brain model has been designed by researchers in Japan, who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate-based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decisions but rather a projection of frequency fractal operations in a real space; it is an engineering perspective on Gödel’s incompleteness theorem.
In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.
The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.
In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.
“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic-gate-based computing within the framework of Turing; our decision-making protocol is not a logical reduction of decisions but rather a projection of frequency fractal operations in a real space. It is an engineering perspective on Gödel’s incompleteness theorem.
2) We do not need to write any software. The arguments and the basic phase transitions for decision-making – the ‘if-then’ arguments and the transformation of one set of arguments into another – self-assemble and expand spontaneously; the system holds an astronomically large number of ‘if’ arguments and their associated ‘then’ situations.
3) We use ‘spontaneous reply back’ via wireless communication, using a unique resonance band coupling mode rather than the conventional antenna-receiver model; since fractal-based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single-DNA, single-protein-molecule, and single-brain-microtubule neurophysiological studies to develop our own human brain model.
I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.
As for the ‘brain jelly’ paper, here’s a link to and a citation for it,
As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).
Given my interest in neuromorphic (mimicking the human brain) engineering, this work at the US Oak Ridge National Laboratory was guaranteed to catch my attention. From the Nov. 18, 2013 news item on Nanowerk,
Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.
Ferroelectric materials are known for their ability to spontaneously switch polarization when an electric field is applied. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.
“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, …
“All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard. At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.” [said Ievlev]
After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away — a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.
“Chaotic behavior is generally realized in time, not in space,” he said. “An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”
Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.
“Memcomputing is basically how the human brain operates: [emphasis mine] Neurons and their connections–synapses–can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”
Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.
The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.
“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.
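To make the ‘suppress nearby, facilitate farther away’ idea concrete, here is a deliberately crude one-dimensional sketch (my own invention, not the ORNL model) in which an attempted domain write simply fails whenever an existing domain sits too close. Writing a dense row of domains then produces exactly the alternating, checkerboard-like pattern Ievlev describes.

```python
# Toy 1-D illustration of spatially interacting domain writes.
# The exclusion rule is invented for illustration; the longer-range
# facilitation effect is omitted to keep the sketch short.

def try_write(attempts, exclusion=1.0):
    """Attempt to write a domain at each position in turn; a write
    fails whenever an already-formed domain sits closer than the
    exclusion distance."""
    formed = []
    for x in attempts:
        if all(abs(x - y) >= exclusion for y in formed):
            formed.append(x)
    return formed

# Attempt a dense row of writes at 0.5 spacing: every second write is
# suppressed, leaving an alternating pattern.
dense = [i * 0.5 for i in range(10)]
pattern = try_write(dense)
print(pattern)   # [0.0, 1.0, 2.0, 3.0, 4.0]
```

The point of the sketch is only that whether a domain forms depends on the domains around it, which is exactly the surprise the ORNL team reported.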
For anyone who’s interested in exploring this particular approach to mimicking the human brain, here’s a citation for and a link to the researchers’ paper,
Dr. Andy Thomas of Bielefeld University’s (Germany) Faculty of Physics has developed a ‘blueprint’ for an artificial brain based on memristors. From the Feb. 26, 2013, news item on phys.org,
Scientists have long been dreaming about building a computer that would work like a brain. This is because a brain is far more energy-saving than a computer, it can learn by itself, and it doesn’t need any programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. Thomas and his colleagues proved that they could do this a year ago. They constructed a memristor that is capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will be presenting his results at the beginning of March in the print edition of the Journal of Physics D: Applied Physics.
Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.
Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.
Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and research findings from biology and physics, his article is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve like) computer is to function. Such principles are that memristors, just like synapses, have to ‘note’ earlier impulses, and that neurons react to an impulse only when it passes a certain threshold.
‘… a memristor can store information more precisely than the bits on which previous computer processors have been based,’ says Thomas. Both a memristor and a bit work with electrical impulses. However, a bit does not allow any fine adjustment – it can only work with ‘on’ and ‘off’. In contrast, a memristor can raise or lower its resistance continuously. ‘This is how memristors deliver a basis for the gradual learning and forgetting of an artificial brain,’ explains Thomas.
A nanocomponent that is capable of learning: The Bielefeld memristor built into a chip here is 600 times thinner than a human hair. [downloaded from http://ekvv.uni-bielefeld.de/blog/uninews/entry/blueprint_for_an_artificial_brain]
Here’s a citation for and link to the paper (from the university news release),
This paper is freely available until March 5, 2013, as IOP Science (publisher of the Journal of Physics D: Applied Physics) makes its papers freely available (with some provisos) for the first 30 days after online publication. From the Access Options page for Memristor-based neural networks,
As a service to the community, IOP is pleased to make papers in its journals freely available for 30 days from date of online publication – but only fair use of the content is permitted.
Under fair use, IOP content may only be used by individuals for the sole purpose of their own private study or research. Such individuals may access, download, store, search and print hard copies of the text. Copying should be limited to making single printed or electronic copies.
Other use is not considered fair use. In particular, use by persons other than for the purpose of their own private study or research is not fair use. Nor is altering, recompiling, reselling, systematic or programmatic copying, redistributing or republishing. Regular/systematic downloading of content or the downloading of a substantial proportion of the content is not fair use either.
Getting back to the memristor: I’ve been writing about it for some years. It was most recently mentioned here in a Feb. 7, 2013 posting, and in a Dec. 24, 2012 posting I mentioned nanoionic nanodevices, also described as resembling synapses.
There’s been a lot on this blog about the memristor, which is being developed at HP Labs, at the University of Michigan, and elsewhere, and significantly less about other approaches to creating nanodevices with neuromorphic properties by researchers in Japan and in the US. The Dec. 20, 2012 news item on ScienceDaily notes,
Researchers in Japan and the US propose a nanoionic device with a range of neuromorphic and electrical multifunctions that may allow the fabrication of on-demand configurable circuits, analog memories and digital-neural fused networks in one device architecture.
… Now Rui Yang, Kazuya Terabe and colleagues at the National Institute for Materials Science in Japan and the University of California, Los Angeles, in the US have developed two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions.
The researchers draw similarities between the device properties — volatile and non-volatile states and the current fading process following positive voltage pulses — and models for neural behaviour, that is, short- and long-term memory and forgetting processes. They explain the behaviour as the result of oxygen ions migrating within the device in response to the voltage sweeps. Accumulation of the oxygen ions at the electrode leads to Schottky-like potential barriers and the resulting changes in resistance and rectifying characteristics. The stable bipolar switching behaviour at the Pt/WO3-x interface is attributed to the formation of an electrically conductive filament and the oxygen absorbability of the Pt electrode.
As the researchers conclude, “These capabilities open a new avenue for circuits, analog memories, and artificially fused digital neural networks using on-demand programming by input pulse polarity, magnitude, and repetition history.”
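The short-term/long-term memory analogy can be illustrated with a toy two-component model: a volatile state that decays between pulses, plus a non-volatile state that consolidates only under rapid, repeated stimulation. The dynamics and numbers below are invented for illustration; this is not the device model from the paper.

```python
# Toy short-term/long-term memory sketch (illustrative parameters).

import math

def respond(n_pulses, interval, tau=5.0, threshold=4.0):
    """Lasting conductance change left behind by a train of pulses.

    'volatile' decays between pulses with time constant tau (short-term
    memory); when rapid pulses pile it up past a threshold, part of the
    change is consolidated into 'nonvolatile' (long-term memory).
    """
    volatile, nonvolatile = 0.0, 0.0
    for _ in range(n_pulses):
        volatile = volatile * math.exp(-interval / tau) + 1.0
        if volatile > threshold:
            nonvolatile += 0.1
    # long after the train the volatile part has decayed away,
    # so only the consolidated component remains
    return nonvolatile

# Ten pulses delivered rapidly leave a lasting change; the same ten
# pulses delivered slowly fade without a trace.
assert respond(10, interval=0.5) > 0.0
assert respond(10, interval=20.0) == 0.0
```

This is the qualitative behaviour the news item describes: pulse timing and repetition history, not just pulse count, determine whether the device ‘remembers’ or ‘forgets’.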
For those who wish to delve more deeply, here’s the citation (from the ScienceDaily news item),
The news release does not state explicitly why this would be considered an on-demand device. The article is behind a paywall.
There was a recent attempt to mimic brain processing not based in nanoelectronics but on mimicking brain activity by creating virtual neurons. A Canadian team at the University of Waterloo led by Chris Eliasmith made a sensation with SPAUN (Semantic Pointer Architecture Unified Network) in late Nov. 2012 (mentioned in my Nov. 29, 2012 posting).
I hinted about some related work at the University of Waterloo earlier this week in my Nov. 26, 2012 posting (Existential risk) about a proposed centre at the University of Cambridge which would be tasked with examining possible risks associated with ‘ultra intelligent machines’. Today, Science (magazine) published an article about SPAUN (Semantic Pointer Architecture Unified Network) [behind a paywall] and its ability to solve simple arithmetic and perform other tasks as well.
Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5, in legible if messy writing.
This is an unremarkable feat for a human, but Spaun is actually a simulated brain. It contains 2.5 million virtual neurons — many fewer than the 86 billion in the average human head, but enough to recognize lists of numbers, do simple arithmetic and solve reasoning problems.
… The model captures biological details of each neuron, including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate. Spaun uses this network of neurons to process visual images in order to control an arm that draws Spaun’s answers to perceptual, cognitive and motor tasks. …
“This is the first model that begins to get at how our brains can perform a wide variety of tasks in a flexible manner—how the brain coordinates the flow of information between different areas to exhibit complex behaviour,” said Professor Chris Eliasmith, Director of the Centre for Theoretical Neuroscience at Waterloo. He is Canada Research Chair in Theoretical Neuroscience, and professor in Waterloo’s Department of Philosophy and Department of Systems Design Engineering.
Unlike other large brain models, Spaun can perform several tasks. Researchers can show patterns of digits and letters to the model’s eye, which it then processes, causing it to write its responses to any of eight tasks. And, just like the human brain, it can shift from task to task, recognizing an object one moment and memorizing a list of numbers the next. [emphasis mine] Because of its biological underpinnings, Spaun can also be used to understand how changes to the brain affect changes to behaviour.
“In related work, we have shown how the loss of neurons with aging leads to decreased performance on cognitive tests,” said Eliasmith. “More generally, we can test our hypotheses about how the brain works, resulting in a better understanding of the effects of drugs or damage to the brain.”
In addition, the model provides new insights into the sorts of algorithms that might be useful for improving machine intelligence. [emphasis mine] For instance, it suggests new methods for controlling the flow of information through a large system attempting to solve challenging cognitive tasks.
Laura Sanders’ Nov. 29, 2012 article for ScienceNews suggests that there is some controversy as to whether or not SPAUN does resemble a human brain,
… Henry Markram, who leads a different project to reconstruct the human brain called the Blue Brain, questions whether Spaun really captures human brain behavior. Because Spaun’s design ignores some important neural properties, it’s unlikely to reveal anything about the brain’s mechanics, says Markram, of the Swiss Federal Institute of Technology in Lausanne. “It is not a brain model.”
Personally, I have a little difficulty seeing lines of code as ever being able to truly simulate brain activity. I think the notion of moving to something simpler (using fewer neurons as the Eliasmith team does) is a move in the right direction but I’m still more interested in devices such as the memristor and the electrochemical atomic switch and their potential.
Anne Dijkstra’s presentation (at the 2012 S.NET [Society for the Study of Nanoscience and Emerging Technologies] conference on “Science Cafés and scientific citizens. The Nanotrail project as a case” provided a contrast to the local (Vancouver, Canada) science café scene I wasn’t expecting. The Dutch science cafés Dijkstra described were formal both in tone and organization. She featured five science cafés focussed on discussions of nanotechnology. The most striking image in Dijkstra’s presentation was of someone taking notes at one of the meetings. By contrast, the Vancouver café scientifique get togethers take place in a local bar/pub (The Railway Club) and are organized by members of the local science community. (There are some life science café scientifique Vancouver meetings which may be more formal as they take place at the University of British Columbia.)
I was quite fascinated to hear about the Dutch children’s science cafés that have been organized by the parents featuring presentations by children to their peers. It’s a grassroots effort/community-based initiative.
The next and final presentation set was when I presented my work on ‘Zombies, brains, collapsing boundaries, and entanglements’. (People at the conference kept laughing when I told them when my presentation was scheduled.) Briefly, my area of interest is in neuromorphic engineering (artificial brains), memristors and other devices which can mimic synaptic plasticity, pop culture (zombies), and something I’ve termed ‘cognitive entanglement’. My basic question is: what does it mean to be human at a time when notions about what constitutes life and nonlife are being obliterated? In addition, although I didn’t do this deliberately, this passage from my Oct. 31, 2012 posting (Part 1 of this series) touches on a related issue,
His [Chris Groves' plenary] quote from Hannah Arendt, “What we make remakes us” brought home the notion that there is a feedback loop and that science and invention are not unidirectional pursuits, i.e., we do not create the world and stand apart from it; the world we create, in turn, recreates us.
I have more about this ‘conversation’ regarding artificial brains taking place in business, pop culture, philosophy, advertising, science, engineering, and elsewhere, but I think I need to write up a paper. Once I do that, I’ll post it. As for the response from the conference goers, there were no questions, but there were a few comments (I’m not the only one interested in zombies and the living dead) and a suggestion for further reading (Andrew Pickering, The Cybernetic Brain: Sketches of Another Future).
You don’t have to be a Jedi to make things move with your mind.
Granted, we may not be able to lift a spaceship out of a swamp like Yoda does in The Empire Strikes Back, but it is possible to steer a model car, drive a wheelchair and control a robotic exoskeleton with just your thoughts.
We are standing in a testing room at IBM’s Emerging Technologies lab in Winchester, England.
On my head is a strange headset that looks like a black plastic squid. Its 14 tendrils, each capped with a moistened electrode, are supposed to detect specific brain signals.
In front of us is a computer screen, displaying an image of a floating cube.
As I think about pushing it, the cube responds by drifting into the distance.
Moskvitch goes on to discuss a number of projects that translate thought into movement via various pieces of equipment before she mentions a project at Brown University (US) where researchers are implanting computer chips into brains,
Headsets and helmets offer cheap, easy-to-use ways of tapping into the mind. But there are other, …
Imagine some kind of a wireless computer device in your head that you’ll use for mind control – what if people hacked into that?”
At Brown Institute for Brain Science in the US, scientists are busy inserting chips right into the human brain.
The technology, dubbed BrainGate, sends mental commands directly to a PC.
Subjects still have to be physically “plugged” into a computer via cables coming out of their heads, in a setup reminiscent of the film The Matrix. However, the team is now working on miniaturising the chips and making them wireless.
The purpose of the first phase of the pilot clinical study of the BrainGate2 Neural Interface System is to obtain preliminary device safety information and to demonstrate the feasibility of people with tetraplegia using the System to control a computer cursor and other assistive devices with their thoughts. Another goal of the study is to determine the participants’ ability to operate communication software, such as e-mail, simply by imagining the movement of their own hand. The study is invasive and requires surgery.
Individuals with limited or no ability to use both hands due to cervical spinal cord injury, brainstem stroke, muscular dystrophy, or amyotrophic lateral sclerosis (ALS) or other motor neuron diseases are being recruited into a clinical study at Massachusetts General Hospital (MGH) and Stanford University Medical Center. Clinical trial participants must live within a three-hour drive of Boston, MA or Palo Alto, CA. Clinical trial sites at other locations may be opened in the future. The study requires a commitment of 13 months.
They have been recruiting since at least November 2011, from the Nov. 14, 2011 news item by Tanya Lewis on MedicalXpress,
Stanford University researchers are enrolling participants in a pioneering study investigating the feasibility of people with paralysis using a technology that interfaces directly with the brain to control computer cursors, robotic arms and other assistive devices.
The pilot clinical trial, known as BrainGate2, is based on technology developed at Brown University and is led by researchers at Massachusetts General Hospital, Brown and the Providence Veterans Affairs Medical Center. The researchers have now invited the Stanford team to establish the only trial site outside of New England.
Under development since 2002, BrainGate is a combination of hardware and software that directly senses electrical signals in the brain that control movement. The device — a baby-aspirin-sized array of electrodes — is implanted in the cerebral cortex (the outer layer of the brain) and records its signals; computer algorithms then translate the signals into digital instructions that may allow people with paralysis to control external devices.
Confusingly, there seem to be two BrainGate organizations. One appears to be a research entity where a number of institutions collaborate, and the other is some sort of jointly held company. From the About Us webpage of the BrainGate research entity,
In the late 1990s, the initial translation of fundamental neuroengineering research from “bench to bedside” – that is, to pilot clinical testing – would require a level of financial commitment ($10s of millions) available only from private sources. In 2002, a Brown University spin-off/startup medical device company, Cyberkinetics, Inc. (later, Cyberkinetics Neurotechnology Systems, Inc.) was formed to collect the regulatory permissions and financial resources required to launch pilot clinical trials of a first-generation neural interface system. The company’s efforts and substantial initial capital investment led to the translation of the preclinical research at Brown University to an initial human device, the BrainGate Neural Interface System [Caution: Investigational Device. Limited by Federal Law to Investigational Use]. The BrainGate system uses a brain-implantable sensor to detect neural signals that are then decoded to provide control signals for assistive technologies. In 2004, Cyberkinetics received from the U.S. Food and Drug Administration (FDA) the first of two Investigational Device Exemptions (IDEs) to perform this research. Hospitals in Rhode Island, Massachusetts, and Illinois were established as clinical sites for the pilot clinical trial run by Cyberkinetics. Four trial participants with tetraplegia (decreased ability to use the arms and legs) were enrolled in the study and further helped to develop the BrainGate device. Initial results from these trials have been published or presented, with additional publications in preparation.
While scientific progress towards the creation of this promising technology has been steady and encouraging, Cyberkinetics’ financial sponsorship of the BrainGate research – without which the research could not have been started – began to wane. In 2007, in response to business pressures and changes in the capital markets, Cyberkinetics turned its focus to other medical devices. Although Cyberkinetics’ own funds became unavailable for BrainGate research, the research continued through grants and subcontracts from federal sources. By early 2008 it became clear that Cyberkinetics would eventually need to withdraw completely from directing the pilot clinical trials of the BrainGate device. Also in 2008, Cyberkinetics spun off its device manufacturing to new ownership, BlackRock Microsystems, Inc., which now produces and is further developing research products as well as clinically-validated (510(k)-cleared) implantable neural recording devices.
Beginning in mid 2008, with the agreement of Cyberkinetics, a new, fully academically-based IDE application (for the “BrainGate2 Neural Interface System”) was developed to continue this important research. In May 2009, the FDA provided a new IDE for the BrainGate2 pilot clinical trial. [Caution: Investigational Device. Limited by Federal Law to Investigational Use.] The BrainGate2 pilot clinical trial is directed by faculty in the Department of Neurology at Massachusetts General Hospital, a teaching affiliate of Harvard Medical School; the research is performed in close scientific collaboration with Brown University’s Department of Neuroscience, School of Engineering, and Brown Institute for Brain Sciences, and the Rehabilitation Research and Development Service of the U.S. Department of Veteran’s Affairs at the Providence VA Medical Center. Additionally, in late 2011, Stanford University joined the BrainGate Research Team as a clinical site and is currently enrolling participants in the clinical trial. This interdisciplinary research team includes scientific partners from the Functional Electrical Stimulation Center at Case Western Reserve University and the Cleveland VA Medical Center. As was true of the decades of fundamental, preclinical research that provided the basis for the recent clinical studies, funding for BrainGate research is now entirely from federal and philanthropic sources.
The BrainGate Research Team at Brown University, Massachusetts General Hospital, Stanford University, and Providence VA Medical Center comprises physicians, scientists, and engineers working together to advance understanding of human brain function and to develop neurotechnologies for people with neurologic disease, injury, or limb loss.
The BrainGate™ Co. is a privately-held firm focused on the advancement of the BrainGate™ Neural Interface System. The Company owns the intellectual property of the BrainGate™ system as well as new technology being developed by the BrainGate company. The Company also owns the intellectual property of Cyberkinetics, which it purchased in April 2009.
Meanwhile, in Europe there are two projects: BrainAble and the Human Brain Project. The BrainAble project is similar to BrainGate in that it is intended for people with injuries, but the researchers seem to be concentrating on a helmet or cap for thought transmission (as per Moskovitch’s experience at the beginning of this posting). From the Feb. 28, 2012 news item on Science Daily,
In the 2009 film Surrogates, humans live vicariously through robots while safely remaining in their own homes. That sci-fi future is still a long way off, but recent advances in technology, supported by EU funding, are bringing this technology a step closer to reality in order to give disabled people more autonomy and independence than ever before.
“Our aim is to give people with motor disabilities as much autonomy as technology currently allows and in turn greatly improve their quality of life,” says Felip Miralles at Barcelona Digital Technology Centre, a Spanish ICT research centre.
Mr. Miralles is coordinating the BrainAble* project (http://www.brainable.org/), a three-year initiative supported by EUR 2.3 million in funding from the European Commission to develop and integrate a range of different technologies, services and applications into a commercial system for people with motor disabilities.
In terms of HCI [human-computer interface], BrainAble improves both direct and indirect interaction between the user and his smart home. Direct control is upgraded by creating tools that allow controlling inner and outer environments using a “hybrid” Brain Computer Interface (BNCI) system able to take into account other sources of information, such as measures of boredom, confusion, and frustration, by means of the so-called physiological and affective sensors.
Furthermore, interaction is enhanced by means of Ambient Intelligence (AmI), which focuses on creating proactive and context-aware environments by adding intelligence to the user’s surroundings. AmI’s main purpose is to aid and improve the user’s living conditions by creating proactive environments that provide assistance.
Human-Computer Interfaces are complemented by an intelligent Virtual Reality-based user interface with avatars and scenarios that will help the disabled move around freely and interact with all sorts of devices. Moreover, the VR will provide self-expression assets using music, pictures and text, let users communicate online and offline with other people, play games to counteract cognitive decline, and train them in new functionalities and tasks.
Perhaps this video helps,
Another European project, NeuroCare, which I discussed in my March 5, 2012 posting, is focused on creating neural implants to replace damaged and/or destroyed sensory cells in the eye or the ear.
The Human Brain Project is, despite its title, a neuromorphic engineering project (although the researchers do mention some medical applications on the project’s home page) in common with the work being done at the University of Michigan/HRL Labs mentioned in my April 19, 2012 posting (A step closer to artificial synapses courtesy of memristors) about that project. From the April 11, 2012 news item about the Human Brain Project on Science Daily,
Researchers at the EPFL [Ecole Polytechnique Fédérale de Lausanne] have discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties and its location in the brain.
The discovery, using state-of-the-art informatics tools, increases the likelihood that it will be possible to predict much of the fundamental structure and function of the brain without having to measure every aspect of it. That in turn makes the Holy Grail of modelling the brain in silico — the goal of the proposed Human Brain Project — a more realistic, less Herculean, prospect. “It is the door that opens to a world of predictive biology,” says Henry Markram, the senior author on the study, which is published this week in PLoS ONE.
Here’s a bit more about the Human Brain Project (from the home page),
Today, simulating a single neuron requires the full power of a laptop computer. But the brain has billions of neurons and simulating all of them simultaneously is a huge challenge. To get round this problem, the project will develop novel techniques of multi-level simulation in which only groups of neurons that are highly active are simulated in detail. But even in this way, simulating the complete human brain will require a computer a thousand times more powerful than the most powerful machine available today. This means that some of the key players in the Human Brain Project will be specialists in supercomputing. Their task: to work with industry to provide the project with the computing power it will need at each stage of its work.
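For the programmers in the audience, here’s a toy sketch (entirely my own invention, not the Human Brain Project’s actual code; all names and dynamics are made up) of the multi-level idea described above: neuron groups whose recent activity crosses a threshold get the expensive, detailed update, while quiet groups get a cheap, coarse one,

```python
import numpy as np

# Toy illustration of multi-level simulation: spend detailed computation
# only on highly active neuron groups. Everything here is invented for
# illustration; real brain simulators use biophysical models.

rng = np.random.default_rng(1)
n_groups = 8
activity = rng.uniform(0.0, 1.0, size=n_groups)  # mean activity per group
THRESHOLD = 0.5

def detailed_update(a):
    # stand-in for an expensive, conductance-based simulation step
    return a + 0.01 * (1.0 - a)

def coarse_update(a):
    # stand-in for a cheap, mean-field simulation step
    return a * 0.99

detailed = activity > THRESHOLD          # which groups deserve detail
activity[detailed] = detailed_update(activity[detailed])
activity[~detailed] = coarse_update(activity[~detailed])

print(f"{int(detailed.sum())} of {n_groups} groups simulated in detail")
```

The point is only the selective allocation of computing effort; the real project would apply this at vastly larger scales.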
The Human Brain Project will impact many different areas of society. Brain simulation will provide new insights into the basic causes of neurological diseases such as autism, depression, Parkinson’s, and Alzheimer’s. It will give us new ways of testing drugs and understanding the way they work. It will provide a test platform for new drugs that directly target the causes of disease and that have fewer side effects than current treatments. It will allow us to design prosthetic devices to help people with disabilities. The benefits are potentially huge. As world populations grow older, more than a third will be affected by some kind of brain disease. Brain simulation provides us with a powerful new strategy to tackle the problem.
The project also promises to become a source of new Information Technologies. Unlike the computers of today, the brain has the ability to repair itself, to take decisions, to learn, and to think creatively – all while consuming no more energy than an electric light bulb. The Human Brain Project will bring these capabilities to a new generation of neuromorphic computing devices, with circuitry directly derived from the circuitry of the brain. The new devices will help us to build a new generation of genuinely intelligent robots to help us at work and in our daily lives.
The Human Brain Project builds on the work of the Blue Brain Project. Led by Henry Markram of the Ecole Polytechnique Fédérale de Lausanne (EPFL), the Blue Brain Project has already taken an essential first step towards simulation of the complete brain. Over the last six years, the project has developed a prototype facility with the tools, know-how and supercomputing technology necessary to build brain models, potentially of any species at any stage in its development. As a proof of concept, the project has successfully built the first ever detailed model of the neocortical column, one of the brain’s basic building blocks.
The Human Brain Project is a flagship project in contention for the 1B Euro research prize that I’ve mentioned in the context of the GRAPHENE-CA flagship project (my Feb. 13, 2012 posting gives a better description of these flagship projects while mentioning both GRAPHENE-CA and another brain-computer interface project, PRESENCCIA).
Part of the reason for doing this roundup is the opportunity to look at a number of these projects in one posting; the effect is more overwhelming than I expected.
For anyone who’s interested in Markram’s paper (open access),
Georges Khazen, Sean L. Hill, Felix Schürmann, Henry Markram. Combinatorial Expression Rules of Ion Channel Genes in Juvenile Rat (Rattus norvegicus) Neocortical Neurons. PLoS ONE, 2012; 7 (4): e34786 DOI: 10.1371/journal.pone.0034786
I do have earlier postings on brains and neuroprostheses, one of the more recent ones is this March 16, 2012 posting. Meanwhile, there are new announcements from Northwestern University (US) and the US National Institutes of Health (National Institute of Neurological Disorders and Stroke). From the April 18, 2012 news item (originating from the National Institutes of Health) on Science Daily,
An artificial connection between the brain and muscles can restore complex hand movements in monkeys following paralysis, according to a study funded by the National Institutes of Health.
In a report in the journal Nature, researchers describe how they combined two pieces of technology to create a neuroprosthesis — a device that replaces lost or impaired nervous system function. One piece is a multi-electrode array implanted directly into the brain which serves as a brain-computer interface (BCI). The array allows researchers to detect the activity of about 100 brain cells and decipher the signals that generate arm and hand movements. The second piece is a functional electrical stimulation (FES) device that delivers electrical current to the paralyzed muscles, causing them to contract. The brain array activates the FES device directly, bypassing the spinal cord to allow intentional, brain-controlled muscle contractions and restore movement.
A new Northwestern Medicine brain-machine technology delivers messages from the brain directly to the muscles — bypassing the spinal cord — to enable voluntary and complex movement of a paralyzed hand. The device could eventually be tested on, and perhaps aid, paralyzed patients.
The research was done in monkeys, whose electrical brain and muscle signals were recorded by implanted electrodes when they grasped a ball, lifted it and released it into a small tube. Those recordings allowed the researchers to develop an algorithm or “decoder” that enabled them to process the brain signals and predict the patterns of muscle activity when the monkeys wanted to move the ball.
These experiments were performed by Christian Ethier, a post-doctoral fellow, and Emily Oby, a graduate student in neuroscience, both at the Feinberg School of Medicine. The researchers gave the monkeys a local anesthetic to block nerve activity at the elbow, causing temporary, painless paralysis of the hand. With the help of the special devices in the brain and the arm — together called a neuroprosthesis — the monkeys’ brain signals were used to control tiny electric currents delivered in less than 40 milliseconds to their muscles, causing them to contract, and allowing the monkeys to pick up the ball and complete the task nearly as well as they did before.
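For readers curious about what a “decoder” might look like in code, here is a minimal sketch (my own invention; the study’s actual algorithm was more sophisticated, and all names and numbers here are made up) that fits a linear map from neural firing rates to muscle activity, then predicts muscle activations from new recordings, the kind of prediction that would drive the FES device,

```python
import numpy as np

# Hypothetical decoder sketch: learn a linear mapping from recorded
# firing rates (about 100 brain cells in the study) to muscle (EMG)
# activity, using simulated stand-in data.

rng = np.random.default_rng(0)
n_samples, n_neurons, n_muscles = 500, 100, 5

# Simulated training data: firing rates and the muscle activity they
# drive, recorded while the (simulated) hand performs the grasping task.
firing_rates = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)
true_weights = rng.normal(size=(n_neurons, n_muscles))
emg = firing_rates @ true_weights + rng.normal(scale=0.1, size=(n_samples, n_muscles))

# Fit the decoder: least-squares weights mapping firing rates -> EMG.
weights, *_ = np.linalg.lstsq(firing_rates, emg, rcond=None)

# At run time, each new window of firing rates yields predicted muscle
# activations, which would be translated into stimulation currents.
new_rates = rng.poisson(5, size=(1, n_neurons)).astype(float)
predicted_emg = new_rates @ weights
print(predicted_emg.shape)  # one predicted activation level per muscle
```

In the actual experiments the whole loop, from recorded spikes to delivered muscle stimulation, ran in under 40 milliseconds.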
“The monkey won’t use his hand perfectly, but there is a process of motor learning that we think is very similar to the process you go through when you learn to use a new computer mouse or a different tennis racquet. Things are different and you learn to adjust to them,” said Miller [Lee E. Miller], also a professor of physiology and of physical medicine and rehabilitation at Feinberg and a Sensory Motor Performance Program lab chief at the Rehabilitation Institute of Chicago.
The National Institutes of Health news item supplies a little history and background for this latest breakthrough, while the Northwestern University news item offers more technical details.
You can find the researchers’ paper with this citation (assuming you can get past the paywall),
C. Ethier, E. R. Oby, M. J. Bauman, L. E. Miller. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 2012; DOI: 10.1038/nature10987
I was surprised to find the Health Research Fund of Québec listed as one of the funders but perhaps Christian Ethier has some connection with the province.