Tag Archives: brains

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focussing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing it up to date, April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].
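For readers who have never seen PyNN (mentioned in the excerpt above as one of the FACETS project's outcomes), it is a Python package for describing spiking networks independently of any one simulator. Here is a minimal, illustrative sketch of the kind of model description it allows, assuming the NEST backend is installed; the populations and parameter values are invented for illustration and are not taken from FACETS.

```python
# Minimal PyNN sketch (assumes the NEST backend is installed).
import pyNN.nest as sim

sim.setup(timestep=0.1)  # ms

# Two populations of conductance-based integrate-and-fire neurons
excitatory = sim.Population(80, sim.IF_cond_exp(), label="exc")
inhibitory = sim.Population(20, sim.IF_cond_exp(), label="inh")

# Background Poisson input driving the excitatory cells
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=20.0))
sim.Projection(noise, excitatory, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.01))

# Sparse random connectivity between the populations
sim.Projection(excitatory, inhibitory, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.005, delay=1.0))
sim.Projection(inhibitory, excitatory, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.005, delay=1.0),
               receptor_type="inhibitory")

excitatory.record("spikes")
sim.run(1000.0)  # ms of simulated time
data = excitatory.get_data()
sim.end()
```

The point of PyNN is that swapping `pyNN.nest` for another backend module leaves the model description unchanged.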

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system  (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrated a new single element nanoscale device, based on the successfully commercialized phase change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single element electronic synapse with the capability of both the modulation of the time constant and the realization of the different synaptic plasticity forms while consuming picojoule level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).
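One of the “synaptic plasticity forms” such devices aim to reproduce is spike-timing-dependent plasticity (STDP), where the sign and size of the weight change depend on whether the pre-synaptic spike arrives before or after the post-synaptic one. A minimal, pair-based sketch follows; all constants are invented for illustration and are not taken from the Stanford paper.

```python
# Illustrative pair-based STDP rule; constants are made up for demonstration.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_weight_change(delta_t):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre (ms). Pre-before-post (delta_t > 0)
    potentiates the synapse; post-before-pre depresses it.
    """
    if delta_t > 0:
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * np.exp(delta_t / TAU_MINUS)

# Sweep the pairing interval to trace out the classic STDP curve
for dt in (-40, -20, -5, 5, 20, 40):
    print(f"delta_t = {dt:+} ms -> dw = {stdp_weight_change(dt):+.4f}")
```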

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).
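The “conductance modulated by charge” behaviour described in the excerpt can be illustrated with the linear-drift toy model that often appears in introductions to the HP memristor (after Strukov et al., 2008). The following is a sketch with textbook-style parameter values, not a model of any specific device.

```python
# Toy linear-drift memristor model: resistance depends on how much charge
# has flowed through the device. Parameter values are illustrative only.
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # ohms: resistance when fully doped / undoped
D = 10e-9                   # m: device thickness
MU_V = 1e-14                # m^2 s^-1 V^-1: dopant mobility

def simulate(v_of_t, n_steps=20000, dt=1e-4, w0=0.5 * D):
    """Drive the device with voltage v_of_t(t); return voltage and current traces."""
    w, t = w0, 0.0
    vs, currents = [], []
    for _ in range(n_steps):
        v = v_of_t(t)
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)  # state-dependent memristance
        i = v / m
        w += MU_V * (R_ON / D) * i * dt             # state drifts with the charge passed
        w = min(max(w, 0.0), D)                     # keep the state variable in bounds
        vs.append(v)
        currents.append(i)
        t += dt
    return np.array(vs), np.array(currents)

# A slow 1 Hz sine drive traces out the pinched hysteresis loop that is the
# memristor's signature (plot current against voltage to see it).
v, i = simulate(lambda t: np.sin(2 * np.pi * 1.0 * t))
print("peak current, first half-cycle:  %.3g A" % i[:10000].max())
print("peak current, second half-cycle: %.3g A" % abs(i[10000:]).max())
```

The asymmetry between the two half-cycles is the “memory”: the device's resistance at any moment reflects the current that flowed earlier.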

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings or alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer, (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic gate based computing within the framework of Turing, our decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.
2) We do not need to write any software, the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously, the system holds an astronomically large number of ‘if’ arguments and its associative ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode, not conventional antenna-receiver model, since fractal based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological study to develop our own Human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in  the context of a presentation in Amsterdam, Netherlands.

Chaos, brains, and ferroelectrics: “We started to see things that should have been completely impossible …”

Given my interest in neuromorphic (mimicking the human brain) engineering, this work at the US Oak Ridge National Laboratories was guaranteed to catch my attention. From the Nov. 18, 2013 news item on Nanowerk,

Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.

Ferroelectric materials are known for their ability to spontaneously switch polarization when an electric field is applied. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.

“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, …

The Nov. 18, 2013 Oak Ridge National Laboratory news release, which originated the news item, provides more details,

“All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard.  At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.”  [said Ievlev]

After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away — a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.

“Chaotic behavior is generally realized in time, not in space,” he said. “An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”
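A cartoon of the spatial interaction Kalinin describes (a nearby domain suppresses formation, a distant one does not) is easy to play with in a few lines. The following toy model is my own illustration of the idea, not the analysis in the Nature Physics paper.

```python
# Toy 1-D illustration: an attempted domain fails to form if an existing
# domain sits closer than a "suppression" distance, which naturally yields
# alternating, checkerboard-like patterns when write points are packed densely.
def write_domains(write_positions, suppression_radius):
    formed = []
    for x in write_positions:
        # the nearest already-written domain suppresses formation if too close
        if all(abs(x - d) > suppression_radius for d in formed):
            formed.append(x)
    return formed

# Dense writing: every second attempt is suppressed, giving an alternating pattern
attempts = [i * 1.0 for i in range(12)]          # write sites 1 unit apart
print(write_domains(attempts, suppression_radius=1.5))
# -> [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]

# Sparse writing: every attempt succeeds
attempts = [i * 3.0 for i in range(6)]
print(write_domains(attempts, suppression_radius=1.5))
# -> [0.0, 3.0, 6.0, 9.0, 12.0, 15.0]
```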

Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.

“Memcomputing is basically how the human brain operates: [emphasis mine] Neurons and their connections–synapses–can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”

Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.

The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.

“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.

For anyone who’s interested in exploring this particular approach to mimicking the human brain, here’s a citation for and a link to the researchers’ paper,

Intermittency, quasiperiodicity and chaos in probe-induced ferroelectric domain switching by A. V. Ievlev, S. Jesse, A. N. Morozovska, E. Strelcov, E. A. Eliseev, Y. V. Pershin, A. Kumar, V. Ya. Shur, & S. V. Kalinin. Nature Physics (2013) doi:10.1038/nphys2796 Published online 17 November 2013

This paper is behind a paywall although it is possible to preview it for free via ReadCube Access.

The brain and poetry; congratulations to Alice Munro on her 2013 Nobel prize

There’s an intriguing piece of research from the University of Exeter (UK) about poetry and the brain. From an Oct. 9, 2013 University of Exeter news release (also on EurekAlert),

New brain imaging technology is helping researchers to bridge the gap between art and science by mapping the different ways in which the brain responds to poetry and prose.

Scientists at the University of Exeter used state-of-the-art functional magnetic resonance imaging (fMRI) technology, which allows them to visualise which parts of the brain are activated to process various activities. No one had previously looked specifically at the differing responses in the brain to poetry and prose.

In research published in the Journal of Consciousness Studies, the team found activity in a “reading network” of brain areas which was activated in response to any written material. But they also found that more emotionally charged writing aroused several of the regions in the brain which respond to music. These areas, predominantly on the right side of the brain, had previously been shown to give rise to the “shivers down the spine” caused by an emotional reaction to music.

When volunteers read one of their favourite passages of poetry, the team found that areas of the brain associated with memory were stimulated more strongly than ‘reading areas’, indicating that reading a favourite passage is a kind of recollection.

In a specific comparison between poetry and prose, the team found evidence that poetry activates brain areas, such as the posterior cingulate cortex and medial temporal lobes, which have been linked to introspection.

I did find the Journal of Consciousness Studies in two places (here [current issues] and here [archived issues]) but can’t find the article in my admittedly speedy searches on the website and via Google. Unfortunately the university news release did not include a citation (as so many of them now do); presumably the research will be published soon.

I’d like to point out a couple of things about the research, the sample was small (13) and not randomized (faculty and students from the English department). From the news release,

Professor Adam Zeman, a cognitive neurologist from the University of Exeter Medical School, worked with colleagues across Psychology and English to carry out the study on 13 volunteers, all faculty members and senior graduate students in English. Their brain activity was scanned and compared when reading literal prose such as an extract from a heating installation manual, evocative passages from novels, easy and difficult sonnets, as well as their favourite poetry.

Professor Zeman said: “Some people say it is impossible to reconcile science and art, but new brain imaging technology means we are now seeing a growing body of evidence about how the brain responds to the experience of art. This was a preliminary study, but it is all part of work that is helping us to make psychological, biological, anatomical sense of art.”

Arguably, people who’ve spent significant chunks of their lives studying and reading poetry and prose might have developed capacities the rest of us have not. For a case in point, there’s a Sept. 26, 2013 news item on ScienceDaily about research on ballet dancers’ brains and their learned ability to suppress dizziness,

The research suggests that years of training can enable dancers to suppress signals from the balance organs in the inner ear.

Normally, the feeling of dizziness stems from the vestibular organs in the inner ear. These fluid-filled chambers sense rotation of the head through tiny hairs that sense the fluid moving. After turning around rapidly, the fluid continues to move, which can make you feel like you’re still spinning.

Ballet dancers can perform multiple pirouettes with little or no feeling of dizziness. The findings show that this feat isn’t just down to spotting, a technique dancers use that involves rapidly moving the head to fix their gaze on the same spot as much as possible.

Researchers at Imperial College London recruited 29 female ballet dancers and, as a comparison group, 20 female rowers whose age and fitness levels matched the dancers’.

The volunteers were spun around in a chair in a dark room. They were asked to turn a handle in time with how quickly they felt like they were still spinning after they had stopped. The researchers also measured eye reflexes triggered by input from the vestibular organs. Later, they examined the participants’ brain structure with MRI scans.

In dancers, both the eye reflexes and their perception of spinning lasted a shorter time than in the rowers.

Yes, they too have a small sample. Happily, you can find a citation and a link to the research at the end of the ScienceDaily news item.

ETA Oct. 10, 2013 at 1:10 pm PDT: The ballet dancer research was not randomized but  that’s understandable as researchers were trying to discover why these dancers don’t experience dizziness. It should be noted the researchers did test the ballet dancers against a control group. By contrast, the researchers at the University of Exeter seemed to be generalizing results from a specialized sample to a larger population.

Alice Munro news

It was announced today (Thursday, Oct. 10, 2013) that Canada’s Alice Munro has been awarded the 2013 Nobel Prize for Literature. Here’s more from an Oct. 10, 2013 news item on the Canadian Broadcasting Corporation (CBC) news website,

Alice Munro wins the 2013 Nobel Prize in Literature, becoming the first Canadian woman to take the award since its launch in 1901.

Munro, 82, only the 13th woman given the award, was lauded by the Swedish Academy during the Nobel announcement in Stockholm as the “master of the contemporary short story.”

“We’re not saying just that she can say a lot in just 20 pages — more than an average novel writer can — but also that she can cover ground. She can have a single short story that covers decades, and it works,” said Peter Englund, permanent secretary of the Swedish Academy.

Reached in British Columbia by CBC News on Thursday morning, Munro said she always viewed her chances of winning the Nobel as “one of those pipe dreams” that “might happen, but it probably wouldn’t.”

Congratulations Ms. Munro! For the curious, there’s a lot more about Alice Munro and about her work in the CBC news item.

University of Waterloo researchers use 2.5M (virtual) neurons to simulate a brain

I hinted about some related work at the University of Waterloo earlier this week in my Nov. 26, 2012 posting (Existential risk) about a proposed centre at the University of Cambridge which would be tasked with examining possible risks associated with ‘ultra intelligent machines’. Today, Science (magazine) published an article about SPAUN (Semantic Pointer Architecture Unified Network) [behind a paywall] and its ability to solve simple arithmetic and perform other tasks as well.

Ed Yong writing for Nature magazine (Simulated brain scores top test marks, Nov. 29, 2012) offers this description,

Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5, in legible if messy writing.

This is an unremarkable feat for a human, but Spaun is actually a simulated brain. It contains 2.5 million virtual neurons — many fewer than the 86 billion in the average human head, but enough to recognize lists of numbers, do simple arithmetic and solve reasoning problems.

Here’s a video demonstration, from the University of Waterloo’s Nengo Neural Simulator home page,

The University of Waterloo’s Nov. 29, 2012 news release offers more technical detail,

… The model captures biological details of each neuron, including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate. Spaun uses this network of neurons to process visual images in order to control an arm that draws Spaun’s answers to perceptual, cognitive and motor tasks. …

“This is the first model that begins to get at how our brains can perform a wide variety of tasks in a flexible manner—how the brain coordinates the flow of information between different areas to exhibit complex behaviour,” said Professor Chris Eliasmith, Director of the Centre for Theoretical Neuroscience at Waterloo. He is Canada Research Chair in Theoretical Neuroscience, and professor in Waterloo’s Department of Philosophy and Department of Systems Design Engineering.

Unlike other large brain models, Spaun can perform several tasks. Researchers can show patterns of digits and letters to the model’s eye, which it then processes, causing it to write its responses to any of eight tasks. And, just like the human brain, it can shift from task to task, recognizing an object one moment and memorizing a list of numbers the next. [emphasis mine] Because of its biological underpinnings, Spaun can also be used to understand how changes to the brain affect changes to behaviour.

“In related work, we have shown how the loss of neurons with aging leads to decreased performance on cognitive tests,” said Eliasmith. “More generally, we can test our hypotheses about how the brain works, resulting in a better understanding of the effects of drugs or damage to the brain.”

In addition, the model provides new insights into the sorts of algorithms that might be useful for improving machine intelligence. [emphasis mine] For instance, it suggests new methods for controlling the flow of information through a large system attempting to solve challenging cognitive tasks.
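Spaun was built with the Nengo neural simulator mentioned above, which is an openly available Python package. A minimal Nengo model, nowhere near Spaun's scale and with illustrative parameters, looks roughly like this.

```python
# Toy Nengo model: a spiking ensemble represents a signal and a connection
# computes a function of it. This is a sketch, not anything like Spaun.
import nengo
import numpy as np

model = nengo.Network(label="toy network")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking LIF neurons
    b = nengo.Ensemble(n_neurons=100, dimensions=1)

    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)   # decode x^2 from spikes

    probe = nengo.Probe(b, synapse=0.01)                # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)                                        # seconds
print(sim.data[probe][-5:])
```

Everything in Spaun, from vision to the arm controller, is built out of ensembles and connections like these, just at vastly larger scale.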

Laura Sanders’ Nov. 29, 2012 article for ScienceNews suggests that there is some controversy as to whether or not SPAUN does resemble a human brain,

… Henry Markram, who leads a different project to reconstruct the human brain called the Blue Brain, questions whether Spaun really captures human brain behavior. Because Spaun’s design ignores some important neural properties, it’s unlikely to reveal anything about the brain’s mechanics, says Markram, of the Swiss Federal Institute of Technology in Lausanne. “It is not a brain model.”

Personally, I have a little difficulty seeing lines of code as ever being able to truly simulate brain activity. I think the notion of moving to something simpler (using fewer neurons as the Eliasmith team does) is a move in the right direction but I’m still more interested in devices such as the memristor and the electrochemical atomic switch and their potential.

Blue Brain Project

Memristor and artificial synapses in my April 19, 2012 posting

Atomic or electrochemical atomic switches and neuromorphic engineering briefly mentioned (scroll 1/2 way down) in my Oct. 17, 2011 posting.

ETA Dec. 19, 2012: There was an AMA (ask me anything) session on Reddit with the SPAUN team in early December, if you’re interested, you can still access the questions and answers,

We are the computational neuroscientists behind the world’s largest functional brain model

Zombies, brains, collapsing boundaries, and entanglements at the 4th annual S.NET conference

My proposal, Zombies, brains, collapsing boundaries, and entanglements, for the 4th annual S.NET (Society for the Study of Nanoscience and Emerging Technologies) conference was accepted. Mentioned in my Feb. 9, 2012 posting, the conference will be held at the University of Twente (Netherlands) from Oct. 22 – 25, 2012.

Here’s the abstract I provided,

The convergence between popular culture’s current fascination with zombies and their appetite for human brains (first established in the 1985 movie, Return of the Living Dead) and an extraordinarily high level of engagement in brain research by various medical and engineering groups around the world is no coincidence.

Amongst other recent discoveries, the memristor (a concept from nanoelectronics) is collapsing the boundaries between humans and machines/robots and ushering in an age where humanistic discourse must grapple with cognitive entanglements.

Perceptible only at the level of molecular electronics (nanoelectronics), the memristor was a theoretical concept until 2008. Traditionally in electrical engineering, there are three circuit elements: resistors, inductors, and capacitors. The new circuit element, the memristor, was postulated in a paper by Dr. Leon Chua in 1971 to account for anomalies that had been experienced and described in the literature since the 1950s.

According to Chua’s theory and confirmed by the research team headed by R. Stanley Williams, the memristor remembers how much and when current has been flowing. The memristor is capable of an in-between state similar to certain brain states and this capacity lends itself to learning. As some have described it, the memristor is a synapse on a chip making neural computing a reality and/or the possibility of repairing brains stricken with neurological conditions. In other words, with post-human engineering exploiting discoveries such as the memristor we will have machines/robots that can learn and think and human brains that could incorporate machines.

As Jacques Derrida used the zombie to describe a state that is neither life nor death, an undecidable, the memristor can be described as an agent of transformation conferring on robots the ability to learn (a human trait) thereby rendering them undecidable, i.e., neither machine nor life. Mirroring its transformative agency in robots, the memristor could also confer the human brain with machine/robot status and undecidability when used for repair or enhancement.

The memristor moves us past Jacques Derrida’s notion of undecidability as largely theoretical to a world where we confront this reality in a type of cognitive entanglement on a daily basis.
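For readers who want the memristor pinned down more formally than the abstract’s prose, the standard charge-controlled statement of Chua’s definition can be written compactly. This is the textbook form, not a quotation from the 1971 paper.

```latex
% Charge-controlled memristor (textbook form of Chua's 1971 definition):
% flux linkage is a function of charge, so voltage and current are related
% through a state-dependent "memristance" M(q) -- the device remembers how
% much charge has flowed through it.
\begin{align}
  \varphi &= \hat{\varphi}(q), &
  M(q) &= \frac{\mathrm{d}\varphi}{\mathrm{d}q}, \\
  v(t) &= M\bigl(q(t)\bigr)\, i(t), &
  \frac{\mathrm{d}q}{\mathrm{d}t} &= i(t).
\end{align}
```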

You can find the preliminary programme here.  My talk is scheduled for Thursday, Oct. 25, 2012 in one of the last sessions for the conference, 11 – 12:30 pm in the Tracing Transhuman Narratives strand.

I do see a few names I recognize: Wickson, Pat (Roy) Mooney and Youtie. I believe Wickson is Fern Wickson from the University of Bergen, last mentioned here in a July 7, 2010 posting about nature, nanotechnology, and metaphors. Pat Roy Mooney is from The ETC Group (an activist or civil society group) and was last mentioned here in my Oct. 7, 2011 posting, and I believe Youtie is Jan Youtie who was mentioned in my March 29, 2012 posting about nanotechnology, economic impacts, and full life cycle assessments.

Brain, brains, brains: a roundup

I’ve decided to do a roundup of the various brain-related projects I’ve been coming across in the last several months. I was inspired by this article (Real-life Jedi: Pushing the limits of mind control) by Katia Moskvitch,

You don’t have to be a Jedi to make things move with your mind.

Granted, we may not be able to lift a spaceship out of a swamp like Yoda does in The Empire Strikes Back, but it is possible to steer a model car, drive a wheelchair and control a robotic exoskeleton with just your thoughts.

We are standing in a testing room at IBM’s Emerging Technologies lab in Winchester, England.

On my head is a strange headset that looks like a black plastic squid. Its 14 tendrils, each capped with a moistened electrode, are supposed to detect specific brain signals.

In front of us is a computer screen, displaying an image of a floating cube.

As I think about pushing it, the cube responds by drifting into the distance.

Moskvitch goes on to discuss a number of projects that translate thought into movement via various pieces of equipment before she mentions a project at Brown University (US) where researchers are implanting computer chips into brains,

Headsets and helmets offer cheap, easy-to-use ways of tapping into the mind. But there are other, …

“Imagine some kind of a wireless computer device in your head that you’ll use for mind control – what if people hacked into that?”

At Brown Institute for Brain Science in the US, scientists are busy inserting chips right into the human brain.

The technology, dubbed BrainGate, sends mental commands directly to a PC.

Subjects still have to be physically “plugged” into a computer via cables coming out of their heads, in a setup reminiscent of the film The Matrix. However, the team is now working on miniaturising the chips and making them wireless.

The researchers are recruiting for human clinical trials, from the BrainGate Clinical Trials webpage,

Clinical Trials – Now Recruiting

The purpose of the first phase of the pilot clinical study of the BrainGate2 Neural Interface System is to obtain preliminary device safety information and to demonstrate the feasibility of people with tetraplegia using the System to control a computer cursor and other assistive devices with their thoughts. Another goal of the study is to determine the participants’ ability to operate communication software, such as e-mail, simply by imagining the movement of their own hand. The study is invasive and requires surgery.

Individuals with limited or no ability to use both hands due to cervical spinal cord injury, brainstem stroke, muscular dystrophy, or amyotrophic lateral sclerosis (ALS) or other motor neuron diseases are being recruited into a clinical study at Massachusetts General Hospital (MGH) and Stanford University Medical Center. Clinical trial participants must live within a three-hour drive of Boston, MA or Palo Alto, CA. Clinical trial sites at other locations may be opened in the future. The study requires a commitment of 13 months.

They have been recruiting since at least November 2011, from the Nov. 14, 2011 news item by Tanya Lewis on MedicalXpress,

Stanford University researchers are enrolling participants in a pioneering study investigating the feasibility of people with paralysis using a technology that interfaces directly with the brain to control computer cursors, robotic arms and other assistive devices.

The pilot clinical trial, known as BrainGate2, is based on technology developed at Brown University and is led by researchers at Massachusetts General Hospital, Brown and the Providence Veterans Affairs Medical Center. The researchers have now invited the Stanford team to establish the only trial site outside of New England.

Under development since 2002, BrainGate is a combination of hardware and software that directly senses electrical signals in the brain that control movement. The device — a baby-aspirin-sized array of electrodes — is implanted in the cerebral cortex (the outer layer of the brain) and records its signals; computer algorithms then translate the signals into digital instructions that may allow people with paralysis to control external devices.
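To make the “computer algorithms then translate the signals into digital instructions” step concrete, here is a deliberately simplified sketch: a linear decoder, fit by least squares, that maps binned firing rates from roughly 100 channels to a 2-D cursor velocity. BrainGate’s actual decoders are more sophisticated (Kalman-filter variants are described in the literature); everything below, including the synthetic data, is invented for illustration.

```python
# Sketch of a linear neural decoder: firing-rate vector -> cursor velocity.
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_samples = 96, 2000
true_weights = rng.normal(size=(n_channels, 2))          # unknown tuning
rates = rng.poisson(5.0, size=(n_samples, n_channels))   # spike counts per bin
velocity = rates @ true_weights + rng.normal(scale=5.0, size=(n_samples, 2))

# Calibration: fit the decoder by least squares on a block of training data
weights, *_ = np.linalg.lstsq(rates[:1500], velocity[:1500], rcond=None)

# Online use: each new bin of spike counts becomes a cursor velocity command
for counts in rates[1500:1505]:
    vx, vy = counts @ weights
    print(f"move cursor by ({vx:+.1f}, {vy:+.1f})")
```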

Confusingly, there seem to be two BrainGate organizations. One appears to be a research entity where a number of institutions collaborate and the other is some sort of jointly held company. From the About Us webpage of the BrainGate research entity,

In the late 1990s, the initial translation of fundamental neuroengineering research from “bench to bedside” – that is, to pilot clinical testing – would require a level of financial commitment ($10s of millions) available only from private sources. In 2002, a Brown University spin-off/startup medical device company, Cyberkinetics, Inc. (later, Cyberkinetics Neurotechnology Systems, Inc.) was formed to collect the regulatory permissions and financial resources required to launch pilot clinical trials of a first-generation neural interface system. The company’s efforts and substantial initial capital investment led to the translation of the preclinical research at Brown University to an initial human device, the BrainGate Neural Interface System [Caution: Investigational Device. Limited by Federal Law to Investigational Use]. The BrainGate system uses a brain-implantable sensor to detect neural signals that are then decoded to provide control signals for assistive technologies. In 2004, Cyberkinetics received from the U.S. Food and Drug Administration (FDA) the first of two Investigational Device Exemptions (IDEs) to perform this research. Hospitals in Rhode Island, Massachusetts, and Illinois were established as clinical sites for the pilot clinical trial run by Cyberkinetics. Four trial participants with tetraplegia (decreased ability to use the arms and legs) were enrolled in the study and further helped to develop the BrainGate device. Initial results from these trials have been published or presented, with additional publications in preparation.

While scientific progress towards the creation of this promising technology has been steady and encouraging, Cyberkinetics’ financial sponsorship of the BrainGate research – without which the research could not have been started – began to wane. In 2007, in response to business pressures and changes in the capital markets, Cyberkinetics turned its focus to other medical devices. Although Cyberkinetics’ own funds became unavailable for BrainGate research, the research continued through grants and subcontracts from federal sources. By early 2008 it became clear that Cyberkinetics would eventually need to withdraw completely from directing the pilot clinical trials of the BrainGate device. Also in 2008, Cyberkinetics spun off its device manufacturing to new ownership, BlackRock Microsystems, Inc., which now produces and is further developing research products as well as clinically-validated (510(k)-cleared) implantable neural recording devices.

Beginning in mid 2008, with the agreement of Cyberkinetics, a new, fully academically-based IDE application (for the “BrainGate2 Neural Interface System”) was developed to continue this important research. In May 2009, the FDA provided a new IDE for the BrainGate2 pilot clinical trial. [Caution: Investigational Device. Limited by Federal Law to Investigational Use.] The BrainGate2 pilot clinical trial is directed by faculty in the Department of Neurology at Massachusetts General Hospital, a teaching affiliate of Harvard Medical School; the research is performed in close scientific collaboration with Brown University’s Department of Neuroscience, School of Engineering, and Brown Institute for Brain Sciences, and the Rehabilitation Research and Development Service of the U.S. Department of Veteran’s Affairs at the Providence VA Medical Center. Additionally, in late 2011, Stanford University joined the BrainGate Research Team as a clinical site and is currently enrolling participants in the clinical trial. This interdisciplinary research team includes scientific partners from the Functional Electrical Stimulation Center at Case Western Reserve University and the Cleveland VA Medical Center. As was true of the decades of fundamental, preclinical research that provided the basis for the recent clinical studies, funding for BrainGate research is now entirely from federal and philanthropic sources.

The BrainGate Research Team at Brown University, Massachusetts General Hospital, Stanford University, and Providence VA Medical Center comprises physicians, scientists, and engineers working together to advance understanding of human brain function and to develop neurotechnologies for people with neurologic disease, injury, or limb loss.

I think they’re saying there was a reverse takeover of Cyberkinetics, from the BrainGate company About webpage,

The BrainGate™ Co. is a privately-held firm focused on the advancement of the BrainGate™ Neural Interface System.  The Company owns the Intellectual property of the BrainGate™ system as well as new technology being developed by the BrainGate company.  In addition, the Company also owns  the intellectual property of Cyberkinetics which it purchased in April 2009.

Meanwhile, in Europe there are two projects BrainAble and the Human Brain Project. The BrainAble project is similar to BrainGate in that it is intended for people with injuries but they seem to be concentrating on a helmet or cap for thought transmission (as per Moskovitch’s experience at the beginning of this posting). From the Feb. 28, 2012 news item on Science Daily,

In the 2009 film Surrogates, humans live vicariously through robots while safely remaining in their own homes. That sci-fi future is still a long way off, but recent advances in technology, supported by EU funding, are bringing this technology a step closer to reality in order to give disabled people more autonomy and independence than ever before.

“Our aim is to give people with motor disabilities as much autonomy as technology currently allows and in turn greatly improve their quality of life,” says Felip Miralles at Barcelona Digital Technology Centre, a Spanish ICT research centre.

Mr. Miralles is coordinating the BrainAble* project (http://www.brainable.org/), a three-year initiative supported by EUR 2.3 million in funding from the European Commission to develop and integrate a range of different technologies, services and applications into a commercial system for people with motor disabilities.

Here’s more from the BrainAble home page,

In terms of HCI [human-computer interface], BrainAble improves both direct and indirect interaction between the user and his smart home. Direct control is upgraded by creating tools that allow controlling inner and outer environments using a “hybrid” Brain Computer Interface (BNCI) system able to take into account other sources of information such as measures of boredom, confusion, frustration by means of the so-called physiological and affective sensors.

Furthermore, interaction is enhanced by means of Ambient Intelligence (AmI) focused on creating proactive and context-aware environments by adding intelligence to the user’s surroundings. AmI’s main purpose is to aid and facilitate the user’s living conditions by creating proactive environments to provide assistance.

Human-Computer Interfaces are complemented by an intelligent Virtual Reality-based user interface with avatars and scenarios that will help the disabled move around freely, and interact with any sort of devices. Even more the VR will provide self-expression assets using music, pictures and text, communicate online and offline with other people, play games to counteract cognitive decline, and get trained in new functionalities and tasks.

Perhaps this video helps,

Another European project, NeuroCare, which I discussed in my March 5, 2012 posting, is focused on creating neural implants to replace damaged and/or destroyed sensory cells in the eye or the ear.

The Human Brain Project is, despite its title, a neuromorphic engineering project (although the researchers do mention some medical applications on the project’s home page) in common with the work being done at the University of Michigan/HRL Labs mentioned in my April 19, 2012 posting (A step closer to artificial synapses courtesy of memristors). From the April 11, 2012 news item about the Human Brain Project on Science Daily,

Researchers at the EPFL [Ecole Polytechnique Fédérale de Lausanne] have discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties and its location in the brain.

The discovery, using state-of-the-art informatics tools, increases the likelihood that it will be possible to predict much of the fundamental structure and function of the brain without having to measure every aspect of it. That in turn makes the Holy Grail of modelling the brain in silico — the goal of the proposed Human Brain Project — a more realistic, less Herculean, prospect. “It is the door that opens to a world of predictive biology,” says Henry Markram, the senior author on the study, which is published this week in PLoS ONE.

Here’s a bit more about the Human Brain Project (from the home page),

Today, simulating a single neuron requires the full power of a laptop computer. But the brain has billions of neurons and simulating all them simultaneously is a huge challenge. To get round this problem, the project will develop novel techniques of multi-level simulation in which only groups of neurons that are highly active are simulated in detail. But even in this way, simulating the complete human brain will require a computer a thousand times more powerful than the most powerful machine available today. This means that some of the key players in the Human Brain Project will be specialists in supercomputing. Their task: to work with industry to provide the project with the computing power it will need at each stage of its work.

The Human Brain Project will impact many different areas of society. Brain simulation will provide new insights into the basic causes of neurological diseases such as autism, depression, Parkinson’s, and Alzheimer’s. It will give us new ways of testing drugs and understanding the way they work. It will provide a test platform for new drugs that directly target the causes of disease and that have fewer side effects than current treatments. It will allow us to design prosthetic devices to help people with disabilities. The benefits are potentially huge. As world populations grow older, more than a third will be affected by some kind of brain disease. Brain simulation provides us with a powerful new strategy to tackle the problem.

The project also promises to become a source of new Information Technologies. Unlike the computers of today, the brain has the ability to repair itself, to take decisions, to learn, and to think creatively – all while consuming no more energy than an electric light bulb. The Human Brain Project will bring these capabilities to a new generation of neuromorphic computing devices, with circuitry directly derived from the circuitry of the brain. The new devices will help us to build a new generation of genuinely intelligent robots to help us at work and in our daily lives.

The Human Brain Project builds on the work of the Blue Brain Project. Led by Henry Markram of the Ecole Polytechnique Fédérale de Lausanne (EPFL), the Blue Brain Project has already taken an essential first step towards simulation of the complete brain. Over the last six years, the project has developed a prototype facility with the tools, know-how and supercomputing technology necessary to build brain models, potentially of any species at any stage in its development. As a proof of concept, the project has successfully built the first ever, detailed model of the neocortical column, one of the brain’s basic building blocks.

The Human Brain Project is a flagship project in contention for the 1B Euro research prize that I’ve mentioned in the context of the GRAPHENE-CA flagship project (my Feb. 13, 2012 posting gives a better description of these flagship projects while mentioning both GRAPHENE-CA and another brain-computer interface project, PRESENCCIA).

Part of the reason for doing this roundup, is the opportunity to look at a number of these projects in one posting; the effect is more overwhelming than I expected.

For anyone who’s interested in Markram’s paper (open access),

Georges Khazen, Sean L. Hill, Felix Schürmann, Henry Markram. Combinatorial Expression Rules of Ion Channel Genes in Juvenile Rat (Rattus norvegicus) Neocortical Neurons. PLoS ONE, 2012; 7 (4): e34786 DOI: 10.1371/journal.pone.0034786

I do have earlier postings on brains and neuroprostheses, one of the more recent ones is this March 16, 2012 posting. Meanwhile, there are  new announcements from Northwestern University (US) and the US National Institutes of Health (National Institute of Neurological Disorders and Stroke). From the April 18, 2012 news item (originating from the National Institutes of Health) on Science Daily,

An artificial connection between the brain and muscles can restore complex hand movements in monkeys following paralysis, according to a study funded by the National Institutes of Health.

In a report in the journal Nature, researchers describe how they combined two pieces of technology to create a neuroprosthesis — a device that replaces lost or impaired nervous system function. One piece is a multi-electrode array implanted directly into the brain which serves as a brain-computer interface (BCI). The array allows researchers to detect the activity of about 100 brain cells and decipher the signals that generate arm and hand movements. The second piece is a functional electrical stimulation (FES) device that delivers electrical current to the paralyzed muscles, causing them to contract. The brain array activates the FES device directly, bypassing the spinal cord to allow intentional, brain-controlled muscle contractions and restore movement.

From the April 19, 2012 news item (originating from Northwestern University) on Science Daily,

A new Northwestern Medicine brain-machine technology delivers messages from the brain directly to the muscles — bypassing the spinal cord — to enable voluntary and complex movement of a paralyzed hand. The device could eventually be tested on, and perhaps aid, paralyzed patients.

The research was done in monkeys, whose electrical brain and muscle signals were recorded by implanted electrodes when they grasped a ball, lifted it and released it into a small tube. Those recordings allowed the researchers to develop an algorithm or “decoder” that enabled them to process the brain signals and predict the patterns of muscle activity when the monkeys wanted to move the ball.

These experiments were performed by Christian Ethier, a post-doctoral fellow, and Emily Oby, a graduate student in neuroscience, both at the Feinberg School of Medicine. The researchers gave the monkeys a local anesthetic to block nerve activity at the elbow, causing temporary, painless paralysis of the hand. With the help of the special devices in the brain and the arm — together called a neuroprosthesis — the monkeys’ brain signals were used to control tiny electric currents delivered in less than 40 milliseconds to their muscles, causing them to contract, and allowing the monkeys to pick up the ball and complete the task nearly as well as they did before.

“The monkey won’t use his hand perfectly, but there is a process of motor learning that we think is very similar to the process you go through when you learn to use a new computer mouse or a different tennis racquet. Things are different and you learn to adjust to them,” said Miller [Lee E. Miller], also a professor of physiology and of physical medicine and rehabilitation at Feinberg and a Sensory Motor Performance Program lab chief at the Rehabilitation Institute of Chicago.
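In the same spirit as the cursor example earlier, here is a toy version of the brain-to-muscle pipeline in the Northwestern/NIH study: predict a muscle activation level from one bin of neural activity, then convert it into a stimulation command for the functional electrical stimulation (FES) device. The decoder weights, thresholds and current range are all invented; the real system’s calibration and timing (under 40 ms per update, per the news release) are far more involved.

```python
# Toy brain-to-muscle pipeline: spike counts -> predicted activation -> FES current.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 100

# Pretend this decoder was fit during a calibration phase; here it is random
# simply so the sketch runs end to end.
decoder = rng.normal(scale=0.02, size=n_channels)

def fes_command(spike_counts, max_current_ma=20.0):
    """Map one 40 ms bin of spike counts to a stimulation current (mA)."""
    activation = np.clip(spike_counts @ decoder, 0.0, 1.0)  # predicted 0..1
    return activation * max_current_ma

bin_of_counts = rng.poisson(3.0, size=n_channels)
print(f"stimulate flexor at {fes_command(bin_of_counts):.1f} mA")
```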

The National Institutes of Health news item supplies a little history and background for this latest breakthrough while the Northwestern University news item offers more technical details.

You can find the researchers’ paper with this citation (assuming you can get past the paywall),

C. Ethier, E. R. Oby, M. J. Bauman, L. E. Miller. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 2012; DOI: 10.1038/nature10987

I was surprised to find the Health Research Fund of Québec listed as one of the funders but perhaps Christian Ethier has some connection with the province.

Rats with robot brains

A robotic cerebellum has been implanted into a rat’s skull. From the Oct. 4, 2011 news item on Science Daily,

With new cutting-edge technology aimed at providing amputees with robotic limbs, a Tel Aviv University researcher has successfully implanted a robotic cerebellum into the skull of a rodent with brain damage, restoring its capacity for movement.

The cerebellum is responsible for co-ordinating movement, explains Prof. Matti Mintz of TAU’s [Tel Aviv University] Department of Psychology. When wired to the brain, his “robo-cerebellum” receives, interprets, and transmits sensory information from the brain stem, facilitating communication between the brain and the body. To test this robotic interface between body and brain, the researchers taught a brain-damaged rat to blink whenever they sounded a particular tone. The rat could only perform the behavior when its robotic cerebellum was functional.

This is the third item I’ve found in the last few weeks about computer chips being implanted in brains. I found the other two items in a discussion about extreme human enhancement on Slate.com (first mentioned in my Sept. 15, 2011 posting). One of the Brad Allenby [the other two discussants are Nicholas Agar and Kyle Munkittrick] entries (posted Sept. 16, 2011) featured these two references,

Experiments that began here at Arizona State University and have been continued at Duke and elsewhere have involved monkeys learning to move mechanical arms to which they are wirelessly connected as if they were part of themselves, using them effectively even when the arms (but not the monkey) are shifted up to MIT and elsewhere. More recently, monkeys with chips implanted in their brains [2008 according to the video on the website] at Duke University have kept a robot wirelessly connected to their chip running in Japan. Similar technologies are being explored to enable paraplegics and other injured people to interact with their environments and to communicate effectively, as well. The upshot is that “the body” is becoming more than just a spatial presence; rather, it becomes a designed extended cognitive network.

The projects are almost mirror images of each other. The rat can’t move without input from its robotic cerebellum while the monkeys control the robots’ movement with their thoughts. From the Oct. 3, 2011 news release on Eureka Alert,

According to the researcher, the chip is designed to mimic natural neuronal activity. “It’s a proof of the concept that we can record information from the brain, analyze it in a way similar to the biological network, and then return it to the brain,” says Prof. Mintz, who recently presented his research at the Strategies for Engineered Negligible Senescence meeting in Cambridge, UK.

In reading these items, I can’t help but remember that plastic surgery was a means of helping soldiers with horrendous wounds and it has now become part of the cosmetics industry. Given that history, it is possible to imagine (or to assume) that these brain ‘repairs’ could be used to augment or reshape our brains to increase intelligence, heighten senses, improve motor coordination, etc. In short, to accomplish very different goals than those originally set out.