Given my interest in neuromorphic (mimicking the human brain) engineering, this work at the US Oak Ridge National Laboratory was guaranteed to catch my attention. From the Nov. 18, 2013 news item on Nanowerk,
Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.
Ferroelectric materials are known for their ability to spontaneously switch polarization when an electric field is applied. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.
“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, …
“All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard. At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.” [said Ievlev]
After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away — a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.
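The interaction rule (suppression nearby, enhancement farther out) is simple enough to caricature in a few lines of code. What follows is my own purely illustrative 1-D sketch, not the ORNL model; it implements only the suppression half of the rule, which is already enough to produce the alternating pattern the researchers describe.

```python
def write_domains(n_sites, spacing, r_suppress=1.0):
    """Toy 1-D rule (not the ORNL physics): try to write a domain at every
    site; an already-written domain suppresses any new domain closer than
    r_suppress. The real effect also enhances formation farther out,
    which this sketch omits."""
    formed = []  # positions where a domain actually formed
    for i in range(n_sites):
        x = i * spacing
        if formed and min(abs(x - p) for p in formed) < r_suppress:
            continue            # too close to an existing domain: suppressed
        formed.append(x)        # otherwise the domain forms
    return formed

# Dense writing: every second attempt is suppressed, giving the
# alternating "checkerboard" pattern Ievlev describes.
dense = write_domains(10, spacing=0.6)
# Sparse writing: every attempt succeeds, as the researchers expected.
sparse = write_domains(10, spacing=1.5)
```

The point of the sketch is only that a purely local suppression rule makes the outcome of each write depend on the history of earlier writes, which is the surprise the researchers report.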
“Chaotic behavior is generally realized in time, not in space,” he said. “An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”
Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.
“Memcomputing is basically how the human brain operates: [emphasis mine] Neurons and their connections–synapses–can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”
Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.
The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.
“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.
For anyone who’s interested in exploring this particular approach to mimicking the human brain, here’s a citation for and a link to the researchers’ paper,
Dr. Andy Thomas of Bielefeld University’s (Germany) Faculty of Physics has developed a ‘blueprint’ for an artificial brain based on memristors. From the Feb. 26, 2013, news item on phys.org,
Scientists have long been dreaming about building a computer that would work like a brain. This is because a brain is far more energy-saving than a computer, it can learn by itself, and it doesn’t need any programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. Thomas and his colleagues proved that they could do this a year ago. They constructed a memristor that is capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will be presenting his results at the beginning of March in the print edition of the Journal of Physics D: Applied Physics.
Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.
Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.
Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and research findings from biology and physics, his article is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve like) computer is to function. Such principles are that memristors, just like synapses, have to ‘note’ earlier impulses, and that neurons react to an impulse only when it passes a certain threshold.
‘… a memristor can store information more precisely than the bits on which previous computer processors have been based,’ says Thomas. Both a memristor and a bit work with electrical impulses. However, a bit does not allow any fine adjustment – it can only work with ‘on’ and ‘off’. In contrast, a memristor can raise or lower its resistance continuously. ‘This is how memristors deliver a basis for the gradual learning and forgetting of an artificial brain,’ explains Thomas.
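To make the contrast with a bit concrete, here is a minimal sketch of a charge-controlled memristor. It is loosely patterned on the often-cited linear-drift description, not on the Bielefeld device, and every number in it is illustrative.

```python
class ToyMemristor:
    """Minimal charge-controlled memristor sketch (linear-drift style):
    resistance moves continuously between r_on and r_off with the charge
    that has flowed through it, unlike a bit's two fixed states.
    All parameter values are illustrative, not measured."""

    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on, self.r_off = r_on, r_off
        self.w = 0.5  # internal state variable in [0, 1]

    def resistance(self):
        # resistance interpolates continuously between the two extremes
        return self.w * self.r_on + (1 - self.w) * self.r_off

    def apply_pulse(self, current, dt, k=1e3):
        # state drifts with the charge passed (current * dt); clamp to [0, 1]
        self.w = min(1.0, max(0.0, self.w + k * current * dt))

m = ToyMemristor()
before = m.resistance()
m.apply_pulse(current=1e-4, dt=1.0)  # a "learning" pulse lowers resistance
after = m.resistance()
```

Because the state variable moves gradually rather than flipping between two values, repeated pulses strengthen the device's conductance a little at a time, which is the analogy to gradual learning and forgetting Thomas draws.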
A nanocomponent that is capable of learning: The Bielefeld memristor built into a chip here is 600 times thinner than a human hair. [downloaded from http://ekvv.uni-bielefeld.de/blog/uninews/entry/blueprint_for_an_artificial_brain]
Here’s a citation for and link to the paper (from the university news release),
This paper is available until March 5, 2013, as IOP Science (publisher of the Journal of Physics D: Applied Physics) makes its papers freely available (with some provisos) for the first 30 days after online publication. From the Access Options page for Memristor-based neural networks,
As a service to the community, IOP is pleased to make papers in its journals freely available for 30 days from date of online publication – but only fair use of the content is permitted.
Under fair use, IOP content may only be used by individuals for the sole purpose of their own private study or research. Such individuals may access, download, store, search and print hard copies of the text. Copying should be limited to making single printed or electronic copies.
Other use is not considered fair use. In particular, use by persons other than for the purpose of their own private study or research is not fair use. Nor is altering, recompiling, reselling, systematic or programmatic copying, redistributing or republishing. Regular/systematic downloading of content or the downloading of a substantial proportion of the content is not fair use either.
Getting back to the memristor, I’ve been writing about it for some years; it was most recently mentioned here in a Feb. 7, 2013 posting, and in a Dec. 24, 2012 posting I mentioned nanoionic nanodevices also described as resembling synapses.
There’s been a lot on this blog about the memristor, which is being developed at HP Labs, the University of Michigan, and elsewhere, and significantly less about other approaches to creating nanodevices with neuromorphic properties by researchers in Japan and in the US. The Dec. 20, 2012 news item on ScienceDaily notes,
Researchers in Japan and the US propose a nanoionic device with a range of neuromorphic and electrical multifunctions that may allow the fabrication of on-demand configurable circuits, analog memories and digital-neural fused networks in one device architecture.
… Now Rui Yang, Kazuya Terabe and colleagues at the National Institute for Materials Science in Japan and the University of California, Los Angeles, in the US have developed two- and three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions.
The researchers draw similarities between the device properties (volatile and non-volatile states and the current fading process following positive voltage pulses) and models for neural behaviour, that is, short- and long-term memory and forgetting processes. They explain the behaviour as the result of oxygen ions migrating within the device in response to the voltage sweeps. Accumulation of the oxygen ions at the electrode leads to Schottky-like potential barriers and the resulting changes in resistance and rectifying characteristics. The stable bipolar switching behaviour at the Pt/WO3-x interface is attributed to the formation of the electric conductive filament and the oxygen absorbability of the Pt electrode.
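The short-term/long-term memory analogy is often summarized with a simple exponential decay of the volatile state. The sketch below is a generic illustration of that idea, not the authors’ model, and the retention times in it are made up.

```python
import math

def retention(initial, tau, t):
    """Exponential fading of a volatile resistance state after a pulse:
    a generic stand-in for the 'forgetting' process. In the analogy,
    the retention time tau grows as pulses are repeated, turning a
    short-term (volatile) state into a long-term (non-volatile) one.
    All values are illustrative."""
    return initial * math.exp(-t / tau)

short_term = retention(1.0, tau=2.0, t=10.0)   # volatile state: fades fast
long_term = retention(1.0, tau=1e6, t=10.0)    # non-volatile: persists
```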
As the researchers conclude, “These capabilities open a new avenue for circuits, analog memories, and artificially fused digital neural networks using on-demand programming by input pulse polarity, magnitude, and repetition history.”
For those who wish to delve more deeply, here’s the citation (from the ScienceDaily news item),
The news release does not state explicitly why this would be considered an on-demand device. The article is behind a paywall.
There was a recent attempt to mimic brain processing not based in nanoelectronics but on mimicking brain activity by creating virtual neurons. A Canadian team at the University of Waterloo led by Chris Eliasmith made a sensation with SPAUN (Semantic Pointer Architecture Unified Network) in late Nov. 2012 (mentioned in my Nov. 29, 2012 posting).
I hinted about some related work at the University of Waterloo earlier this week in my Nov. 26, 2012 posting (Existential risk) about a proposed centre at the University of Cambridge which would be tasked with examining possible risks associated with ‘ultra intelligent machines’. Today, Science (magazine) published an article [behind a paywall] about SPAUN (Semantic Pointer Architecture Unified Network) and its ability to solve simple arithmetic and perform other tasks.
Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5, in legible if messy writing.
This is an unremarkable feat for a human, but Spaun is actually a simulated brain. It contains 2.5 million virtual neurons — many fewer than the 86 billion in the average human head, but enough to recognize lists of numbers, do simple arithmetic and solve reasoning problems.
… The model captures biological details of each neuron, including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate. Spaun uses this network of neurons to process visual images in order to control an arm that draws Spaun’s answers to perceptual, cognitive and motor tasks. …
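Spaun’s neurons are leaky integrate-and-fire (LIF) units, the standard model in Eliasmith’s framework. A bare-bones LIF sketch (parameter values are mine, purely illustrative, not Spaun’s) shows the threshold behaviour mentioned above: a neuron only fires once its input drives the membrane voltage past a threshold.

```python
def lif_spikes(input_current, dt=0.001, tau=0.02, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest while integrating input; crossing v_thresh emits a spike and
    resets the voltage. Parameters are illustrative, not Spaun's."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (i_in - v) / tau      # leaky integration of the input
        if v >= v_thresh:
            spikes.append(t * dt)       # record the spike time
            v = 0.0                     # reset after spiking
    return spikes

strong = lif_spikes([2.0] * 1000)   # 1 s of supra-threshold drive: spikes
weak = lif_spikes([0.5] * 1000)     # sub-threshold drive: stays silent
```

The weak input settles below threshold and never produces a spike, which is the threshold principle Thomas also lists as essential for neuromorphic hardware.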
“This is the first model that begins to get at how our brains can perform a wide variety of tasks in a flexible manner—how the brain coordinates the flow of information between different areas to exhibit complex behaviour,” said Professor Chris Eliasmith, Director of the Centre for Theoretical Neuroscience at Waterloo. He is Canada Research Chair in Theoretical Neuroscience, and professor in Waterloo’s Department of Philosophy and Department of Systems Design Engineering.
Unlike other large brain models, Spaun can perform several tasks. Researchers can show patterns of digits and letters to the model’s eye, which it then processes, causing it to write its responses to any of eight tasks. And, just like the human brain, it can shift from task to task, recognizing an object one moment and memorizing a list of numbers the next. [emphasis mine] Because of its biological underpinnings, Spaun can also be used to understand how changes to the brain affect changes to behaviour.
“In related work, we have shown how the loss of neurons with aging leads to decreased performance on cognitive tests,” said Eliasmith. “More generally, we can test our hypotheses about how the brain works, resulting in a better understanding of the effects of drugs or damage to the brain.”
In addition, the model provides new insights into the sorts of algorithms that might be useful for improving machine intelligence. [emphasis mine] For instance, it suggests new methods for controlling the flow of information through a large system attempting to solve challenging cognitive tasks.
Laura Sanders’ Nov. 29, 2012 article for ScienceNews suggests that there is some controversy as to whether or not SPAUN does resemble a human brain,
… Henry Markram, who leads a different project to reconstruct the human brain called the Blue Brain, questions whether Spaun really captures human brain behavior. Because Spaun’s design ignores some important neural properties, it’s unlikely to reveal anything about the brain’s mechanics, says Markram, of the Swiss Federal Institute of Technology in Lausanne. “It is not a brain model.”
Personally, I have a little difficulty seeing lines of code as ever being able to truly simulate brain activity. I think the notion of moving to something simpler (using fewer neurons as the Eliasmith team does) is a move in the right direction but I’m still more interested in devices such as the memristor and the electrochemical atomic switch and their potential.
Anne Dijkstra’s presentation (at the 2012 S.NET [Society for the Study of Nanoscience and Emerging Technologies] conference on “Science Cafés and scientific citizens. The Nanotrail project as a case” provided a contrast to the local (Vancouver, Canada) science café scene I wasn’t expecting. The Dutch science cafés Dijkstra described were formal both in tone and organization. She featured five science cafés focussed on discussions of nanotechnology. The most striking image in Dijkstra’s presentation was of someone taking notes at one of the meetings. By contrast, the Vancouver café scientifique get togethers take place in a local bar/pub (The Railway Club) and are organized by members of the local science community. (There are some life science café scientifique Vancouver meetings which may be more formal as they take place at the University of British Columbia.)
I was quite fascinated to hear about the Dutch children’s science cafés that have been organized by the parents featuring presentations by children to their peers. It’s a grassroots effort/community-based initiative.
The next and final presentation set was when I presented my work on ‘Zombies, brains, collapsing boundaries, and entanglements’. (People at the conference kept laughing when I told them when my presentation was scheduled.) Briefly, my area of interest is in neuromorphic engineering (artificial brains), memristors and other devices which can mimic synaptic plasticity, pop culture (zombies), and something I’ve termed ‘cognitive entanglement’. My basic question is: what does it mean to be human at a time when notions about what constitutes life and nonlife are being obliterated? In addition, although I didn’t do this deliberately, this passage from my Oct. 31, 2012 posting (Part 1 of this series) touches on a related issue,
His [Chris Groves' plenary] quote from Hannah Arendt, “What we make remakes us” brought home the notion that there is a feedback loop and that science and invention are not unidirectional pursuits, i.e., we do not create the world and stand apart from it; the world we create, in turn, recreates us.
I have more about this ‘conversation’ regarding artificial brains taking place in business, pop culture, philosophy, advertising, science, engineering, and elsewhere but I think I need to write up a paper. Once I do that I’ll post it. As for the response from the conference goers, there were no questions but there were a few comments (I’m not the only one interested in zombies and the living dead) and a suggestion to me for further reading (Andrew Pickering, The cybernetic brain: sketches of another future).
You don’t have to be a Jedi to make things move with your mind.
Granted, we may not be able to lift a spaceship out of a swamp like Yoda does in The Empire Strikes Back, but it is possible to steer a model car, drive a wheelchair and control a robotic exoskeleton with just your thoughts.
We are standing in a testing room at IBM’s Emerging Technologies lab in Winchester, England.
On my head is a strange headset that looks like a black plastic squid. Its 14 tendrils, each capped with a moistened electrode, are supposed to detect specific brain signals.
In front of us is a computer screen, displaying an image of a floating cube.
As I think about pushing it, the cube responds by drifting into the distance.
Moskvitch goes on to discuss a number of projects that translate thought into movement via various pieces of equipment before she mentions a project at Brown University (US) where researchers are implanting computer chips into brains,
Headsets and helmets offer cheap, easy-to-use ways of tapping into the mind. But there are other,
Imagine some kind of a wireless computer device in your head that you’ll use for mind control – what if people hacked into that?
At Brown Institute for Brain Science in the US, scientists are busy inserting chips right into the human brain.
The technology, dubbed BrainGate, sends mental commands directly to a PC.
Subjects still have to be physically “plugged” into a computer via cables coming out of their heads, in a setup reminiscent of the film The Matrix. However, the team is now working on miniaturising the chips and making them wireless.
The purpose of the first phase of the pilot clinical study of the BrainGate2 Neural Interface System is to obtain preliminary device safety information and to demonstrate the feasibility of people with tetraplegia using the System to control a computer cursor and other assistive devices with their thoughts. Another goal of the study is to determine the participants’ ability to operate communication software, such as e-mail, simply by imagining the movement of their own hand. The study is invasive and requires surgery.
Individuals with limited or no ability to use both hands due to cervical spinal cord injury, brainstem stroke, muscular dystrophy, or amyotrophic lateral sclerosis (ALS) or other motor neuron diseases are being recruited into a clinical study at Massachusetts General Hospital (MGH) and Stanford University Medical Center. Clinical trial participants must live within a three-hour drive of Boston, MA or Palo Alto, CA. Clinical trial sites at other locations may be opened in the future. The study requires a commitment of 13 months.
They have been recruiting since at least November 2011, from the Nov. 14, 2011 news item by Tanya Lewis on MedicalXpress,
Stanford University researchers are enrolling participants in a pioneering study investigating the feasibility of people with paralysis using a technology that interfaces directly with the brain to control computer cursors, robotic arms and other assistive devices.
The pilot clinical trial, known as BrainGate2, is based on technology developed at Brown University and is led by researchers at Massachusetts General Hospital, Brown and the Providence Veterans Affairs Medical Center. The researchers have now invited the Stanford team to establish the only trial site outside of New England.
Under development since 2002, BrainGate is a combination of hardware and software that directly senses electrical signals in the brain that control movement. The device — a baby-aspirin-sized array of electrodes — is implanted in the cerebral cortex (the outer layer of the brain) and records its signals; computer algorithms then translate the signals into digital instructions that may allow people with paralysis to control external devices.
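The “computer algorithms” step can be illustrated with the simplest possible decoder: a least-squares linear map from the array’s firing rates to a 2-D cursor velocity. Everything below is synthetic and illustrative (the channel count echoes the roughly 100-electrode array; the data are random), and real BrainGate decoders are considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: firing rates on 100 channels recorded
# while the intended 2-D cursor velocity is known. The true mapping and
# all data here are synthetic, generated only to exercise the decoder.
n_samples, n_channels = 500, 100
true_weights = rng.normal(size=(n_channels, 2))
rates = rng.normal(size=(n_samples, n_channels))
velocity = rates @ true_weights + 0.1 * rng.normal(size=(n_samples, 2))

# Fit the decoder: least-squares map from firing rates to cursor velocity
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new sample of neural activity into a cursor command
new_rates = rng.normal(size=(1, n_channels))
cursor_velocity = new_rates @ weights
```

Even this naive decoder recovers the synthetic mapping closely from a few hundred calibration samples, which is why linear methods were a natural starting point for translating neural signals into device commands.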
Confusingly, there seem to be two BrainGate organizations. One appears to be a research entity where a number of institutions collaborate and the other is some sort of jointly held company. From the About Us webpage of the BrainGate research entity,
In the late 1990s, the initial translation of fundamental neuroengineering research from “bench to bedside” – that is, to pilot clinical testing – would require a level of financial commitment ($10s of millions) available only from private sources. In 2002, a Brown University spin-off/startup medical device company, Cyberkinetics, Inc. (later, Cyberkinetics Neurotechnology Systems, Inc.) was formed to collect the regulatory permissions and financial resources required to launch pilot clinical trials of a first-generation neural interface system. The company’s efforts and substantial initial capital investment led to the translation of the preclinical research at Brown University to an initial human device, the BrainGate Neural Interface System [Caution: Investigational Device. Limited by Federal Law to Investigational Use]. The BrainGate system uses a brain-implantable sensor to detect neural signals that are then decoded to provide control signals for assistive technologies. In 2004, Cyberkinetics received from the U.S. Food and Drug Administration (FDA) the first of two Investigational Device Exemptions (IDEs) to perform this research. Hospitals in Rhode Island, Massachusetts, and Illinois were established as clinical sites for the pilot clinical trial run by Cyberkinetics. Four trial participants with tetraplegia (decreased ability to use the arms and legs) were enrolled in the study and further helped to develop the BrainGate device. Initial results from these trials have been published or presented, with additional publications in preparation.
While scientific progress towards the creation of this promising technology has been steady and encouraging, Cyberkinetics’ financial sponsorship of the BrainGate research – without which the research could not have been started – began to wane. In 2007, in response to business pressures and changes in the capital markets, Cyberkinetics turned its focus to other medical devices. Although Cyberkinetics’ own funds became unavailable for BrainGate research, the research continued through grants and subcontracts from federal sources. By early 2008 it became clear that Cyberkinetics would eventually need to withdraw completely from directing the pilot clinical trials of the BrainGate device. Also in 2008, Cyberkinetics spun off its device manufacturing to new ownership, BlackRock Microsystems, Inc., which now produces and is further developing research products as well as clinically-validated (510(k)-cleared) implantable neural recording devices.
Beginning in mid 2008, with the agreement of Cyberkinetics, a new, fully academically-based IDE application (for the “BrainGate2 Neural Interface System”) was developed to continue this important research. In May 2009, the FDA provided a new IDE for the BrainGate2 pilot clinical trial. [Caution: Investigational Device. Limited by Federal Law to Investigational Use.] The BrainGate2 pilot clinical trial is directed by faculty in the Department of Neurology at Massachusetts General Hospital, a teaching affiliate of Harvard Medical School; the research is performed in close scientific collaboration with Brown University’s Department of Neuroscience, School of Engineering, and Brown Institute for Brain Sciences, and the Rehabilitation Research and Development Service of the U.S. Department of Veterans Affairs at the Providence VA Medical Center. Additionally, in late 2011, Stanford University joined the BrainGate Research Team as a clinical site and is currently enrolling participants in the clinical trial. This interdisciplinary research team includes scientific partners from the Functional Electrical Stimulation Center at Case Western Reserve University and the Cleveland VA Medical Center. As was true of the decades of fundamental, preclinical research that provided the basis for the recent clinical studies, funding for BrainGate research is now entirely from federal and philanthropic sources.
The BrainGate Research Team at Brown University, Massachusetts General Hospital, Stanford University, and Providence VA Medical Center comprises physicians, scientists, and engineers working together to advance understanding of human brain function and to develop neurotechnologies for people with neurologic disease, injury, or limb loss.
The BrainGate™ Co. is a privately-held firm focused on the advancement of the BrainGate™ Neural Interface System. The Company owns the Intellectual property of the BrainGate™ system as well as new technology being developed by the BrainGate company. In addition, the Company also owns the intellectual property of Cyberkinetics which it purchased in April 2009.
Meanwhile, in Europe there are two projects, BrainAble and the Human Brain Project. The BrainAble project is similar to BrainGate in that it is intended for people with injuries but they seem to be concentrating on a helmet or cap for thought transmission (as per Moskvitch’s experience at the beginning of this posting). From the Feb. 28, 2012 news item on Science Daily,
In the 2009 film Surrogates, humans live vicariously through robots while safely remaining in their own homes. That sci-fi future is still a long way off, but recent advances in technology, supported by EU funding, are bringing this technology a step closer to reality in order to give disabled people more autonomy and independence than ever before.
“Our aim is to give people with motor disabilities as much autonomy as technology currently allows and in turn greatly improve their quality of life,” says Felip Miralles at Barcelona Digital Technology Centre, a Spanish ICT research centre.
Mr. Miralles is coordinating the BrainAble* project (http://www.brainable.org/), a three-year initiative supported by EUR 2.3 million in funding from the European Commission to develop and integrate a range of different technologies, services and applications into a commercial system for people with motor disabilities.
In terms of HCI [human-computer interface], BrainAble improves both direct and indirect interaction between the user and his smart home. Direct control is upgraded by creating tools that allow controlling inner and outer environments using a “hybrid” Brain Computer Interface (BNCI) system able to take into account other sources of information such as measures of boredom, confusion, frustration by means of the so-called physiological and affective sensors.
Furthermore, interaction is enhanced by means of Ambient Intelligence (AmI) focused on creating proactive and context-aware environments by adding intelligence to the user’s surroundings. AmI’s main purpose is to aid and facilitate the user’s living conditions by creating proactive environments to provide assistance.
Human-Computer Interfaces are complemented by an intelligent Virtual Reality-based user interface with avatars and scenarios that will help the disabled move around freely, and interact with any sort of devices. Even more, the VR will provide self-expression assets using music, pictures and text, communicate online and offline with other people, play games to counteract cognitive decline, and get trained in new functionalities and tasks.
Perhaps this video helps,
Another European project, NeuroCare, which I discussed in my March 5, 2012 posting, is focused on creating neural implants to replace damaged and/or destroyed sensory cells in the eye or the ear.
The Human Brain Project is, despite its title, a neuromorphic engineering project (although the researchers do mention some medical applications on the project’s home page) in common with the work being done at the University of Michigan/HRL Labs mentioned in my April 19, 2012 posting (A step closer to artificial synapses courtesy of memristors) about that project. From the April 11, 2012 news item about the Human Brain Project on Science Daily,
Researchers at the EPFL [Ecole Polytechnique Fédérale de Lausanne] have discovered rules that relate the genes that a neuron switches on and off, to the shape of that neuron, its electrical properties and its location in the brain.
The discovery, using state-of-the-art informatics tools, increases the likelihood that it will be possible to predict much of the fundamental structure and function of the brain without having to measure every aspect of it. That in turn makes the Holy Grail of modelling the brain in silico — the goal of the proposed Human Brain Project — a more realistic, less Herculean, prospect. “It is the door that opens to a world of predictive biology,” says Henry Markram, the senior author on the study, which is published this week in PLoS ONE.
Here’s a bit more about the Human Brain Project (from the home page),
Today, simulating a single neuron requires the full power of a laptop computer. But the brain has billions of neurons and simulating all of them simultaneously is a huge challenge. To get round this problem, the project will develop novel techniques of multi-level simulation in which only groups of neurons that are highly active are simulated in detail. But even in this way, simulating the complete human brain will require a computer a thousand times more powerful than the most powerful machine available today. This means that some of the key players in the Human Brain Project will be specialists in supercomputing. Their task: to work with industry to provide the project with the computing power it will need at each stage of its work.
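The multi-level idea, spending detailed simulation only where activity is high, can be caricatured with a simple cost model. The sketch below is my own illustration; the costs and threshold are arbitrary numbers, not the project’s.

```python
def simulation_cost(group_activity, threshold=0.5,
                    detailed_cost=1000, coarse_cost=1):
    """Toy illustration of multi-level simulation: neuron groups whose
    activity exceeds a threshold get the expensive detailed model,
    quiet groups get a cheap coarse model. All numbers are arbitrary."""
    cost = 0
    for activity in group_activity:
        cost += detailed_cost if activity > threshold else coarse_cost
    return cost

activity = [0.9, 0.1, 0.05, 0.7, 0.2]   # two highly active groups of five
multilevel = simulation_cost(activity)   # 2 detailed + 3 coarse models
full_detail = len(activity) * 1000       # simulating everything in detail
```

With only a minority of groups active at any moment, the multi-level cost stays a small fraction of the full-detail cost, which is the saving the project is counting on.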
The Human Brain Project will impact many different areas of society. Brain simulation will provide new insights into the basic causes of neurological diseases such as autism, depression, Parkinson’s, and Alzheimer’s. It will give us new ways of testing drugs and understanding the way they work. It will provide a test platform for new drugs that directly target the causes of disease and that have fewer side effects than current treatments. It will allow us to design prosthetic devices to help people with disabilities. The benefits are potentially huge. As world populations grow older, more than a third will be affected by some kind of brain disease. Brain simulation provides us with a powerful new strategy to tackle the problem.
The project also promises to become a source of new Information Technologies. Unlike the computers of today, the brain has the ability to repair itself, to take decisions, to learn, and to think creatively – all while consuming no more energy than an electric light bulb. The Human Brain Project will bring these capabilities to a new generation of neuromorphic computing devices, with circuitry directly derived from the circuitry of the brain. The new devices will help us to build a new generation of genuinely intelligent robots to help us at work and in our daily lives.
The Human Brain Project builds on the work of the Blue Brain Project. Led by Henry Markram of the Ecole Polytechnique Fédérale de Lausanne (EPFL), the Blue Brain Project has already taken an essential first step towards simulation of the complete brain. Over the last six years, the project has developed a prototype facility with the tools, know-how and supercomputing technology necessary to build brain models, potentially of any species at any stage in its development. As a proof of concept, the project has successfully built the first-ever detailed model of the neocortical column, one of the brain’s basic building blocks.
The Human Brain Project is a flagship project in contention for the 1B Euro research prize that I’ve mentioned in the context of the GRAPHENE-CA flagship project (my Feb. 13, 2012 posting gives a better description of these flagship projects while mentioning both GRAPHENE-CA and another brain-computer interface project, PRESENCCIA).
Part of the reason for doing this roundup is the opportunity to look at a number of these projects in one posting; the effect is more overwhelming than I expected.
For anyone who’s interested in Markram’s paper (open access),
Georges Khazen, Sean L. Hill, Felix Schürmann, Henry Markram. Combinatorial Expression Rules of Ion Channel Genes in Juvenile Rat (Rattus norvegicus) Neocortical Neurons. PLoS ONE, 2012; 7 (4): e34786 DOI: 10.1371/journal.pone.0034786
I do have earlier postings on brains and neuroprostheses; one of the more recent is this March 16, 2012 posting. Meanwhile, there are new announcements from Northwestern University (US) and the US National Institutes of Health (National Institute of Neurological Disorders and Stroke). From the April 18, 2012 news item (originating from the National Institutes of Health) on Science Daily,
An artificial connection between the brain and muscles can restore complex hand movements in monkeys following paralysis, according to a study funded by the National Institutes of Health.
In a report in the journal Nature, researchers describe how they combined two pieces of technology to create a neuroprosthesis — a device that replaces lost or impaired nervous system function. One piece is a multi-electrode array implanted directly into the brain which serves as a brain-computer interface (BCI). The array allows researchers to detect the activity of about 100 brain cells and decipher the signals that generate arm and hand movements. The second piece is a functional electrical stimulation (FES) device that delivers electrical current to the paralyzed muscles, causing them to contract. The brain array activates the FES device directly, bypassing the spinal cord to allow intentional, brain-controlled muscle contractions and restore movement.
A new Northwestern Medicine brain-machine technology delivers messages from the brain directly to the muscles — bypassing the spinal cord — to enable voluntary and complex movement of a paralyzed hand. The device could eventually be tested on, and perhaps aid, paralyzed patients.
The research was done in monkeys, whose electrical brain and muscle signals were recorded by implanted electrodes when they grasped a ball, lifted it and released it into a small tube. Those recordings allowed the researchers to develop an algorithm or “decoder” that enabled them to process the brain signals and predict the patterns of muscle activity when the monkeys wanted to move the ball.
These experiments were performed by Christian Ethier, a post-doctoral fellow, and Emily Oby, a graduate student in neuroscience, both at the Feinberg School of Medicine. The researchers gave the monkeys a local anesthetic to block nerve activity at the elbow, causing temporary, painless paralysis of the hand. With the help of the special devices in the brain and the arm — together called a neuroprosthesis — the monkeys’ brain signals were used to control tiny electric currents delivered in less than 40 milliseconds to their muscles, causing them to contract, and allowing the monkeys to pick up the ball and complete the task nearly as well as they did before.
“The monkey won’t use his hand perfectly, but there is a process of motor learning that we think is very similar to the process you go through when you learn to use a new computer mouse or a different tennis racquet. Things are different and you learn to adjust to them,” said Miller [Lee E. Miller], also a professor of physiology and of physical medicine and rehabilitation at Feinberg and a Sensory Motor Performance Program lab chief at the Rehabilitation Institute of Chicago.
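For readers curious about what a ‘decoder’ amounts to in code, here is a hypothetical minimal sketch: a linear least-squares map from the binned firing rates of about 100 neurons to the activity of a few muscles, trained on synthetic data. The actual decoder in the Nature study is more sophisticated; every name and number below is an illustrative assumption of mine:

```python
import numpy as np

# Toy "decoder" in the spirit of the study described above: learn a
# linear mapping from the firing rates of ~100 neurons to the activity
# of a few muscles, then use it to predict muscle activity from new
# brain signals. All data here are synthetic.
rng = np.random.default_rng(0)

n_samples, n_neurons, n_muscles = 500, 100, 4
true_w = rng.normal(size=(n_neurons, n_muscles))   # hidden "ground truth"

# Simulated training trials: spike counts per bin, plus noisy muscle signals
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
emg = rates @ true_w + rng.normal(scale=0.5, size=(n_samples, n_muscles))

# Fit decoder weights by least squares on the "training" trials ...
w_hat, *_ = np.linalg.lstsq(rates, emg, rcond=None)

# ... then predict muscle activity from fresh brain signals. In the real
# system, a prediction like this would drive the FES stimulator.
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
predicted_emg = new_rates @ w_hat
print(predicted_emg.shape)
```

The point is only that the decoder is, at heart, a learned function from neural activity to muscle commands; the engineering challenge lies in doing this reliably and within the 40-millisecond window the researchers describe.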
The National Institutes of Health news item supplies a little history and background for this latest breakthrough while the Northwestern University news item offers more technical details.
You can find the researchers’ paper with this citation (assuming you can get past the paywall),
C. Ethier, E. R. Oby, M. J. Bauman, L. E. Miller. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 2012; DOI: 10.1038/nature10987
I was surprised to find the Health Research Fund of Québec listed as one of the funders but perhaps Christian Ethier has some connection with the province.
In a step toward computers that mimic the parallel processing of complex biological brains, researchers from HRL Laboratories, LLC, and the University of Michigan have built a type of artificial synapse.
They have demonstrated the first functioning “memristor” array stacked on a conventional complementary metal-oxide semiconductor (CMOS) circuit. Memristors combine the functions of memory and logic like the synapses of biological brains.
The researchers developed a vertically integrated hybrid electronic circuit by combining the novel memristor developed at the University of Michigan with wafer scale heterogeneous process integration methodology and CMOS read/write circuitry developed at HRL. “This hybrid circuit is a critical advance in developing intelligent machines,” said HRL SyNAPSE program manager and principal investigator Narayan Srinivasa. “We have created a multi-bit fully addressable memory storage capability with a density of up to 30 Gbits/cm², which is unprecedented in microelectronics.”
Industry is seeking hybrid systems such as this one, the researchers say. Dubbed “R-RAM,” they could shatter the looming limits of Moore’s Law, which predicts a doubling of transistor density and therefore chip speed every two years.
“We’re reaching the fundamental limits of transistor scaling. This hybrid integration opens many opportunities for greater memory capacity and higher performance of conventional computers. It has great potential in future non-volatile memory that would improve upon today’s Flash, as well as reconfigurable circuits,” said Wei Lu, an associate professor at the U-M Department of Electrical Engineering and Computer Science whose group developed the memristor array.
This work is being done as part of a DARPA (Defense Advanced Research Projects Agency) project titled, SyNAPSE, from the news item,
The work is part of the Defense Advanced Research Projects Agency’s (DARPA) SyNAPSE Program, or Systems of Neuromorphic Adaptive Plastic Scalable Electronics. Since 2008, the HRL-led SyNAPSE team has been developing a new paradigm for “neuromorphic computing” modeled after biology.
While I haven’t come across HRL Laboratories before, I have mentioned Dr. Wei Lu and his work with memristors in my April 15, 2010 posting. As for HRL Laboratories, they were founded in 1948 by Howard Hughes as the Hughes Research Laboratories (from the company’s History page),
HRL Laboratories continues the legacy of technology advances that began at Hughes Research Laboratories, established by Howard Hughes in 1948. HRL Laboratories, LLC, was organized as a limited liability company (LLC) on December 17, 1997 and received its first patent on September 12, 2000. With more than 750 patents to our name since then and counting, we’re proud of our talented group of researchers, who continue the long tradition of technical excellence in innovation.
One of Hughes’ most notable achievements came in 1960 with the demonstration of the world’s first laser which used a synthetic ruby crystal. The ruby laser became the basis of a multibillion-dollar laser range finder business for Hughes. In 2010 during the 50th anniversary of the laser, HRL was designated a Physics Historic Site by the American Physical Society and was selected an IEEE Milestones location as the facility where the first working laser was demonstrated.
Part of HRL’s Information and Systems Sciences Laboratory, the Center for Neural and Emergent Systems (CNES) is dedicated to exploring and developing an innovative neural & emergent computing paradigm for creating intelligent, efficient machines that can interact with, react and adapt to, evolve, and learn from their environments.
CNES was founded on the principle that all intelligent systems are open thermodynamic systems capable of self-organization, whereby structural order emerges from disorder as a natural consequence of exchanging energy, matter or entropy with their environments.
These systems exist in a state far from equilibrium where the evolution of complex behaviors cannot be readily predicted from purely local interactions between the system’s parts. Rather, the emergent order and structure of the system arises from manifold interactions of its parts. These emergent systems contain amplifying-damping loops as a result of which very small perturbations can cause large effects or no effect at all. They become adaptive when the component relationships within the system become tuned for a particular set of tasks.
CNES promotes the idea that the neural system in the brain is an example of such a complex adaptive system. A key goal of CNES is to explain how computations in the brain can help explain the realization of complex behaviors such as perception, planning, decision making and navigation due to brain-body-environment interactions.
This has reminded me of HP Labs and their work with memristors (I have many postings, too many to list here); I understand that they will be rolling out ‘memristor-based’ products in 2013. From the Oct. 8, 2011 article by Peter Clarke for EE Times,
The ‘memristor’ two-terminal non-volatile memory technology, in development at Hewlett Packard Co. since 2008, is on track to be in the market and taking share from flash memory within 18 months, according to Stan Williams, senior fellow at HP Labs.
“We have a lot of big plans for it and we’re working with Hynix Semiconductor to launch a replacement for flash in the summer of 2013 and also to address the solid-state drive market,” Williams told the audience of the International Electronics Forum, being held here [Seville, Spain].
ETA June 11, 2012: New artificial synapse development is mentioned in George Dvorsky’s June 11, 2012 posting (on the IO9.com website) about a nanoscale electrochemical switch developed by researchers in Japan.
After a few fits and starts, the video of my March 15, 2012 presentation to the Canadian Academy of Independent Scholars at Simon Fraser University has been uploaded to Vimeo. Unfortunately the original recording was fuzzy (camera issues) so we (camera operator, director, and editor, Sama Shodjai [firstname.lastname@example.org]) and I rerecorded the presentation and this second version is the one we’ve uploaded.
I’ve come across a few errors; at one point, I refer to Buckminster Fuller as Buckminster Fullerene, and I state that the opening image visualizes a neuron from someone with Parkinson’s disease when I should have said Huntington’s disease. Perhaps you’ll come across more; please do let me know. If this should become a viral sensation (no doubt feeding a pent-up demand for grey-haired women talking about memristors and brains), it’s important that corrections be added.
Finally, a big thank you to Mark Dwor who provides my introduction at the beginning, the Canadian Academy of Independent Scholars whose grant made the video possible, and Simon Fraser University.
ETA March 29, 2012: This is an updated version of the presentation I was hoping to give at ISEA (International Symposium on Electronic Arts) 2011 in Istanbul. Sadly, I was never able to raise all of the funds I needed for that venture. The funds I raised separately from the CAIS grant are being held until I can find another suitable opportunity to present my work.
How did I miss it? Last Thursday night (Oct. 13, 2011) the Canadian Broadcasting Corporation (CBC) televised the first of a three-part series on nanotechnology on its Nature of Things science programme. Luckily, they’ve already posted the episode so I (and you too) can catch up. Titled, The Nano Revolution, the first episode is subtitled, Welcome to Nano City, and focuses on three main topics: buildings, computers, and security.
Their ‘go to’ expert is the University of California at Los Angeles’s Dr. James Gimzewski (pronounced jemjeski), who has what I’m guessing is a Scottish accent. He worked alongside Gerd Binnig and Heinrich Rohrer (mentioned in my May 26, 2011 posting) to develop the scanning tunneling microscope (STM), which allowed scientists to work at the nanoscale in a fashion that had not been possible before. Gimzewski’s accomplishments are many (from his About page),
Dr. Gimzewski pioneered research on mechanical and electrical contacts with single atoms and molecules using scanning tunneling microscopy (STM) and was one of the first persons to image molecules with STM. His accomplishments include the first STM-based fabrication of molecular suprastructures at room temperature using mechanical forces to push molecules across surfaces, the discovery of single molecule rotors and the development of new micromechanical sensors based on nanotechnology, which explore ultimate limits of sensitivity and measurement. This approach was recently used to convert biochemical recognition into nanomechanics. His current interests are in the nanomechanics of cells and bacteria where he collaborates with the UCLA Medical and Dental Schools. He is involved in projects that range from the generation of X-rays, ions and nuclear fusion using pyroelectric crystals to direct deposition of carbon nanotubes and single molecule DNA profiling. Dr. Gimzewski is also involved in numerous art-science collaborative projects that have been exhibited in museums throughout the world.
Getting back to the programme, there was no Canadian content in this first episode, unless you count David Suzuki, who did not appear on camera but read the script for the voice-over narration. I was glad to see a lot more information about research in Japan and Korea than I’m usually able to dig up, and I thought the science was well presented.
I had the distinct impression that the segments were repurposed materials, i.e., the animations and interviews had originally been recorded for another purpose and reused by the CBC for this series.
I imagine that for storytelling purposes they felt it necessary to focus on specific experts and teams of researchers. For example, the segment on an atomic switch described its characteristics in a manner that reminded me of memristors, especially when Gimzewski mentioned memory and learning. The discussion then turned to neuromorphic engineering (creating artificial brains) and how atomic switches are a key area of interest, which likely left most viewers with the impression that the featured team is working alone and/or that only atomic switches are being studied with regard to neuromorphic engineering. Surely, they could have mentioned other teams and other approaches in passing. Also, I’m not sure why they included a segment on computers in a programme about cities and nanotechnology.
In the last few minutes of the first episode they mention the UK’s report from the Royal Society and some of the concerns being raised about nanotechnology. Issued in 2004, the report was written after Prince Charles mentioned the ‘grey goo’ scenario much discussed at that time. K. Eric Drexler had written a book, Engines of Creation, in 1986 intended to popularize nanotechnology. In writing the book, Drexler also included an idea he had about nanoscale self-assemblers (popularly termed nanobots) running amuck, i.e. snatching atoms and molecules to self-assemble endlessly, thereby reducing the world to ‘grey goo’. If you listen carefully, you’ll hear one of the report authors, Sir Martin Rees, indirectly refer to and dismiss that notion.
All three segments featured futuristic sequences imagining our nano-enabled world in 2041. It seems like a remarkably lonely world in which women are concerned with cleanliness and dirt (why a space elevator is better than space travel); prefer to stay in the house where they can shop to their hearts’ content, raise virtual children with no fuss and no muss, and get a health check-up without ever seeing a human being in a segment where virtual reality is seamlessly integrated into all aspects of our lives; and are worried about surveillance in a world where RFID (radio frequency identification) tags which track and monitor our every move are ubiquitous.
Remarkably, the first two ‘futuristic’ segments are presented as part of a happy future. One other note: all of the scientists presented in the first segment are men, while the lead characters in the animations are a blonde woman and a dark-haired child. I can hardly wait to see what’s coming up next on Thursday, Oct. 20, 2011, in the episode subtitled, More than Human.