Tag Archives: Brain Machine Interface

Artificial nose for intelligent olfactory substitution

The signal transmitted into the mouse brain can participate in mouse perception and act as a brain stimulator. (Image credit: Prof. ZHAN Yang)

I’m fascinated by the image. Are they suggesting putting implants into people’s brains that can sense dangerous gaseous molecules and convert that into data which can be read on a smartphone? And, are they harvesting bioenergy to supply energy to the implant?

A July 29, 2019 news item on Azonano was not as helpful in answering my questions as I’d hoped (Note: A link has been removed),

An artificial olfactory system based on a self-powered nano-generator has been built by Prof. ZHAN Yang’s team at the Shenzhen Institutes of Advanced Technology (SIAT) of the Chinese Academy of Sciences [CAS], together with colleagues at the University of Electronic Science and Technology of China.

The device, which can detect a variety of odor molecules and identify different odors, has been demonstrated in vivo in animal models. The research titled “An artificial triboelectricity-brain-behavior closed loop for intelligent olfactory substitution” has been reported in Nano Energy.

A July 25, 2019 CAS press release, which originated the news item, provides a little more information,

Odor processing is important to many species. Specific olfactory receptors located on the neurons are involved in odor recognition. These different olfactory receptors form patterned distribution.

Inspired by these biological receptors, the teams collaborated on formulating an artificial olfactory system. Through nano-fabrication on soft materials and special alignment of material structures, the teams built a self-powered device that can code and differentiate different odorant molecules.

This device has been connected to the mouse brain to demonstrate that the olfactory signals can produce appropriate neural stimulation. When the self-powered device generated the electric currents, the mouse displayed behavioral motion changes.

This study, inspired by the biological olfactory system, provides insights on novel design of neural stimulation and brain-machine interface. 
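Sadly, the press release doesn’t explain how the ‘coding’ works. For anyone who wants a feel for the general idea, here’s a rough Python sketch of how a patterned sensor array’s outputs might be matched against stored odor templates. Every number, the channel count, and the nearest-template rule are my own assumptions for illustration; they are not the paper’s method,

```python
import numpy as np

# Hypothetical response templates: mean output (arbitrary units) of four
# sensor channels for three odorants. The values are invented; the actual
# device and its coding scheme are described in the paper.
TEMPLATES = {
    "ethanol": np.array([0.9, 0.2, 0.4, 0.1]),
    "acetone": np.array([0.3, 0.8, 0.1, 0.5]),
    "ammonia": np.array([0.1, 0.3, 0.9, 0.7]),
}

def identify_odor(response: np.ndarray) -> str:
    """Return the stored odor whose template is closest to the measured pattern."""
    return min(TEMPLATES, key=lambda name: np.linalg.norm(response - TEMPLATES[name]))

# A noisy reading that should match "acetone"
reading = np.array([0.28, 0.75, 0.15, 0.48])
print(identify_odor(reading))  # -> acetone
```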

Here’s a link to and a citation for the paper,

An artificial triboelectricity-brain-behavior closed loop for intelligent olfactory substitution by Tianyan Zhong, Mengyang Zhang, Yongming Fu, Yechao Han, Hongye Guan, Haoxuan He, Tianming Zhao, Lili Xing, Xinyu Xue, Yan Zhang, Yang Zhan. Nano Energy, Volume 63, September 2019, 103884. DOI: https://doi.org/10.1016/j.nanoen.2019.103884

This paper is behind a paywall.

Better motor control for prosthetic hands (the illusion of feeling) and a discussion of superprostheses and reality

I have two bits about prosthetics, one which focuses on how most of us think of them and another about science fiction fantasies.

Better motor control

This new technology comes via a collaboration between the University of Alberta, the University of New Brunswick (UNB) and Ohio’s Cleveland Clinic, from a March 18, 2018 article by Nicole Ireland for the Canadian Broadcasting Corporation’s (CBC) news online,

Rob Anderson was fighting wildfires in Alberta when the helicopter he was in crashed into the side of a mountain. He survived, but lost his left arm and left leg.

More than 10 years after that accident, Anderson, now 39, says prosthetic limb technology has come a long way, and he feels fortunate to be using “top of the line stuff” to help him function as normally as possible. In fact, he continues to work for the Alberta government’s wildfire fighting service.

His powered prosthetic hand can do basic functions like opening and closing, but he doesn’t feel connected to it — and has limited ability to perform more intricate movements with it, such as shaking hands or holding a glass.

Anderson, who lives in Grande Prairie, Alta., compares its function to “doing things with a long pair of pliers.”

“There’s a disconnect between what you’re physically touching and what your body is doing,” he told CBC News.

Anderson is one of four Canadian participants in a study that suggests there’s a way to change that. …

Six people, all of whom had arm amputations from below the elbow or higher, took part in the research. It found that strategically placed vibrating “robots” made them “feel” the movements of their prosthetic hands, allowing them to grasp and grip objects with much more control and accuracy.

All of the participants had previously undergone a specialized surgical procedure called “targeted re-innervation.” The nerves that had connected to their hands before they were amputated were rewired to link instead to muscles (including the biceps and triceps) in their remaining upper arms and in their chests.

For the study, researchers placed the robotic devices on the skin over those re-innervated muscles and vibrated them as the participants opened, closed, grasped or pinched with their prosthetic hands.

While the vibration was turned on, the participants “felt” their artificial hands moving and could adjust their grip based on the sensation. …
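The control logic implied by the study is simple: vibrate the skin over the re-innervated muscle while the prosthetic hand is moving, and stop when it stops. Here’s a minimal sketch of that coupling; the 90 Hz figure (kinesthetic illusions are typically induced by vibration somewhere in that neighbourhood) and the hardware stand-ins are my assumptions, not details from the paper,

```python
# Minimal sketch: couple prosthetic hand motion to a vibrotactile actuator
# placed over a re-innervated muscle. The hardware interface is a
# hypothetical stand-in, not a real device driver.

VIBRATION_HZ = 90.0  # assumed; movement illusions are typically evoked near this range

class Vibrotactor:
    def set_frequency(self, hz: float) -> None:
        print(f"vibrating at {hz:.0f} Hz" if hz > 0 else "vibration off")

def feedback_step(hand_is_moving: bool, tactor: Vibrotactor) -> None:
    """Vibrate only while the prosthetic hand is actually moving."""
    tactor.set_frequency(VIBRATION_HZ if hand_is_moving else 0.0)

tactor = Vibrotactor()
feedback_step(True, tactor)   # hand opening/closing -> vibration on
feedback_step(False, tactor)  # hand at rest -> vibration off
```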

I have an April 24, 2017 posting about a tetraplegic patient who had a number of electrodes implanted in his arms and hands, linked to a brain-machine interface, which allowed him to move his hands and arms; the implants were later removed. It is a different problem with a correspondingly different technological solution, but there does seem to be increased interest in implanting sensors and electrodes into the human body to increase mobility and/or sensation.

Anderson describes how it ‘feels’,

“It was kind of surreal,” Anderson said. “I could visually see the hand go out, I would touch something, I would squeeze it and my phantom hand felt like it was being closed and squeezing on something and it was sending the message back to my brain.

“It was a very strange sensation to actually be able to feel that feedback because I hadn’t in 10 years.”

The feeling of movement in the prosthetic hand is an illusion, the researchers say, since the vibration is actually happening to a muscle elsewhere in the body. But the sensation appeared to have a real effect on the participants.

“They were able to control their grasp function and how much they were opening the hand, to the same degree that someone with an intact hand would,” said study co-author Dr. Jacqueline Hebert, an associate professor in the Faculty of Rehabilitation Medicine at the University of Alberta.

Although the researchers are encouraged by the study findings, they acknowledge that there was a small number of participants, who all had access to the specialized re-innervation surgery to redirect the nerves from their amputated hands to other parts of their body.

The next step, they say, is to see if they can also simulate the feeling of movement in a broader range of people who have had other types of amputations, including legs, and have not had the re-innervation surgery.

Here’s a March 15, 2018 CBC New Brunswick radio interview about the work,

This is a bit longer than most of the embedded audio pieces that I have here but it’s worth it. Sadly, I can’t identify the interviewer who did a very good job with Jon Sensinger, associate director of UNB’s Institute of Biomedical Engineering. One more thing, I noticed that the interviewer made no mention of the University of Alberta in her introduction or in the subsequent interview. I gather regionalism reigns supreme everywhere in Canada. Or, maybe she and Sensinger just forgot. It happens when you’re excited. Also, there were US institutions in Ohio and Virginia that participated in this work.

Here’s a link to and a citation for the team’s paper,

Illusory movement perception improves motor control for prosthetic hands by Paul D. Marasco, Jacqueline S. Hebert, Jon W. Sensinger, Courtney E. Shell, Jonathon S. Schofield, Zachary C. Thumser, Raviraj Nataraj, Dylan T. Beckler, Michael R. Dawson, Dan H. Blustein, Satinder Gill, Brett D. Mensh, Rafael Granja-Vazquez, Madeline D. Newcomb, Jason P. Carey, and Beth M. Orzell. Science Translational Medicine 14 Mar 2018: Vol. 10, Issue 432, eaao6990 DOI: 10.1126/scitranslmed.aao6990

This paper is open access.

Superprostheses and our science fiction future

A March 20, 2018 news item on phys.org features an essay about superprostheses and/or assistive devices,

Assistive devices may soon allow people to perform virtually superhuman feats. According to Robert Riener, however, there are more pressing goals than developing superhumans.

What had until recently been described as a futuristic vision has become a reality: the first self-declared “cyborgs” have had chips implanted in their bodies so that they can open doors and make cashless payments. The latest robotic hand prostheses succeed in performing all kinds of grips and tasks requiring dexterity. Parathletes fitted with running and spring prostheses compete – and win – against the best, non-impaired athletes. Then there are robotic pets and talking humanoid robots adding a bit of excitement to nursing homes.

Some media are even predicting that these high-tech creations will bring about forms of physiological augmentation overshadowing humans’ physical capabilities in ways never seen before. For instance, hearing aids are eventually expected to offer the ultimate in hearing; retinal implants will enable vision with a sharpness rivalling that of any eagle; motorised exoskeletons will transform soldiers into tireless fighting machines.

Visions of the future: the video game Deus Ex: Human Revolution highlights the emergence of physiological augmentation. (Visualisations: Square Enix) Courtesy: ETH Zurich

Professor Robert Riener uses the image above to illustrate the notion of superprostheses in his March 20, 2018 essay on the ETH Zurich website,

All of these prophecies notwithstanding, our robotic transformation into superheroes will not be happening in the immediate future and can still be filed under Hollywood hero myths. Compared to the technology available today, our bodies are a true marvel whose complexity and performance allow us to perform an extremely wide spectrum of tasks. Hundreds of efficient muscles, thousands of independently operating motor units along with millions of sensory receptors and billions of nerve cells allow us to perform delicate and detailed tasks with tweezers or lift heavy loads. Added to this, our musculoskeletal system is highly adaptable, can partly repair itself and requires only minimal amounts of energy in the form of relatively small amounts of food consumed.

Machines will not be able to match this any time soon. Today’s assistive devices are still laboratory experiments or niche products designed for very specific tasks. Markus Rehm, an athlete with a disability, does not use his innovative spring prosthesis to go for walks or drive a car. Nor can today’s conventional arm prostheses help a person tie their shoes or button up their shirt. Lifting devices used for nursing care are not suitable for helping with personal hygiene tasks or in psychotherapy. And robotic pets quickly lose their charm the moment their batteries die.

Solving real problems

There is no denying that advances continue to be made. Since the scientific and industrial revolutions, we have become dependent on relentless progress and growth, and we can no longer separate today’s world from this development. There are, however, more pressing issues to be solved than creating superhumans.

On the one hand, engineers need to dedicate their efforts to solving the real problems of patients, the elderly and people with disabilities. Better technical solutions are needed to help them lead normal lives and assist them in their work. We need motorised prostheses that also work in the rain and wheelchairs that can manoeuvre even with snow on the ground. Talking robotic nurses also need to be understood by hard-of-hearing pensioners as well as offer simple and dependable interactivity. Their batteries need to last at least one full day to be recharged overnight.

In addition, financial resources need to be available so that all people have access to the latest technologies, such as a high-quality household prosthesis for the family man, an extra prosthesis for the avid athlete or a prosthesis for the pensioner. [emphasis mine]

Breaking down barriers

What is just as important as the ongoing development of prostheses and assistive devices is the ability to minimise or eliminate physical barriers. Where there are no stairs, there is no need for elaborate special solutions like stair lifts or stairclimbing wheelchairs – or, presumably, fully motorised exoskeletons.

Efforts also need to be made to transform the way society thinks about people with disabilities. More acknowledgement of the day-to-day challenges facing patients with disabilities is needed, which requires that people be confronted with the topic of disability when they are still children. Such projects must be promoted at home and in schools so that living with impairments can also attain a state of normality and all people can partake in society. It is therefore also necessary to break down mental barriers.

The road to a virtually superhuman existence is still far and long. Anyone reading this text will not live to see it. In the meantime, the task at hand is to tackle the mundane challenges in order to simplify people’s daily lives in ways that do not require technology, that allow people to be active participants and improve their quality of life – instead of wasting our time getting caught up in cyborg euphoria and digital mania.

I’m struck by Riener’s reference to financial resources and access. Sensinger mentions financial resources in his CBC radio interview although his concern is with convincing funders that prostheses that mimic ‘feeling’ are needed.

I’m also struck by Riener’s discussion about nontechnological solutions for including people with all kinds of abilities and disabilities.

There was no grand plan for combining these two news bits; I just thought they were interesting together.

Bidirectional prosthetic-brain communication with light?

The possibility of not only being able to make a prosthetic that allows a tetraplegic to grab a coffee but to feel that coffee cup with their ‘hand’ is one step closer to reality according to a Feb. 22, 2017 news item on ScienceDaily,

Since the early seventies, scientists have been developing brain-machine interfaces; the main application being the use of neural prostheses in paralyzed patients or amputees. A prosthetic limb directly controlled by brain activity can partially recover the lost motor function. This is achieved by decoding neuronal activity recorded with electrodes and translating it into robotic movements. Such systems however have limited precision due to the absence of sensory feedback from the artificial limb. Neuroscientists at the University of Geneva (UNIGE), Switzerland, asked whether it was possible to transmit this missing sensation back to the brain by stimulating neural activity in the cortex. They discovered that not only was it possible to create an artificial sensation of neuroprosthetic movements, but that the underlying learning process occurs very rapidly. These findings, published in the scientific journal Neuron, were obtained by resorting to modern imaging and optical stimulation tools, offering an innovative alternative to the classical electrode approach.

A Feb. 22, 2017 Université de Genève press release on EurekAlert, which originated the news item, provides more detail,

Motor function is at the heart of all behavior and allows us to interact with the world. Therefore, replacing a lost limb with a robotic prosthesis is the subject of much research, yet successful outcomes are rare. Why is that? Until this moment, brain-machine interfaces are operated by relying largely on visual perception: the robotic arm is controlled by looking at it. The direct flow of information between the brain and the machine remains thus unidirectional. However, movement perception is not only based on vision but mostly on proprioception, the sensation of where the limb is located in space. “We have therefore asked whether it was possible to establish a bidirectional communication in a brain-machine interface: to simultaneously read out neural activity, translate it into prosthetic movement and reinject sensory feedback of this movement back in the brain”, explains Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE.

Providing artificial sensations of prosthetic movements

In contrast to invasive approaches using electrodes, Daniel Huber’s team specializes in optical techniques for imaging and stimulating brain activity. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single cell resolution. “We wanted to test whether mice could learn to control a neural prosthesis by relying uniquely on an artificial sensory feedback signal”, explains Mario Prsa, researcher at UNIGE and the first author of the study. “We imaged neural activity in the motor cortex. When the mouse activated a specific neuron, the one chosen for neuroprosthetic control, we simultaneously applied stimulation proportional to this activity to the sensory cortex using blue light”. Indeed, neurons of the sensory cortex were rendered photosensitive to this light, allowing them to be activated by a series of optical flashes and thus integrate the artificial sensory feedback signal. The mouse was rewarded upon every above-threshold activation, and 20 minutes later, once the association learned, the rodent was able to more frequently generate the correct neuronal activity.

This means that the artificial sensation was not only perceived, but that it was successfully integrated as a feedback of the prosthetic movement. In this manner, the brain-machine interface functions bidirectionally. The Geneva researchers think that the reason why this fabricated sensation is so rapidly assimilated is because it most likely taps into very basic brain functions. Feeling the position of our limbs occurs automatically, without much thought, and probably reflects fundamental neural circuit mechanisms. This type of bidirectional interface might in the future allow more precise control of robotic arms, the feeling of touched objects, or perception of the force needed to grasp them.

At present, the neuroscientists at UNIGE are examining how to produce a more efficient sensory feedback. They are currently capable of doing it for a single movement, but is it also possible to provide multiple feedback channels in parallel? This research sets the groundwork for developing a new generation of more precise, bidirectional neural prostheses.

Towards better understanding the neural mechanisms of neuroprosthetic control

By resorting to modern imaging tools, hundreds of neurons in the surrounding area could also be observed as the mouse learned the neuroprosthetic task. “We know that millions of neural connections exist. However, we discovered that the animal activated only the one neuron chosen for controlling the prosthetic action, and did not recruit any of the neighbouring neurons”, adds Daniel Huber. “This is a very interesting finding since it reveals that the brain can home in on and specifically control the activity of just one single neuron”. Researchers can potentially exploit this knowledge to not only develop more stable and precise decoding techniques, but also gain a better understanding of most basic neural circuit functions. It remains to be discovered what mechanisms are involved in routing signals to the uniquely activated neuron.

Caption: A novel optical brain-machine interface allows bidirectional communication with the brain. While a robotic arm is controlled by neuronal activity recorded with optical imaging (red laser), the position of the arm is fed back to the brain via optical microstimulation (blue laser). Credit: © Daniel Huber, UNIGE
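For anyone who’d like to see the shape of the closed loop being described, here’s a toy sketch: read out the activity of the one neuron chosen for control, deliver optical feedback in proportion to that activity, and reward the animal whenever activity crosses a threshold. The gain, the threshold, and the simulated activity values are all invented for illustration; the study’s actual two-photon imaging and optogenetic pipeline is far more involved,

```python
import numpy as np

rng = np.random.default_rng(0)

GAIN = 5.0       # assumed: feedback pulses per unit of neural activity
THRESHOLD = 2.0  # assumed: activity level that earns a reward

def feedback_pulses(activity: float) -> int:
    """Optical stimulation proportional to the controlled neuron's activity."""
    return int(GAIN * max(activity, 0.0))

def trial(activity: float) -> bool:
    """One pass through the loop: stimulate, then reward above-threshold activity."""
    pulses = feedback_pulses(activity)
    rewarded = activity >= THRESHOLD
    print(f"activity={activity:.2f} -> {pulses} light pulses, reward={rewarded}")
    return rewarded

# Simulated activity of the single neuron chosen for neuroprosthetic control
for a in rng.gamma(shape=2.0, scale=1.0, size=5):
    trial(a)
```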

Here’s a link to and a citation for the paper,

Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons by Mario Prsa, Gregorio L. Galiñanes, Daniel Huber. Neuron Volume 93, Issue 4, p929–939.e6, 22 February 2017 DOI: http://dx.doi.org/10.1016/j.neuron.2017.01.023 Open access funded by European Research Council

This paper is open access.

Changing synaptic connectivity with a memristor

The French have announced some research into memristive devices that mimic both short-term and long-term neural plasticity according to a Dec. 6, 2016 news item on Nanowerk,

Leti researchers have demonstrated that memristive devices are excellent candidates to emulate synaptic plasticity, the capability of synapses to enhance or diminish their connectivity between neurons, which is widely believed to be the cellular basis for learning and memory.

The breakthrough was presented today [Dec. 6, 2016] at IEDM [International Electron Devices Meeting] 2016 in San Francisco in the paper, “Experimental Demonstration of Short and Long Term Synaptic Plasticity Using OxRAM Multi k-bit Arrays for Reliable Detection in Highly Noisy Input Data”.

Neural systems such as the human brain exhibit various types and time periods of plasticity, e.g. synaptic modifications can last anywhere from seconds to days or months. However, prior research in utilizing synaptic plasticity using memristive devices relied primarily on simplified rules for plasticity and learning.

The project team, which includes researchers from Leti’s sister institute at CEA Tech, List, along with INSERM and Clinatec, proposed an architecture that implements both short- and long-term plasticity (STP and LTP) using RRAM devices.

A Dec. 6, 2016 Laboratoire d’électronique des technologies de l’information (LETI) press release, which originated the news item, elaborates,

“While implementing a learning rule for permanent modifications – LTP, based on spike-timing-dependent plasticity – we also incorporated the possibility of short-term modifications with STP, based on the Tsodyks/Markram model,” said Elisa Vianello, Leti non-volatile memories and cognitive computing specialist/research engineer. “We showed the benefits of utilizing both kinds of plasticity with visual pattern extraction and decoding of neural signals. LTP allows our artificial neural networks to learn patterns, and STP makes the learning process very robust against environmental noise.”

Resistive random-access memory (RRAM) devices coupled with a spike-coding scheme are key to implementing unsupervised learning with minimal hardware footprint and low power consumption. Embedding neuromorphic learning into low-power devices could enable design of autonomous systems, such as a brain-machine interface that makes decisions based on real-time, on-line processing of in-vivo recorded biological signals. Biological data are intrinsically highly noisy and the proposed combined LTP and STP learning rule is a powerful technique to improve the detection/recognition rate. This approach may enable the design of autonomous implantable devices for rehabilitation purposes.

Leti, which has worked on RRAM to develop hardware neuromorphic architectures since 2010, is the coordinator of the H2020 [Horizon 2020] European project NeuRAM3. That project is working on fabricating a chip with architecture that supports state-of-the-art machine-learning algorithms and spike-based learning mechanisms.
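For readers curious about the short-term plasticity rule Vianello mentions, the Tsodyks/Markram model tracks two quantities per synapse: a pool of available resources x, depleted by each spike and recovering with one time constant, and a utilization variable u, boosted by each spike and relaxing with another. Here’s a minimal sketch of one common formulation of the model; the parameter values are mine, and the paper’s mapping of such rules onto OxRAM arrays is not reproduced here,

```python
import numpy as np

# Tsodyks-Markram short-term plasticity, discretized at spike times
# (one common formulation). Parameters are illustrative only.
U, TAU_F, TAU_D = 0.2, 0.6, 0.8  # baseline utilization; facilitation and depression time constants (s)

def tm_efficacies(spike_times):
    """Yield the relative efficacy (u * x) of each spike in a train."""
    u, x, last_t = U, 1.0, None
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / TAU_F)      # utilization relaxes toward U
            x = 1.0 - (1.0 - x) * np.exp(-dt / TAU_D)  # resources recover toward 1
        u += U * (1.0 - u)  # each spike boosts utilization (facilitation)
        yield u * x         # efficacy of this spike
        x *= 1.0 - u        # each spike consumes resources (depression)
        last_t = t

# A fast burst then a late spike: efficacy dips during the burst (depression)
# and partially recovers by the final spike.
for i, w in enumerate(tm_efficacies([0.0, 0.05, 0.10, 0.15, 1.0])):
    print(f"spike {i}: efficacy {w:.3f}")
```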

That’s it folks.

Singapore’s* new chip could make low-powered wireless neural implants a possibility and Australians develop their own neural implant

Singapore

This research from Singapore could make neuroprosthetics and exoskeletons a little easier to manage as long as you don’t mind having a neural implant. From a Feb. 11, 2016 news item on ScienceDaily,

A versatile chip offers multiple applications in various electronic devices, report researchers, suggesting that a low-powered, wireless neural implant may soon be a reality. Neural implants when embedded in the brain can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.

Caption: NTU Asst Prof Arindam Basu is holding his low-powered smart chip. Credit: NTU Singapore

A Feb. 11, 2016 Nanyang Technological University (NTU) press release (also on EurekAlert), which originated the news item, provides more detail,

Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a small smart chip that can be paired with neural implants for efficient wireless transmission of brain signals.

Neural implants when embedded in the brain can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.

However, they need to be connected by wires to an external device outside the body. For a prosthetic patient, the neural implant is connected to a computer that decodes the brain signals so the artificial limb can move.

These external wires are not only cumbersome but the permanent openings which allow the wires into the brain increases the risk of infections.

The new chip by NTU scientists can allow the transmission of brain data wirelessly and with high accuracy.

Assistant Professor Arindam Basu from NTU’s School of Electrical and Electronic Engineering said the research team have tested the chip on data recorded from animal models, which showed that it could decode the brain’s signal to the hand and fingers with 95 per cent accuracy.

“What we have developed is a very versatile smart chip that can process data, analyse patterns and spot the difference,” explained Prof Basu.

“It is about a hundred times more efficient than current processing chips on the market. It will lead to more compact medical wearable devices, such as portable ECG monitoring devices and neural implants, since we no longer need large batteries to power them.”

Different from other wireless implants

To achieve high accuracy in decoding brain signals, implants require thousands of channels of raw data. To wirelessly transmit this large amount of data, more power is also needed which means either bigger batteries or more frequent recharging.

This is not feasible as there is limited space in the brain for implants while frequent recharging means the implants cannot be used for long-term recording of signals.

Current wireless implant prototypes thus suffer from a lack of accuracy as they lack the bandwidth to send out thousands of channels of raw data.

Instead of enlarging the power source to support the transmission of raw data, Asst Prof Basu tried to reduce the amount of data that needs to be transmitted.

Designed to be extremely power-efficient, NTU’s patented smart chip will analyse and decode the thousands of signals from the neural implants in the brain, before compressing the results and sending it wirelessly to a small external receiver.

This invention and its findings were published last month [December 2015] in the prestigious journal, IEEE Transactions on Biomedical Circuits & Systems, by the Institute of Electrical and Electronics Engineers, the world’s largest professional association for the advancement of technology.

Its underlying science was also featured in three international engineering conferences (two in Atlanta, USA and one in China) over the last three months.

Versatile smart chip with multiple uses

This new smart chip is designed to analyse data patterns and spot any abnormal or unusual patterns.

For example, in a remote video camera, the chip can be programmed to send a video back to the servers only when a specific type of car or something out of the ordinary is detected, such as an intruder.

This would be extremely beneficial for the Internet of Things (IOT), where every electrical and electronic device is connected to the Internet through a smart chip.

With a report by marketing research firm Gartner Inc predicting that 6.4 billion smart devices and appliances will be connected to the Internet by 2016, and will rise to 20.8 billion devices by 2020, reducing network traffic will be a priority for most companies.

Using NTU’s new chip, the devices can process and analyse the data on site, before sending back important details in a compressed package, instead of sending the whole data stream. This will reduce data usage by over a thousand times.

Asst Prof Basu is now in talks with Singapore Technologies Electronics Limited to adapt his smart chip that can significantly reduce power consumption and the amount of data transmitted by battery-operated remote sensors, such as video cameras.

The team is also looking to expand the applications of the chip into commercial products, such as to customise it for smart home sensor networks, in collaboration with a local electronics company.

The chip, measuring 5 mm by 5 mm, can now be licensed by companies from NTU’s commercialisation arm, NTUitive.

Here’s a link to and a citation for the paper,

A 128-Channel Extreme Learning Machine-Based Neural Decoder for Brain Machine Interfaces by Yi Chen, Enyi Yao, Arindam Basu. IEEE Transactions on Biomedical Circuits and Systems, 2015. DOI: 10.1109/TBCAS.2015.2483618

This paper is behind a paywall.
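Paywall or not, the title tells us the decoder is an extreme learning machine (ELM), which helps explain the power savings: an ELM’s hidden layer uses fixed random weights, so only a small linear readout ever needs training, and only the decoded result needs to leave the chip. Here’s a toy software sketch of the idea; apart from the 128-channel count, the dimensions and data are invented, and the NTU chip’s actual hardware implementation is of course very different,

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: decode one of 3 movement classes from 128 channels of
# neural features. Only the channel count mirrors the paper.
n_channels, n_hidden, n_classes, n_samples = 128, 256, 3, 600
X = rng.normal(size=(n_samples, n_channels))    # made-up training features
y = rng.integers(0, n_classes, size=n_samples)  # made-up training labels
Y = np.eye(n_classes)[y]                        # one-hot targets

# ELM: fixed random input layer; train only the linear readout.
W_in = rng.normal(size=(n_channels, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)      # random hidden features (never trained)
W_out = np.linalg.pinv(H) @ Y  # least-squares readout: the only training step

# Decoding a new sample: a chip like this would transmit just the class
# label rather than the raw 128-channel stream.
x_new = rng.normal(size=(1, n_channels))
pred = np.argmax(np.tanh(x_new @ W_in + b) @ W_out)
print("decoded class:", pred)
```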

Australia

Earlier this month there was a Feb. 9, 2016 announcement about a planned human clinical trial in Australia for a new brain-machine interface (neural implant). Before proceeding with the news, here’s what this implant looks like,

Caption: This tiny device, the size of a small paperclip, is implanted into a blood vessel next to the brain and can read electrical signals from the motor cortex, the brain’s control centre. These signals can then be transmitted to an exoskeleton or wheelchair to give paraplegic patients greater mobility. Users will need to learn how to communicate with their machinery, but over time, it is thought it will become second nature, like driving or playing the piano. The first human trials are slated for 2017 in Melbourne, Australia. Credit: The University of Melbourne.

A Feb. 9, 2016 University of Melbourne press release (also on EurekAlert), which originated the news item, provides more detail,

Melbourne medical researchers have created a new minimally invasive brain-machine interface, giving people with spinal cord injuries new hope to walk again with the power of thought.

The brain machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.

The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017.

The results published today in Nature Biotechnology show the device is capable of recording high-quality signals emitted from the brain’s motor cortex, without the need for open brain surgery.

Principal author and Neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne, Dr Thomas Oxley, said the stentrode was revolutionary.

“The development of the stentrode has brought together leaders in medical research from The Royal Melbourne Hospital, The University of Melbourne and the Florey Institute of Neuroscience and Mental Health. In total 39 academic scientists from 16 departments were involved in its development,” Dr Oxley said.

“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery.

“Our vision, through this device, is to return function and mobility to patients with complete paralysis by recording brain activity and converting the acquired signals into electrical commands, which in turn would lead to movement of the limbs through a mobility assist device like an exoskeleton. In essence this is a bionic spinal cord.”

Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, with the typical patient a 19-year-old male, and about 150,000 Australians left severely disabled after stroke.

Co-principal investigator and biomedical engineer at the University of Melbourne, Dr Nicholas Opie, said the concept was similar to an implantable cardiac pacemaker – electrical interaction with tissue using sensors inserted into a vein, but inside the brain.

“Utilising stent technology, our electrode array self-expands to stick to the inside wall of a vein, enabling us to record local brain activity. By extracting the recorded neural signals, we can use these as commands to control wheelchairs, exoskeletons, prosthetic limbs or computers,” Dr Opie said.

“In our first-in-human trial, that we anticipate will begin within two years, we are hoping to achieve direct brain control of an exoskeleton for three people with paralysis.”

“Currently, exoskeletons are controlled by manual manipulation of a joystick to switch between the various elements of walking – stand, start, stop, turn. The stentrode will be the first device that enables direct thought control of these devices.”

Neurophysiologist at The Florey, Professor Clive May, said the data from the pre-clinical study highlighted that the implantation of the device was safe for long-term use.

“Through our pre-clinical study we were able to successfully record brain activity over many months. The quality of recording improved as the device was incorporated into tissue,” Professor May said.

“Our study also showed that it was safe and effective to implant the device via angiography, which is minimally invasive compared with the high risks associated with open brain surgery.

“The brain-computer interface is a revolutionary device that holds the potential to overcome paralysis, by returning mobility and independence to patients affected by various conditions.”

Professor Terry O’Brien, Head of Medicine at Departments of Medicine and Neurology, The Royal Melbourne Hospital and University of Melbourne said the development of the stentrode has been the “holy grail” for research in bionics.

“To be able to create a device that can record brainwave activity over long periods of time, without damaging the brain is an amazing development in modern medicine,” Professor O’Brien said.

“It can also be potentially used in people with a range of diseases aside from spinal cord injury, including epilepsy, Parkinson’s and other neurological disorders.”

The development of the minimally invasive stentrode and the subsequent pre-clinical trials to prove its effectiveness could not have been possible without the support from the major funding partners – US Defense Department DARPA [Defense Advanced Research Projects Agency] and Australia’s National Health and Medical Research Council.

So, DARPA is helping fund this, eh? Interesting but not a surprise given the agency’s previous investments in brain research and neuroprosthetics.

For those who like to get their news via video,

Here’s a link to and a citation for the paper,

Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity by Thomas J Oxley, Nicholas L Opie, Sam E John, Gil S Rind, Stephen M Ronayne, Tracey L Wheeler, Jack W Judy, Alan J McDonald, Anthony Dornom, Timothy J H Lovell, Christopher Steward, David J Garrett, Bradford A Moffat, Elaine H Lui, Nawaf Yassi, Bruce C V Campbell, Yan T Wong, Kate E Fox, Ewan S Nurse, Iwan E Bennett, Sébastien H Bauquier, Kishan A Liyanage, Nicole R van der Nagel, Piero Perucca, Arman Ahnood et al. Nature Biotechnology (2016). doi:10.1038/nbt.3428. Published online 08 February 2016

This paper is behind a paywall.

I wish the researchers in Singapore, Australia, and elsewhere, good luck!

*’Sinagpore’ in head changed to ‘Singapore’ on May 14, 2019.

Chemistry of Cyborgs: review of the state of the art by German researchers

Communication between man and machine – a fascinating area at the interface of chemistry, biomedicine, and engineering. (Figure: KIT/S. Giselbrecht, R. Meyer, B. Rapp)

German researchers from the Karlsruhe Institute of Technology (KIT), Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1) and Dr. Bastian E. Rapp of the Institute of Microstructure Technology (IMT), have written a good overview of the current state of cyborgs while pointing out some of the ethical issues associated with this field. From the Jan. 10, 2014 news item on ScienceDaily,

Medical implants, complex interfaces between brain and machine or remotely controlled insects: Recent developments combining machines and organisms have great potentials, but also give rise to major ethical concerns. In a new review, KIT scientists discuss the state of the art of research, opportunities, and risks.

The Jan. ?, 2014 KIT press release (also on EurekAlert with a release date of Jan. 10, 2014), which originated the news item, describes the innovations and the work at KIT in more detail,

They are known from science fiction novels and films – technically modified organisms with extraordinary skills, so-called cyborgs. This name originates from the English term “cybernetic organism”. In fact, cyborgs that combine technical systems with living organisms are already reality. The KIT researchers Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1) and Dr. Bastian E. Rapp, Institute of Microstructure Technology (IMT), point out that this especially applies to medical implants.

In recent years, medical implants based on smart materials that automatically react to changing conditions, computer-supported design and fabrication based on magnetic resonance tomography datasets or surface modifications for improved tissue integration allowed major progress to be achieved. For successful tissue integration and the prevention of inflammation reactions, special surface coatings were developed also by the KIT under e.g. the multidisciplinary Helmholtz program “BioInterfaces”.

Progress in microelectronics and semiconductor technology has been the basis of electronic implants controlling, restoring or improving the functions of the human body, such as cardiac pacemakers, retina implants, hearing implants, or implants for deep brain stimulation in pain or Parkinson therapies. Currently, bioelectronic developments are being combined with robotics systems to design highly complex neuroprostheses. Scientists are working on brain-machine interfaces (BMI) for the direct physical contacting of the brain. BMI are used among others to control prostheses and complex movements, such as gripping. Moreover, they are important tools in neurosciences, as they provide insight into the functioning of the brain. Apart from electric signals, substances released by implanted micro- and nanofluidic systems in a spatially or temporarily controlled manner can be used for communication between technical devices and organisms.

BMI are often considered data suppliers. However, they can also be used to feed signals into the brain, which is a highly controversial issue from the ethical point of view. “Implanted BMI that feed signals into nerves, muscles or directly into the brain are already used on a routine basis, e.g. in cardiac pacemakers or implants for deep brain stimulation,” Professor Christof M. Niemeyer, KIT, explains. “But these signals are neither planned to be used nor suited to control the entire organism – brains of most living organisms are far too complex.”

Brains of lower organisms, such as insects, are less complex. As soon as a signal is coupled in, a certain movement program, such as running or flying, is started. So-called biobots, i.e. large insects with implanted electronic and microfluidic control units, are used in a new generation of tools, such as small flying objects for monitoring and rescue missions. In addition, they are applied as model systems in neurosciences in order to understand basic relationships.

Electrically active medical implants that are used for longer terms depend on reliable power supply. Presently, scientists are working on methods to use the patient body’s own thermal, kinetic, electric or chemical energy.

In their review the KIT researchers sum up that developments combining technical devices with organisms have a fascinating potential. They may considerably improve the quality of life of many people in the medical sector in particular. However, ethical and social aspects always have to be taken into account.

After briefly reading the paper, I can say the researchers are most interested in the science and technology aspects but they do have this to say about ethical and social issues in the paper’s conclusion (Note: Links have been removed),

The research and development activities summarized here clearly raise significant social and ethical concerns, in particular, when it comes to the use of BMIs for signal injection into humans, which may lead to modulation or even control of behavior. The ethical issues of this new technology have been discussed in the excellent commentary of Jens Clausen,33 which we highly recommend for further reading. The recently described engineering of a synthetic polymer construct, which is capable of propulsion in water through a collection of adhered rat cardiomyocytes,77 a “medusoid” also described as a “cyborg jellyfish with a rat heart”, brings up an additional ethical aspect. The motivation of the work was to reverse-engineer muscular pumps, and it thus represents fundamental research in tissue engineering for biomedical applications. However, it is also an impressive, early demonstration that autonomous control of technical devices can be achieved through small populations of cells or microtissues. It seems reasonable that future developments along this line will strive, for example, to control complex robots through the use of brain tissue. Given the fact that the robots of today are already capable of autonomously performing complex missions, even in unknown territories,78 this approach might indeed pave the way for yet another entirely new generation of cybernetic organisms.

Here’s a link to and a citation for the English language version of the paper, which is open access (as of Jan. 10, 2014),

The Chemistry of Cyborgs—Interfacing Technical Devices with Organisms by Dr. Stefan Giselbrecht, Dr. Bastian E. Rapp, & Prof. Dr. Christof M. Niemeyer. Angewandte Chemie International Edition, Volume 52, Issue 52, pages 13942–13957, December 23, 2013. Article first published online: 29 NOV 2013. DOI: 10.1002/anie.201307495

Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

For those with German language skills,

Chemie der Cyborgs – zur Verknüpfung technischer Systeme mit Lebewesen by Stefan Giselbrecht, Bastian E. Rapp, and Christof M. Niemeyer. Angewandte Chemie, Volume 125, Issue 52, page 14190, December 23, 2013. DOI: 10.1002/ange.201307495

I have written many times about cyborgs and neuroprosthetics including this Aug. 30, 2011 posting titled:  Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism, where I mention Gregor Wolbring, a Canadian academic (University of Calgary) who has written extensively on the social and ethical issues of human enhancement technologies. You can find out more on his blog, Nano and Nano- Bio, Info, Cogno, Neuro, Synbio, Geo, Chem…

For anyone wanting to search this blog for these pieces, try using the term machine/flesh as a tag, as well as human enhancement, neuroprostheses, cyborgs …

A new science magazine edited and peer-reviewed by children: Frontiers for Young Minds

A November 15, 2013 article by Alice Truong for Fast Company profiles Frontiers for Young Minds, a new journal meant to be read by children and edited and peer-reviewed by children. Let’s start with an excerpt from the Truong article as an introduction to the journal (Note: Links have been removed),

Frontiers for Young Minds is made up of editors ages 8 to 18 who learn the ropes of peer review from working scientists. With 18 young minds and 38 adult authors and associate editors lending their expertise, the journal–an offshoot of the academic research network Frontiers …

With a mission to engage a budding generation of scientists, UC [University of California at] Berkeley professor Robert Knight created the kid-friendly version of Frontiers and serves as its editor-in-chief. The young editors review and approve submissions, which are written so kids can understand them–“clearly, concisely and with enthusiasm!” the guidelines suggest. Many of the scientists who provide guidance are academics, hailing from Harvard to Rio de Janeiro’s D’Or Institute for Research and Education. The pieces are peer reviewed by one of the young editors, but to protect their identities only their first names are published along with the authors’ names.

Great idea and bravo to all involved in the project! Here’s an excerpt from the Frontiers for Young Minds About webpage,

Areas in Development now include:

  • The Brain and Friends (social neuroscience)
  • The Brain and Fun (emotion)
  • The Brain and Magic (perception, sensation)
  • The Brain and Allowances (neuroeconomics)
  • The Brain and School (attention, decision making)
  • The Brain and Sports (motor control, action)
  • The Brain and Life (memory)
  • The Brain and Talking/Texting (language)
  • The Brain and Growing (neurodevelopment)
  • The Brain and Math (neural organization of math, computational neuroscience)
  • The Brain and Health (neurology, psychiatry)
  • The Brain and Robots (brain machine interface)
  • The Brain and Music (music!)
  • The Brain and Light (optogenetics)
  • The Brain and Gaming (Fun, Action, Learning)
  • The Brain and Reading
  • The Brain and Pain
  • The Brain and Tools (basis of brain measurements)
  • The Brain and History (the story of brain research)
  • The Brain and Drugs (drugs)
  • The Brain and Sleep

I believe the unofficial title for this online journal is Frontiers (in Neuroscience) for Young Minds. I guess they were trying to make the title less cumbersome, which, unfortunately, results in a bit of confusion.

At any rate, there’s quite a range of young minds at work as editors and reviewers, from the Editorial Team’s webpage,

Sacha
14 years old
Amsterdam, Netherlands

When I was just a few weeks old, we moved to Bennekom, a small town close to Arnhem (“a bridge too far”). I am now 14 and follow the bilingual stream in secondary school, receiving lessons in English and Dutch. I hope to do the International Baccalaureate before I leave school. In my spare time, I like to play football and hang out with my mates. Doing this editing interested me for three reasons: I really wanted to understand more about my dad’s work; I like the idea of this journal that helps us understand what our parents do; and I also like the idea of being an editor!

Abby
11 years old
Israel

I currently live in Israel, but I lived in NYC and I loved it. I like wall climbing, dancing, watching TV, scuba diving, and I love learning new things about how our world works. Oh, I also love the Weird-but-True books. You should try reading them too.

Caleb
14 years old
Canada

I enjoy reading and thinking about life. I have a flair for the dramatic. Woe betide the contributor who falls under my editorial pen. I am in several theatrical productions and I like to go camping in the Canadian wilds. My comment on brains: I wish I had one.

Darius
10 years old
Lafayette, CA, USA

I am in fifth grade. In my free time I enjoy reading and computer programming. As a hobby, I make useful objects and experiment with devices. I am very interested in the environment and was one of the founders of my school’s green committee. I enjoy reading about science, particularly chemistry, biology, and neuroscience.

Marin
8 years old
Cambridge, MA, USA

3rd grader who plays the piano and loves to sing and dance. She participates in Science Club for Girls and she and her Mom will be performing in their second opera this year.

Eleanor
8 years old
Champaign, IL, USA

I like reading and drawing. My favorite colors are blue, silver, pink, and purple. My favorite food is creamed spinach. I like to go shopping with my Mom.

….

At age 8, I would have been less Marin and more Eleanor. I hated opera; my father made us listen every Sunday afternoon during the winters.

For the final excerpt from the website, here’s something from an article about brain-machine interfaces (from the articles webpage),

[downloaded from http://kids.frontiersin.org/articles/brain-machine_interfaces/7/]

Brain-Machine Interfaces (BMI), or brain-computer interfaces (BCI), is an exciting multidisciplinary field that has grown tremendously during the last decade. In a nutshell, BMI is about transforming thought into action and sensation into perception. In a BMI system, neural signals recorded from the brain are fed into a decoding algorithm that translates these signals into motor output. This includes controlling a computer cursor, steering a wheelchair, or driving a robotic arm. A closed control loop is typically established by providing the subject with visual feedback of the prosthetic device. BMIs have tremendous potential to greatly improve the quality of life of millions of people suffering from spinal cord injury, stroke, amyotrophic lateral sclerosis, and other severely disabling conditions.

I think this piece, written by Jose M. Carmena and José del R. Millán and reviewed by Bhargavi, 13 years old, is a good beginner’s piece for any adults who might be interested, as well as the journal’s target audience. The illustration the scientists have provided is very helpful to anyone who, for whatever reason, isn’t that knowledgeable about this area of research,

Figure 1 – Your brain in action: the different components of a BMI include the recording system, the decoding algorithm, the device to be controlled, and the feedback delivered to the user (modified from Heliot and Carmena, 2010).
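Figure 1’s loop (record, decode, drive the device, feed back) can be written down in a few lines. Here’s a deliberately simplified sketch with a linear decoder steering a 2-D cursor toward a target; the decoder weights and the firing-rate model are placeholders of my own, not anything from the article,

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons = 16
W = rng.normal(scale=0.1, size=(2, n_neurons))  # placeholder decoder: firing rates -> 2-D velocity

def record_rates(intended_velocity: np.ndarray) -> np.ndarray:
    """Stand-in for the recording system: noisy rates that encode the user's intent."""
    return np.linalg.pinv(W) @ intended_velocity + rng.normal(scale=0.05, size=n_neurons)

cursor = np.zeros(2)
target = np.array([1.0, 0.5])
for step in range(20):
    intent = target - cursor            # feedback: the user sees the cursor and aims at the target
    rates = record_rates(0.3 * intent)  # recording system picks up intent-related activity
    cursor += W @ rates                 # decoding algorithm drives the device
print("final cursor position:", np.round(cursor, 2))
```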

As for getting information about basic details, here’s some of what I unearthed. The parent organization, ‘Frontiers in’ is based in Switzerland and describes itself this way on its About page,

Frontiers is a community-oriented open-access academic publisher and research network.

Our grand vision is to build an Open Science platform that empowers researchers in their daily work and where everybody has equal opportunity to seek, share and generate knowledge.

Frontiers is at the forefront of building the ultimate Open Science platform. We are driving innovations and new technologies around peer-review, article and author impact metrics, social networking for researchers, and a whole ecosystem of open science tools. We are the first – and only – platform that combines open-access publishing with research networking, with the goal to increase the reach of publications and ultimately the impact of articles and their authors.

Frontiers was launched as a grassroots initiative in 2007 by scientists from the Swiss Federal Institute of Technology in Lausanne, Switzerland, out of the collective desire to improve the publishing options and provide better tools and services to researchers in the Internet age. Since then, Frontiers has become the fastest-growing open-access scholarly publisher, with a rapidly growing number of community-driven journals, more than 25,000 high-impact researchers across a wide range of academic fields serving on the editorial boards and more than 4 million monthly page views.

As of a Feb. 27, 2013 news release, Frontiers has partnered with the Nature Publishing Group (NPG) (Note: Links have been removed),

Emerging publisher Frontiers is joining Nature Publishing Group (NPG) in a strategic alliance to advance the global open science movement.

NPG, publisher of Nature, today announces a majority investment in the Swiss-based open access (OA) publisher Frontiers.

NPG and Frontiers will work together to empower researchers to change the way science is communicated, through open access publication and open science tools. Frontiers, led by CEO and neuroscientist Kamila Markram, will continue to operate with its own platform, brands, and policies.

Founded by scientists from École Polytechnique Fédérale de Lausanne (EPFL) in 2007, Frontiers is one of the fastest growing open access publishers, more than doubling articles published year on year. Frontiers now has a portfolio of open access journals in 14 fields of science and medicine, and published over 5,000 OA articles in 2012.

Working with NPG, the journal series “Frontiers in” will significantly expand in 2013-2014. Currently, sixty-three journals published by NPG offer open access options or are open access and NPG published over 2000 open access articles in 2012. Bilateral links between nature.com and frontiersin.org will ensure that open access papers are visible on both sites.

Frontiers and NPG will also be working together on innovations in open science tools, networking, and publication processes.

Frontiers is based at EPFL in Switzerland, and works out of Innovation Square, a technology park supporting science start-ups, and hosting R&D divisions of large companies such as Logitech & Nestlé.

As for this new venture, Frontiers for Young Minds, this appears to have been launched on Nov. 11, 2013. At least, that’s what I understand from this notice on Frontiers’ Facebook page (Note: Links have been removed),

Frontiers
November 11 [2013?]
Great news for kids, parents, teachers and neuroscientists! We have just launched the first Frontiers for Young Minds!

Frontiers in #Neuroscience for Young Minds is an #openaccess scientific journal that involves young people in the review of articles.

This has the double benefit of bringing kids into the world of science and offering scientists a platform for reaching out to the broadest of all audiences.

Frontiers for Young Minds is science edited for kids, by kids. Learn more and spread the word! http://bit.ly/1dijipy #sfn13

I am glad to see this effort and I wish all the parties involved the best of luck.

‘Touching’ infrared light, if you’re a rat, followed by announcement of US FDA approval of first commercial artificial retina (bionic eye)

Researcher Miguel Nicolelis and his colleagues at Duke University have implanted a neuroprosthetic device in the portion of a rat’s brain related to touch that allows the rats to see infrared light. From the Feb. 12, 2013 news release on EurekAlert,

Researchers have given rats the ability to “touch” infrared light, normally invisible to them, by fitting them with an infrared detector wired to microscopic electrodes implanted in the part of the mammalian brain that processes tactile information. The achievement represents the first time a brain-machine interface has augmented a sense in adult animals, said Duke University neurobiologist Miguel Nicolelis, who led the research team.

The experiment also demonstrated for the first time that a novel sensory input could be processed by a cortical region specialized in another sense without “hijacking” the function of this brain area said Nicolelis. This discovery suggests, for example, that a person whose visual cortex was damaged could regain sight through a neuroprosthesis implanted in another cortical region, he said.

Although the initial experiments tested only whether rats could detect infrared light, there seems no reason that these animals in the future could not be given full-fledged infrared vision, said Nicolelis. For that matter, cortical neuroprostheses could be developed to give animals or humans the ability to see in any region of the electromagnetic spectrum, or even magnetic fields. “We could create devices sensitive to any physical energy,” he said. “It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”

Interestingly, the research was supported by the US National Institute of Mental Health (as per the news release).

The researchers have more to say about what they’re doing,

“The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system,” said Thomson [Eric Thomson], first author of the study. “This is the first paper in which a neuroprosthetic device was used to augment function—literally enabling a normal animal to acquire a sixth sense.”

Here’s how they conducted the research,

The mammalian retina is blind to infrared light, and mammals cannot detect any heat generated by the weak infrared light used in the studies. In their experiments, the researchers used a test chamber that contained three light sources that could be switched on randomly. Using visible LED lights, they first taught each rat to choose the active light source by poking its nose into an attached port to receive a reward of a sip of water.

After training the rats, the researchers implanted in their brains an array of stimulating microelectrodes, each roughly a tenth the diameter of a human hair. The microelectrodes were implanted in the cortical region that processes touch information from the animals’ facial whiskers.

Attached to the microelectrodes was an infrared detector affixed to the animals’ foreheads. The system was programmed so that orientation toward an infrared light would trigger an electrical signal to the brain. The signal pulses increased in frequency with the intensity and proximity of the light.
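For readers who like to see the mechanics, the mapping described above (detector reading in, stimulation pulse rate out) is simple enough to sketch in a few lines of Python. To be clear, this is a minimal illustration of my own; the linear scaling and the frequency ceiling are assumptions, not values from the Nicolelis lab’s software.

```python
# Hypothetical sketch of the IR-to-stimulation mapping described above.
# The linear scaling and the 350 Hz ceiling are illustrative assumptions,
# not values taken from the published study.

def reading_to_pulse_rate(ir_reading, max_hz=350.0):
    """Map a normalized IR detector reading (0.0 to 1.0) to a pulse rate.

    Brighter or closer infrared sources yield a higher stimulation
    frequency, mirroring the behaviour described in the news release.
    """
    ir_reading = max(0.0, min(1.0, ir_reading))  # clamp out-of-range values
    return max_hz * ir_reading

# Example: a weak, distant source versus a strong, nearby one.
print(reading_to_pulse_rate(0.25))  # 87.5 Hz
print(reading_to_pulse_rate(0.75))  # 262.5 Hz
```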

The researchers returned the animals to the test chamber, gradually replacing the visible lights with infrared lights. At first in infrared trials, when a light was switched on, the animals would tend to poke randomly at the reward ports and scratch at their faces, said Nicolelis. This indicated that they were initially interpreting the brain signals as touch. However, over about a month, the animals learned to associate the brain signal with the infrared source. They began to actively “forage” for the signal, sweeping their heads back and forth to guide themselves to the active light source. Ultimately, they achieved a near-perfect score in tracking and identifying the correct location of the infrared light source.

To ensure that the animals were really using the infrared detector and not their eyes to sense the infrared light, the researchers conducted trials in which the light switched on, but the detector sent no signal to the brain. In these trials, the rats did not react to the infrared light.

Their finding could have an impact on notions of mammalian brain plasticity,

A key finding, said Nicolelis, was that enlisting the touch cortex for light detection did not reduce its ability to process touch signals. “When we recorded signals from the touch cortex of these animals, we found that although the cells had begun responding to infrared light, they continued to respond to whisker touch. It was almost like the cortex was dividing itself evenly so that the neurons could process both types of information.”

This finding of brain plasticity is in contrast with the “optogenetic” approach to brain stimulation, which holds that a particular neuronal cell type should be stimulated to generate a desired neurological function. Rather, said Nicolelis, the experiments demonstrate that a broad electrical stimulation, which recruits many distinct cell types, can drive a cortical region to adapt to a new source of sensory input.

All of this work is part of Nicolelis’ larger project, ‘Walk Again’, which is mentioned in my March 16, 2012 posting and includes a reference to some ethical issues raised by the work. Briefly, Nicolelis and an international team of collaborators are developing a brain-machine interface that will enable full mobility for people who are severely paralyzed. From the news release,

The Walk Again Project has recently received a $20 million grant from FINEP, a Brazilian research funding agency, to allow the development of the first brain-controlled whole-body exoskeleton aimed at restoring mobility in severely paralyzed patients. A first demonstration of this technology is expected to happen in the opening game of the 2014 Soccer World Cup in Brazil.

Expanding sensory abilities could also enable a new type of feedback loop to improve the speed and accuracy of such exoskeletons, said Nicolelis. For example, while researchers are now seeking to use tactile feedback to allow patients to feel the movements produced by such “robotic vests,” the feedback could also be in the form of a radio signal or infrared light that would give the person information on the exoskeleton limb’s position and encounter with objects.

There’s more information, including videos, about the work with infrared light and rats at the Nicolelis Lab website. Here’s a citation for and link to the team’s research paper,

Perceiving invisible light through a somatosensory cortical prosthesis by Eric E. Thomson, Rafael Carra, & Miguel A.L. Nicolelis. Nature Communications Published 12 Feb 2013 DOI: 10.1038/ncomms2497

Meanwhile, the US Food and Drug Administration (FDA) has approved the first commercial artificial retina, from the Feb. 14, 2013 news release,

The U.S. Food and Drug Administration (FDA) granted market approval to an artificial retina technology today, the first bionic eye to be approved for patients in the United States. The prosthetic technology was developed in part with support from the National Science Foundation (NSF).

The device, called the Argus® II Retinal Prosthesis System, transmits images from a small, eye-glass-mounted camera wirelessly to a microelectrode array implanted on a patient’s damaged retina. The array sends electrical signals via the optic nerve, and the brain interprets a visual image.

The FDA approval currently applies to individuals who have lost sight as a result of severe to profound retinitis pigmentosa (RP), an ailment that affects one in every 4,000 Americans. The implant allows some individuals with RP, who are completely blind, to locate objects, detect movement, improve orientation and mobility skills and discern shapes such as large letters.

The Argus II is manufactured by, and will be distributed by, Second Sight Medical Products of Sylmar, Calif., which is part of the team of scientists and engineers from the university, federal and private sectors who spent nearly two decades developing the system with public and private investment.

Scientists are often compelled to do research in an area inspired by family,

“Seeing my grandmother go blind motivated me to pursue ophthalmology and biomedical engineering to develop a treatment for patients for whom there was no foreseeable cure,” says the technology’s co-developer, Mark Humayun, associate director of research at the Doheny Eye Institute at the University of Southern California and director of the NSF Engineering Research Center for Biomimetic MicroElectronic Systems (BMES). …”

There’s also been considerable government investment,

The effort by Humayun and his colleagues has received early and continuing support from NSF, the National Institutes of Health and the Department of Energy, with grants totaling more than $100 million. The private sector’s support nearly matched that of the federal government.

“The retinal implant exemplifies how NSF grants for high-risk, fundamental research can directly result in ground-breaking technologies decades later,” said Acting NSF Assistant Director for Engineering Kesh Narayanan. “In collaboration with the Second Sight team and the courageous patients who volunteered to have experimental surgery to implant the first-generation devices, the researchers of NSF’s Biomimetic MicroElectronic Systems Engineering Research Center are developing technologies that may ultimately have as profound an impact on blindness as the cochlear implant has had for hearing loss.”

Leaving aside controversies about cochlear implants and the possibility of such controversies with artificial retinas (bionic eyes), it’s interesting to note that this device is dependent on an external camera,

The researchers’ efforts have bridged cellular biology–necessary for understanding how to stimulate the retinal ganglion cells without permanent damage–with microelectronics, which led to the miniaturized, low-power integrated chip for performing signal conversion, conditioning and stimulation functions. The hardware was paired with software processing and tuning algorithms that convert visual imagery to stimulation signals, and the entire system had to be incorporated within hermetically sealed packaging that allowed the electronics to operate in the vitreous fluid of the eye indefinitely. Finally, the research team had to develop new surgical techniques in order to integrate the device with the body, ensuring accurate placement of the stimulation electrodes on the retina.

“The artificial retina is a great engineering challenge under the interdisciplinary constraint of biology, enabling technology, regulatory compliance, as well as sophisticated design science,” adds Liu [Wentai Liu of the University of California, Los Angeles]. “The artificial retina provides an interface between biotic and abiotic systems. Its unique design characteristics rely on system-level optimization, rather than the more common practice of component optimization, to achieve miniaturization and integration. Using the most advanced semiconductor technology, the engine for the artificial retina is a ‘system on a chip’ of mixed voltages and mixed analog-digital design, which provides self-contained power and data management and other functionality. This design for the artificial retina facilitates both surgical procedures and regulatory compliance.”

The Argus II design consists of an external video camera system matched to the implanted retinal stimulator, which contains a microelectrode array that spans 20 degrees of visual field. [emphasis mine] …

“The external camera system-built into a pair of glasses-streams video to a belt-worn computer, which converts the video into stimulus commands for the implant,” says Weiland [USC researcher Jim Weiland], “The belt-worn computer encodes the commands into a wireless signal that is transmitted to the implant, which has the necessary electronics to receive and decode both wireless power and data. Based on those data, the implant stimulates the retina with small electrical pulses. The electronics are hermetically packaged and the electrical stimulus is delivered to the retina via a microelectrode array.”
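To make that camera-to-implant pipeline a little more concrete, here is a toy sketch of the central step: downsampling a grayscale frame onto the implant’s electrode grid and converting brightness into per-electrode stimulus levels. The 6 x 10 grid matches the Argus II’s 60 electrodes, but the current ceiling, the brightness-to-current scaling, and the function names are my own illustrative assumptions, not Second Sight’s algorithms.

```python
# Toy sketch of the video-to-stimulus conversion described above. The 6x10
# grid matches the Argus II's 60-electrode array; the current ceiling and
# the brightness-to-current scaling are illustrative assumptions only.
import numpy as np

GRID_ROWS, GRID_COLS = 6, 10   # Argus II electrode array layout
MAX_CURRENT_UA = 200.0         # assumed per-electrode amplitude ceiling (microamps)

def frame_to_stimulus(frame):
    """Map a grayscale frame (2-D array, values 0-255) to electrode currents."""
    h, w = frame.shape
    row_bands = np.array_split(np.arange(h), GRID_ROWS)
    col_bands = np.array_split(np.arange(w), GRID_COLS)
    currents = np.zeros((GRID_ROWS, GRID_COLS))
    for i, rows in enumerate(row_bands):
        for j, cols in enumerate(col_bands):
            patch = frame[np.ix_(rows, cols)]  # region 'seen' by one electrode
            currents[i, j] = patch.mean() / 255.0 * MAX_CURRENT_UA
    return currents  # one stimulation amplitude per electrode

# Example: a frame that is dark on the left and bright on the right.
frame = np.tile(np.linspace(0, 255, 160), (120, 1))
print(frame_to_stimulus(frame).round(1))
```

Of course, as the news release makes plain, the hard engineering lives around this step: wireless power and data, hermetic packaging, and stimulating the retinal ganglion cells without permanent damage.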

You can see some footage of people using artificial retinas in the context of Grégoire Cosendai’s TEDx Vienna presentation. As I noted in my Aug. 18, 2011 posting where this talk and developments in human enhancement are mentioned, the relevant material can be seen at approximately 13 mins., 25 secs. in Cosendai’s talk.

Second Sight Medical Products can be found here.

Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football)

The idea that a monkey in the US could control a robot’s movements in Japan is stunning. Even more stunning is the fact that the research is four years old. It was discussed publicly in a Jan. 15, 2008 article by Sharon Gaudin for Computer World,

Scientists in the U.S. and Japan have successfully used a monkey’s brain activity to control a humanoid robot — over the Internet.

This research may only be a few years away from helping paralyzed people walk again by enabling them to use their thoughts to control exoskeletons attached to their bodies, according to Miguel Nicolelis, a professor of neurobiology at Duke University and lead researcher on the project.

“This is an attempt to restore mobility to people,” said Nicolelis. “We had the animal trained to walk on a treadmill. As it walked, we recorded its brain activity that generated its locomotion pattern. As the animal was walking and slowing down and changing his pattern, his brain activity was driving a robot in Japan in real time.”
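The decoding step Nicolelis describes, turning recorded brain activity into walking commands for a faraway robot, has a familiar general shape in this literature: bin the spike counts, apply a trained linear decoder, and stream the resulting kinematics over the network. Here is a minimal sketch along those lines; the weights, bin size, message format, and network endpoint are all invented for illustration and are not the actual Duke-JST pipeline.

```python
# Minimal sketch of decode-and-stream: binned spike counts pass through a
# linear decoder and the resulting gait commands are sent over the network.
# Weights, bin size, message format, and endpoint are all illustrative
# assumptions, not the real Duke-JST system.
import json
import socket

import numpy as np

N_NEURONS = 100
rng = np.random.default_rng(0)

# In practice these weights would be fit by regression against treadmill
# kinematics recorded while the monkey walked.
W = rng.normal(scale=0.1, size=(2, N_NEURONS))

def decode(spike_counts):
    """Turn one time bin of spike counts into [hip, knee] angle commands."""
    return W @ spike_counts

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ROBOT_ADDR = ("127.0.0.1", 9000)  # placeholder address for the robot controller

for _ in range(10):                                # stand-in for the live loop
    counts = rng.poisson(lam=5.0, size=N_NEURONS)  # placeholder spike counts
    hip, knee = decode(counts)
    msg = json.dumps({"hip": float(hip), "knee": float(knee)})
    sock.sendto(msg.encode(), ROBOT_ADDR)          # one command per time bin
```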

This video clip features an animated monkey simulating control of a real robot in Japan (the Computational Brain Project of the Japan Science and Technology Agency (JST) in Kyoto partnered with Duke University for this project),

I wonder if the Duke researchers or communications staff thought that the sight of real rhesus monkeys on treadmills might be too disturbing. While we’re on the topic of simulation, I wonder where the robot in the clip actually resides. Quibbles about the video clip aside, I have no doubt that the research took place.

There’s a more recent (Oct. 5, 2011) article about the work being done in Nicolelis’ laboratory at Duke University, by Ed Yong for Discover Magazine (previously mentioned in my Oct. 6, 2011 posting),

This is where we are now: at Duke University, a monkey controls a virtual arm using only its thoughts. Miguel Nicolelis had fitted the animal with a headset of electrodes that translates its brain activity into movements. It can grab virtual objects without using its arms. It can also feel the objects without its hands, because the headset stimulates its brain to create the sense of different textures. Monkey think, monkey do, monkey feel – all without moving a muscle.

And this is where Nicolelis wants to be in three years: a young quadriplegic Brazilian man strolls confidently into a massive stadium. He controls his four prosthetic limbs with his thoughts, and they in turn send tactile information straight to his brain. The technology melds so fluidly with his mind that he confidently runs up and delivers the opening kick of the 2014 World Cup.

This sounds like a far-fetched dream, but Nicolelis – a big soccer fan – is talking to the Brazilian government to make it a reality.

According to Yong, Nicolelis has created an international consortium to support the Walk Again Project. From the project home page,

The Walk Again Project, an international consortium of leading research centers around the world represents a new paradigm for scientific collaboration among the world’s academic institutions, bringing together a global network of scientific and technological experts, distributed among all the continents, to achieve a key humanitarian goal.

The project’s central goal is to develop and implement the first BMI [brain-machine interface] capable of restoring full mobility to patients suffering from a severe degree of paralysis. This lofty goal will be achieved by building a neuroprosthetic device that uses a BMI as its core, allowing the patients to capture and use their own voluntary brain activity to control the movements of a full-body prosthetic device. This “wearable robot,” also known as an “exoskeleton,” will be designed to sustain and carry the patient’s body according to his or her mental will.

In addition to proposing to develop new technologies that aim at improving the quality of life of millions of people worldwide, the Walk Again Project also innovates by creating a complete new paradigm for global scientific collaboration among leading academic institutions worldwide. According to this model, a worldwide network of leading scientific and technological experts, distributed among all the continents, come together to participate in a major, non-profit effort to make a fellow human being walk again, based on their collective expertise. These world renowned scholars will contribute key intellectual assets as well as provide a base for continued fundraising capitalization of the project, setting clear goals to establish fundamental advances toward restoring full mobility for patients in need.

It’s the exoskeleton described on the Walk Again Project home page that Nicolelis is hoping will enable a young Brazilian quadriplegic to deliver the opening kick for the 2014 World Cup (soccer/football) in Brazil.

Cars that read minds?

Today’s blogging seems to have acquired a transportation theme. Here’s another item about a car, this one able to read minds. From the Sept. 28, 2011 news item on physorg.com,

In the future, thinking about turning left may no longer be just a thought. Japanese auto giant Nissan and a Swiss university are developing cars that scan the driver’s thoughts and prepare the vehicle for the next move.

I found more information at the Nissan website in their Sept. 28, 2011 news release,

As the driver thinks about turning left ahead, for example, so the car will prepare itself for the manoeuvre, selecting the correct speed and road positioning, before completing the turn. The aim? To ensure that our roads are as safe as possible and that the freedom that comes with personal mobility remains at the heart of society.

Nissan is undertaking this pioneering work in collaboration with the École Polytechnique Fédérale de Lausanne in Switzerland (EPFL). Far reaching research on Brain Machine Interface (BMI) systems by scientists at EPFL already allows disabled users to manoeuvre their wheelchairs by thought transference alone. The next stage is to adapt the BMI processes to the car – and driver – of the future.

Professor José del R. Millán, leading the project, said: “The idea is to blend driver and vehicle intelligence together in such a way that eliminates conflicts between them, leading to a safer motoring environment.”

Using brain activity measurement, eye movement patterns and by scanning the environment around the car in conjunction with the car’s own sensors, it should be possible to predict what the driver plans to do – be it a turn, an overtake, a lane change – and then assist with the manoeuvre in complete safety, thus improving the driving experience.
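Framed as a machine-learning problem, what Nissan and EPFL are describing is a classifier that fuses brain-signal features, gaze, and vehicle context into a prediction of the driver’s next manoeuvre. Here is a deliberately toy version trained on synthetic data; the feature set, the labels, and the choice of model are my assumptions, not the actual EPFL system.

```python
# Toy sketch of the sensor-fusion idea: fuse brain-signal features, gaze,
# and vehicle context into one feature vector, then classify the driver's
# intended manoeuvre. Features, labels, and data are illustrative
# assumptions; the real EPFL/Nissan system is not public here.
import numpy as np
from sklearn.linear_model import LogisticRegression

MANOEUVRES = ["keep_lane", "turn_left", "turn_right", "overtake"]

rng = np.random.default_rng(1)
# Each row: [eeg_feature_1, eeg_feature_2, gaze_angle_deg, speed_kmh]
X_train = rng.normal(size=(400, 4))
y_train = rng.integers(0, len(MANOEUVRES), size=400)  # stand-in labels

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A hypothetical moment in traffic: the driver glancing left at moderate speed.
now = [[0.3, -1.2, -15.0, 48.0]]
print("Predicted manoeuvre:", MANOEUVRES[int(clf.predict(now)[0])])
```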

Here’s an image of some of the lab work being performed,

Nissan Brain-Computer Interface. Photo Credit: EPFL / Alain Herzog

I wonder what it’s going to look like when it’s ready for testing with real people. I’m pretty sure most people are not going to be interested in wearing head caps for very long. I imagine the researchers have come to this conclusion too, which means that they are likely considering some very sophisticated sensors. (I hope so; otherwise the researchers are somewhat delusional. Sadly, this can be true. I speak from experience dealing with technical experts who seemed to be designing their software for failure, i.e., the average person using it would be likely to make an error.)