Not unexpectedly, there’s a news item about the science of Iron Man (it’s getting quite common for the science in movies to be promoted and discussed) just a few weeks before the movie Captain America: Civil War or, as it’s also known, Captain America vs. Iron Man opens in the US. From an April 26, 2016 news item on phys.org,
… how much of our favourite superheroes’ power lies in science and how much is complete fiction?
As Iron Man’s name suggests, he wears a suit of “iron” which gives him his abilities—superhuman strength, flight and an arsenal of weapons—and protects him from harm.
In scientific parlance, the Iron Man suit is an exoskeleton which is worn outside the body to enhance it.
An April 26, 2016 posting by Chris Marr on the ScienceNetwork Western Australia blog, which originated the news item, provides an interesting overview of exoskeletons and some of the scientific obstacles still to be overcome before they become commonplace,
In the 1960s, the first real powered exoskeleton appeared—a machine integrated with the human frame and movements which provided the wearer with 25 times his natural lifting capacity.
The major drawback then was that the unit itself weighed in at 680kg.
UWA [University of Western Australia] Professor Adrian Keating suggests that some of the technology seen in the latest Marvel blockbuster, such as controlling the exoskeleton with simple thoughts, will be available in the near future by leveraging ongoing advances of multi-disciplinary research teams.
“Dust grain-sized micromachines could be programmed to cooperate to form reconfigurable materials such as the retractable face mask, for example,” Prof Keating says.
However, all of these devices are in need of a power unit small enough to be carried yet providing enough capacity for more than a few minutes of superhuman use, he says.
Does anyone have a spare Arc Reactor?
Currently, most exoskeleton development has been for medical applications, with devices designed to give mobility to amputees and paraplegics, and there are a number in commercial production and use.
Dr Lei Cui, who lectures in Mechatronics at Curtin University, has recently developed both a hand and leg exoskeleton, designed for use by patients who have undergone surgery or have nerve dysfunction, spinal injuries or muscular dysfunction.
“Currently we use an internal battery that lasts about two hours in the glove, which can be programmed for only four different movement patterns,” Dr Cui says.
Dr Cui’s exoskeletons are made from plastic, making them light but offering little protection compared to the titanium exterior of Stark’s favourite suit.
It’s clear that we are a long way from being able to produce a working Iron Man suit at all, let alone one that flies, protects the wearer and has the capacity to fight back.
This is not the first time I’ve featured a science and pop culture story here. You can check out my April 28, 2014 posting for a story about how Captain America’s shield could be a supercapacitor (it also has a link to a North Carolina State University blog featuring science and other comic book heroes) and there is my May 6, 2013 post about Iron Man 3 and a real life injectable nano-network.
As for ScienceNetwork Western Australia, here’s more from their About SWNA page,
ScienceNetwork Western Australia (SNWA) is an online science news service devoted to sharing WA’s achievements in science and technology.
Our team of freelance writers work with in-house editors based at Scitech to bring you news from all fields of science, and from the research, government and private industry sectors working throughout the state. Our writers also produce profile stories on scientists. We collaborate with leading WA institutions to bring you Perspectives from prominent WA scientists and opinion leaders.
Since our commencement in 2003 we have grown to share WA’s stories with local, national and global audiences. Our articles are regularly republished in print and online media in the metropolitan and regional areas.
Bravo to the Western Australia government! I wish there were initiatives of this type in Canada; the closest we have is the French-language Agence Science-Presse supported by the Province of Québec.
Six years ago, he was paralyzed in a diving accident. Today, he participates in clinical sessions during which he can grasp and swipe a credit card or play a guitar video game with his own fingers and hand. These complex functional movements are driven by his own thoughts and a prototype medical system that are detailed in a study published online today in the journal Nature.
The device, called NeuroLife, was invented at Battelle, which teamed with physicians and neuroscientists from The Ohio State University Wexner Medical Center to develop the research approach and perform the clinical study. Ohio State doctors identified the study participant and implanted a tiny computer chip into his brain.
That pioneering participant, Ian Burkhart, is a 24-year-old quadriplegic from Dublin, Ohio, and the first person to use this technology. This electronic neural bypass for spinal cord injuries reconnects the brain directly to muscles, allowing voluntary and functional control of a paralyzed limb by using his thoughts. The device interprets thoughts and brain signals then bypasses his injured spinal cord and connects directly to a sleeve that stimulates the muscles that control his arm and hand.
“We’re showing for the first time that a quadriplegic patient is able to improve his level of motor function and hand movements,” said Dr. Ali Rezai, a co-author of the study and a neurosurgeon at Ohio State’s Wexner Medical Center.
Burkhart first demonstrated the neural bypass technology in June 2014, when he was able to open and close his hand simply by thinking about it. Now, he can perform more sophisticated movements with his hands and fingers such as picking up a spoon or picking up and holding a phone to his ear — things he couldn’t do before and which can significantly improve his quality of life.
“It’s amazing to see what he’s accomplished,” said Nick Annetta, electrical engineering lead for Battelle’s team on the project. “Ian can grasp a bottle, pour the contents of the bottle into a jar and put the bottle back down. Then he takes a stir bar, grips that and then stirs the contents of the jar that he just poured and puts it back down. He’s controlling it every step of the way.”
The neural bypass technology combines algorithms that learn and decode the user’s brain activity and a high-definition muscle stimulation sleeve that translates neural impulses from the brain and transmits new signals to the paralyzed limb.
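The release doesn’t describe Battelle’s algorithms in any detail, but the decode-then-stimulate loop it sketches can be illustrated with a toy example. Everything here (the channel count, the linear decoder, the movement classes, the electrode patterns) is my own assumption for illustration, not the actual NeuroLife system:

```python
import numpy as np

# Toy sketch of a neural bypass: decode a movement intention from
# neural features, then map it to a muscle-stimulation pattern.
rng = np.random.default_rng(0)
N_CHANNELS = 96                      # hypothetical electrode-array channels
MOVEMENTS = ["rest", "open_hand", "close_hand", "rotate_wrist"]

# A real decoder would learn these weights from recorded training
# sessions; random placeholders stand in for them here.
weights = rng.normal(size=(N_CHANNELS, len(MOVEMENTS)))

# Each decoded movement maps to a (made-up) set of sleeve electrodes.
stim_patterns = {
    "rest": [],
    "open_hand": [3, 7, 12],
    "close_hand": [1, 5, 9],
    "rotate_wrist": [2, 8, 14],
}

def decode(features):
    """Pick the movement whose decoder score is largest."""
    scores = features @ weights
    return MOVEMENTS[int(np.argmax(scores))]

def stimulation_for(features):
    """Bypass the spinal cord: neural features -> sleeve electrodes."""
    return stim_patterns[decode(features)]

features = rng.normal(size=N_CHANNELS)   # one time-window of neural features
print(decode(features), stimulation_for(features))
```

The point of the sketch is the plumbing: brain signals go in one end, and what comes out the other end is not a cursor movement but a stimulation command for the paralyzed limb itself.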
The Battelle team has been working on this technology for more than a decade. To develop the algorithms, software and stimulation sleeve, Battelle scientists first recorded neural impulses from an electrode array implanted in a paralyzed person’s brain. They used that recorded data to illustrate the device’s effect on the patient and prove the concept.
Four years ago, former Battelle researcher Chad Bouton and his team began collaborating with Ohio State Neurological Institute researchers and clinicians Rezai and Dr. Jerry Mysiw to design the clinical trials and validate the feasibility of using the neural bypass technology in patients.
“In the 30 years I’ve been in this field, this is the first time we’ve been able to offer realistic hope to people who have very challenging lives,” said Mysiw, chair of the Department of Physical Medicine and Rehabilitation at Ohio State. “What we’re looking to do is help these people regain more control over their bodies.”
During a three-hour surgery in April 2014, Rezai implanted a computer chip smaller than a pea onto the motor cortex of Burkhart’s brain.
The Ohio State and Battelle teams worked together to figure out the correct sequence of electrodes to stimulate to allow Burkhart to move his fingers and hand functionally. For example, Burkhart uses different brain signals and muscles to rotate his hand, make a fist or pinch his fingers together to grasp an object. As part of the study, Burkhart worked for months using the electrode sleeve to stimulate his forearm to rebuild his atrophied muscles so they would be more responsive to the electric stimulation.
“During the last decade, we’ve learned how to decipher brain signals in patients who are completely paralyzed and now, for the first time, those thoughts are being turned into movement,” said study co-author Bouton, who directed Battelle’s team before he joined the New York-based Feinstein Institute for Medical Research. “Our findings show that signals recorded from within the brain can be re-routed around an injury to the spinal cord, allowing restoration of functional movement and even movement of individual fingers.”
Burkhart said it was an easy decision to participate in the FDA-approved clinical trial at Ohio State’s Wexner Medical Center because he wanted to try to help others with spinal cord injuries. “I just kind of think that it’s my obligation to society,” Burkhart said. “If someone else had an opportunity to do it in some other part of the world, I would hope that they would commit their time so that everyone can benefit from it in the future.”
Rezai and the team from Battelle agree that this technology holds the promise to help patients affected by various brain and spinal cord injuries such as strokes and traumatic brain injury to be more independent and functional.
“We’re hoping that this technology will evolve into a wireless system connecting brain signals and thoughts to the outside world to improve the function and quality of life for those with disabilities,” Rezai said. “One of our major goals is to make this readily available to be used by patients at home.”
Burkhart is the first of a potential five participants in a clinical study. Mysiw and Rezai have identified a second patient who is scheduled to start the study in the summer.
“Participating in this research has changed me in the sense that I have a lot more hope for the future now,” Burkhart said. “I always did have a certain level of hope, but now I know, first-hand, that there are going to be improvements in science and technology that will make my life better.”
This paper is behind a paywall but there is an in-depth April 13, 2016 article by Linda Geddes in Nature providing nuggets of new insight such as this,
Previous studies have suggested that after spinal-cord injuries, the brain undergoes ‘reorganization’ — a rewiring of its connections. But this new work suggests that the degree of reorganization occurring after such injuries may be less than previously assumed. “It gives us a lot of hope that there are perhaps not as many neural changes in the brain as we might have imagined [emphasis mine] after an injury like this, and we can bypass damaged areas of the spinal cord to regain movement,” says Bouton.
The Geddes article is open access.
Finally, there’s an April 13, 2016 article by Will Oremus for Slate.com, which notes that this story doesn’t have a fairy-tale ending: there’s a possibility the chip will be removed in the near future, since the US Food and Drug Administration’s approval of the device was conditional due to this,
Burkhart knows the device was never meant to last forever. The brain implant’s efficacy gradually degrades over time due to scarring in the brain tissue, and eventually that hardware degradation will start to undo the progress that Burkhart and the software have made together.
He told me he has accepted that his newfound mobility is temporary, and that the progress he has made is likely to benefit posterity more than it benefits him. “I now know that when I’m connected to the system I can do all these great things. It won’t be too much of a shock to me [when it’s over], because even now I can only use the system for a few hours a week when I’m down in the lab. But it will be something I’ll certainly miss.”
It’s not the first time someone’s tried to redesign a prosthetic (an Aug. 7, 2009 posting touched on reimagining prosthetic arms and other topics) but it’s the first project I’ve seen where children are the featured designers. A Jan. 27, 2016 article by Emily Price for The Guardian describes the idea,
In a hidden room in the back of a pier overlooking the San Francisco Bay, a young girl shoots glitter across the room with a flick of her wrist. On the other side of the room, a boy is shooting darts from his wrist – some travelling at least 20ft high, onto a landing above. It feels like a superhero training center or a party for the next generation of X-Men and, in a way, it is.
This is Superhero Cyborgs, an event that brings six children together with 3D design specialists and augmentation experts to create unique prosthetics that will turn each child into a kind of superhero.
The children are aged between 10 and 15 and all have upper-limb differences, having either been born without a hand or having lost a limb. They are spending five days with prosthetics experts and a design team from 3D software firm Autodesk, creating prosthetics that turn a replacement hand into something much more special.
“We started asking: ‘Why are we trying to replicate the functionality of a hand?’ when we could really do anything. Things that are way cooler that hands aren’t able to do,” says Kate Ganim, co-founder and co-director at KidMob, the nonprofit group that organised this project in partnership with San Rafael, California 3D software firm Autodesk. KidMob first ran this type of project at Rhode Island’s Brown University in 2014.
Details of each superhero prosthetic are being posted on the DIY site Instructables and hacking site Project Ignite in the hope that it inspires other groups, schools and individuals to follow suit. “A classroom might work on building a project and then donate a finished hand to someone they know or appoint it to someone in the community who is in need,” O’Rourke said.
I searched the Project Ignite website using the term ‘superhero cyborg’ and did not receive a single hit. I also used the search term on the Instructables website and got many hits but did not see one that resembled any of the project descriptions in Price’s article. Unfortunately, Price did not offer any suggestions for search terms.
Getting back to the project, Jessica Hullinger has written a March 28, 2016 article about Superhero Cyborgs for Fast Company where she follows one of the participants (Note: Links have been removed),
Jordan [Jordan Reeves, a 10-year-old from Columbia, Missouri] was born with a limb difference: her left arm stops just above the elbow. When she found out she was headed to the Superhero Cyborg workshop, she was over the moon. “I was like, ‘Wow, I can’t believe I’m actually doing this,'” she says.
Over the course of five days, she and five other kids between the ages of 10 and 15 worked with design experts and engineers from Autodesk to brainstorm ideas. “Basically, if they could design the prosthetic or body modification of their dreams in a superhero context, what would that look like?” asks Sarah O’Rourke, a senior product marketing manager with Autodesk.
For Jordan, it looks very sparkly. Her plan was to transform her arm into a cannon that spread a delightful cloud of glitter wherever she went. She started with a few sketches. Then she created a 3-D-printed cast of her arm and a plastic cuff made to fit over it, for prototyping purposes. The kids used Autodesk’s 3-D design tools like TinkerCAD and Fusion 360 to test their prototypes. …
“For us, our interest is in getting kids familiar with taking an idea from concept to execution and learning the skills along the way to do that,” says Ganim. “Ideally, it’s not about the end product they end up with out of workshop; it’s more about realizing they’re not just subject to what’s available on the market. It creates this interesting closed loop system where they’re both designer and end user. That is very powerful.”
The workshop is over now but the children will continue working on their designs for a few months and, in some cases, creating prostheses that can have practical applications.
Sydney: A dual water gun shooter that will automatically refill itself
I got more information on KIDmob on the About page,
KIDmob is the mobile, kid-integrated design firm. We are a Bay Area fiscally sponsored not-for-profit organization that believes design education is an opportunity for creative engagement and community empowerment. We take our passion on the road to bring our innovative approach to local communities around the world.
We engage in the design process through project-based learning. KIDmob workshops use the design process as a beginning curriculum framework on which to build a customized local project brief, based on a partner-identified need. Our workshops facilitate partners in devising imaginative solutions for their community, by their community. We strive to foster local stewardship within all of our projects.
We promote an energetic, hands-on approach to learning – our workshops create an immersive environment of moving, shaking, sketching, whirling, splatting, slicing, sawing, jitterbugging creativity. When we are not swimming in post-it notes, we like to explore all kinds of technologies, from pencils to circuitry mills, as tools for creative expression.
From what I understand, one of the most difficult aspects of an amputation is the loss of touch, so bravo to the engineers. From a March 8, 2016 news item on ScienceDaily,
An amputee was able to feel smoothness and roughness in real-time with an artificial fingertip that was surgically connected to nerves in his upper arm. Moreover, the nerves of non-amputees can also be stimulated to feel roughness, without the need of surgery, meaning that prosthetic touch for amputees can now be developed and safely tested on intact individuals.
The technology to deliver this sophisticated tactile information was developed by Silvestro Micera and his team at EPFL (Ecole polytechnique fédérale de Lausanne) and SSSA (Scuola Superiore Sant’Anna) together with Calogero Oddo and his team at SSSA. The results, published today in eLife, provide new and accelerated avenues for developing bionic prostheses, enhanced with sensory feedback.
“The stimulation felt almost like what I would feel with my hand,” says amputee Dennis Aabo Sørensen about the artificial fingertip connected to his stump. He continues, “I still feel my missing hand, it is always clenched in a fist. I felt the texture sensations at the tip of the index finger of my phantom hand.”
Sørensen is the first person in the world to recognize texture using a bionic fingertip connected to electrodes that were surgically implanted above his stump.
Nerves in Sørensen’s arm were wired to an artificial fingertip equipped with sensors. A machine controlled the movement of the fingertip over different pieces of plastic engraved with different patterns, smooth or rough. As the fingertip moved across the textured plastic, the sensors generated an electrical signal. This signal was translated into a series of electrical spikes, imitating the language of the nervous system, then delivered to the nerves.
Sørensen could distinguish between rough and smooth surfaces 96% of the time.
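The “series of electrical spikes, imitating the language of the nervous system” can be sketched, very roughly, as a threshold-crossing (delta) encoder: emit a spike each time the sensor signal moves a fixed step away from its level at the last spike. This is a generic illustration of spike encoding, not the team’s published model:

```python
import math

def delta_spike_encode(signal, step=0.1):
    """Return spike times: one spike each time the signal moves one
    `step` away from the level at the last spike (a delta modulator)."""
    spikes = []
    level = signal[0]
    for t, x in enumerate(signal):
        while x - level >= step:      # upward crossing -> spike
            level += step
            spikes.append(t)
        while level - x >= step:      # downward crossing -> spike
            level -= step
            spikes.append(t)
    return spikes

# A rough surface makes the sensor signal swing more than a smooth
# one, so it produces many more spikes (synthetic signals here).
rough  = [0.5 * math.sin(0.8 * t) for t in range(100)]
smooth = [0.05 * math.sin(0.8 * t) for t in range(100)]
print(len(delta_spike_encode(rough)), len(delta_spike_encode(smooth)))
```

With an encoding along these lines, texture shows up in the spike rate and timing, which is the kind of information the implanted electrodes then deliver to the nerves.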
In a previous study, Sorensen’s implants were connected to a sensory-enhanced prosthetic hand that allowed him to recognize shape and softness. In this new publication about texture in the journal eLife, the bionic fingertip attains a superior level of touch resolution.
Simulating touch in non-amputees
This same experiment testing coarseness was performed on non-amputees, without the need of surgery. The tactile information was delivered through fine needles that were temporarily attached to the arm’s median nerve through the skin. The non-amputees were able to distinguish roughness in textures 77% of the time.
But does this information about touch from the bionic fingertip really resemble the feeling of touch from a real finger? The scientists tested this by comparing brain-wave activity of the non-amputees, once with the artificial fingertip and then with their own finger. The brain scans collected by an EEG cap on the subject’s head revealed that activated regions in the brain were analogous.
The research demonstrates that the needles relay the information about texture in much the same way as the implanted electrodes, giving scientists new protocols for accelerating improvements in touch resolution in prosthetics.
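“Analogous activated regions” is, at bottom, a similarity measure over channel activity. A minimal sketch of that comparison follows; the data are synthetic, the per-channel power measure is my own simplification, and real EEG analysis involves far more preprocessing:

```python
import numpy as np

def channel_power(epochs):
    """Mean signal power per EEG channel, averaged over trials.
    `epochs` has shape (trials, channels, samples)."""
    return (epochs ** 2).mean(axis=(0, 2))

def topography_similarity(a, b):
    """Pearson correlation between two per-channel power maps."""
    return float(np.corrcoef(channel_power(a), channel_power(b))[0, 1])

rng = np.random.default_rng(1)
base = rng.normal(size=(1, 32, 1))   # a shared spatial pattern, 32 channels
# Two conditions sharing the same topography, with independent noise,
# standing in for "artificial fingertip" and "own finger" recordings:
artificial = base * rng.normal(1, 0.1, size=(20, 32, 256))
natural    = base * rng.normal(1, 0.1, size=(20, 32, 256))
print(round(topography_similarity(artificial, natural), 2))
```

A correlation near 1 between the two activity maps is the synthetic analogue of what the researchers reported: the artificial fingertip lights up much the same regions as the real finger.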
“This study merges fundamental sciences and applied engineering: it provides additional evidence that research in neuroprosthetics can contribute to the neuroscience debate, specifically about the neuronal mechanisms of the human sense of touch,” says Calogero Oddo of the BioRobotics Institute of SSSA. “It will also be translated to other applications such as artificial touch in robotics for surgery, rescue, and manufacturing.”
Over ten years ago I attended a show at the Vancouver (Canada) Art Gallery titled ‘Massive Change’ where I saw part of a nose or ear being grown in a petri dish (the work was from an Israeli laboratory) and that was my introduction to tissue engineering. For anyone who’s been following the tissue engineering story, 3D printers have sped up the growth process considerably. More recently, researchers at Wake Forest Baptist Medical Center (North Carolina, US) have announced another step forward for growing organs and body parts, from a Feb. 15, 2016 Wake Forest Baptist Medical Center news release on EurekAlert,
Using a sophisticated, custom-designed 3D printer, regenerative medicine scientists at Wake Forest Baptist Medical Center have proved that it is feasible to print living tissue structures to replace injured or diseased tissue in patients.
Reporting in Nature Biotechnology, the scientists said they printed ear, bone and muscle structures. When implanted in animals, the structures matured into functional tissue and developed a system of blood vessels. Most importantly, these early results indicate that the structures have the right size, strength and function for use in humans.
“This novel tissue and organ printer is an important advance in our quest to make replacement tissue for patients,” said Anthony Atala, M.D., director of the Wake Forest Institute for Regenerative Medicine (WFIRM) and senior author on the study. “It can fabricate stable, human-scale tissue of any shape. With further development, this technology could potentially be used to print living tissue and organ structures for surgical implantation.”
With funding from the Armed Forces Institute of Regenerative Medicine, a federally funded effort to apply regenerative medicine to battlefield injuries, Atala’s team aims to implant bioprinted muscle, cartilage and bone in patients in the future.
Tissue engineering is a science that aims to grow replacement tissues and organs in the laboratory to help solve the shortage of donated tissue available for transplants. The precision of 3D printing makes it a promising method for replicating the body’s complex tissues and organs. However, current printers based on jetting, extrusion and laser-induced forward transfer cannot produce structures with sufficient size or strength to implant in the body.
The Integrated Tissue and Organ Printing System (ITOP), developed over a 10-year period by scientists at the Institute for Regenerative Medicine, overcomes these challenges. The system deposits both bio-degradable, plastic-like materials to form the tissue “shape” and water-based gels that contain the cells. In addition, a strong, temporary outer structure is formed. The printing process does not harm the cells.
A major challenge of tissue engineering is ensuring that implanted structures live long enough to integrate with the body. The Wake Forest Baptist scientists addressed this in two ways. They optimized the water-based “ink” that holds the cells so that it promotes cell health and growth and they printed a lattice of micro-channels throughout the structures. These channels allow nutrients and oxygen from the body to diffuse into the structures and keep them alive while they develop a system of blood vessels.
It has been previously shown that tissue structures without ready-made blood vessels must be smaller than 200 microns (0.007 inches) for cells to survive. In these studies, a baby-sized ear structure (1.5 inches) survived and showed signs of vascularization at one and two months after implantation.
“Our results indicate that the bio-ink combination we used, combined with the micro-channels, provides the right environment to keep the cells alive and to support cell and tissue growth,” said Atala.
Another advantage of the ITOP system is its ability to use data from CT and MRI scans to “tailor-make” tissue for patients. For a patient missing an ear, for example, the system could print a matching structure.
Several proof-of-concept experiments demonstrated the capabilities of ITOP. To show that ITOP can generate complex 3D structures, printed, human-sized external ears were implanted under the skin of mice. Two months later, the shape of the implanted ear was well-maintained and cartilage tissue and blood vessels had formed.
To demonstrate that ITOP can generate organized soft tissue structures, printed muscle tissue was implanted in rats. After two weeks, tests confirmed that the muscle was robust enough to maintain its structural characteristics, become vascularized and induce nerve formation.
And, to show that ITOP can construct a human-sized bone structure, jaw bone fragments were printed using human stem cells. The fragments were the size and shape needed for facial reconstruction in humans. To study the maturation of bioprinted bone in the body, printed segments of skull bone were implanted in rats. After five months, the bioprinted structures had formed vascularized bone tissue.
Ongoing studies will measure longer-term outcomes.
The research was supported, in part, by grants from the Armed Forces Institute of Regenerative Medicine (W81XWH-08-2-0032), the Telemedicine and Advanced Technology Research Center at the U.S. Army Medical Research and Material Command (W81XWH-07-1-0718) and the Defense Threat Reduction Agency (N66001-13-C-2027).
(Sometimes the information about the funding agencies is almost as interesting as the research.) Here’s a link to and a citation for the paper,
This research from Singapore could make neuroprosthetics and exoskeletons a little easier to manage as long as you don’t mind having a neural implant. From a Feb. 11, 2016 news item on ScienceDaily,
A versatile chip that offers multiple applications in various electronic devices, researchers report, suggests there is now hope that a low-powered, wireless neural implant may soon be a reality.
Caption: NTU Asst Prof Arindam Basu is holding his low-powered smart chip. Credit: NTU Singapore
Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a small smart chip that can be paired with neural implants for efficient wireless transmission of brain signals.
Neural implants when embedded in the brain can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.
However, they need to be connected by wires to an external device outside the body. For a prosthetic patient, the neural implant is connected to a computer that decodes the brain signals so the artificial limb can move.
These external wires are not only cumbersome, but the permanent openings which allow the wires into the brain also increase the risk of infection.
The new chip by NTU scientists can allow the transmission of brain data wirelessly and with high accuracy.
Assistant Professor Arindam Basu from NTU’s School of Electrical and Electronic Engineering said the research team have tested the chip on data recorded from animal models, which showed that it could decode the brain’s signal to the hand and fingers with 95 per cent accuracy.
“What we have developed is a very versatile smart chip that can process data, analyse patterns and spot the difference,” explained Prof Basu.
“It is about a hundred times more efficient than current processing chips on the market. It will lead to more compact medical wearable devices, such as portable ECG monitoring devices and neural implants, since we no longer need large batteries to power them.”
Different from other wireless implants
To achieve high accuracy in decoding brain signals, implants require thousands of channels of raw data. To wirelessly transmit this large amount of data, more power is also needed which means either bigger batteries or more frequent recharging.
This is not feasible as there is limited space in the brain for implants while frequent recharging means the implants cannot be used for long-term recording of signals.
Current wireless implant prototypes thus suffer from a lack of accuracy as they lack the bandwidth to send out thousands of channels of raw data.
Instead of enlarging the power source to support the transmission of raw data, Asst Prof Basu tried to reduce the amount of data that needs to be transmitted.
Designed to be extremely power-efficient, NTU’s patented smart chip will analyse and decode the thousands of signals from the neural implants in the brain, before compressing the results and sending it wirelessly to a small external receiver.
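The data-reduction argument behind the chip is easy to put in back-of-the-envelope numbers: transmit a small decoded result instead of thousands of channels of raw samples. The channel counts, sample rates and packet sizes below are illustrative assumptions, not NTU’s specifications:

```python
# Raw telemetry: thousands of channels of raw neural samples.
CHANNELS = 1000          # illustrative channel count
SAMPLE_RATE = 30_000     # samples/second per channel (common for spike data)
BITS_PER_SAMPLE = 16

raw_bits_per_s = CHANNELS * SAMPLE_RATE * BITS_PER_SAMPLE

# On-chip decoding: send only the decoded command, e.g. a movement
# class plus a confidence value, at a modest update rate.
UPDATES_PER_S = 100
BITS_PER_UPDATE = 32     # one small decoded packet

decoded_bits_per_s = UPDATES_PER_S * BITS_PER_UPDATE

reduction = raw_bits_per_s / decoded_bits_per_s
print(f"raw: {raw_bits_per_s / 1e6:.0f} Mbit/s, "
      f"decoded: {decoded_bits_per_s} bit/s, "
      f"reduction: {reduction:,.0f}x")
```

Even with generous allowances for packet overhead, decoding on the implant cuts the radio’s workload by orders of magnitude, which is where the battery and recharging savings come from.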
This invention and its findings were published last month [December 2015] in the prestigious journal, IEEE Transactions on Biomedical Circuits & Systems, by the Institute of Electrical and Electronics Engineers, the world’s largest professional association for the advancement of technology.
Its underlying science was also featured in three international engineering conferences (two in Atlanta, USA and one in China) over the last three months.
Versatile smart chip with multiple uses
This new smart chip is designed to analyse data patterns and spot any abnormal or unusual patterns.
For example, in a remote video camera, the chip can be programmed to send a video back to the servers only when a specific type of car or something out of the ordinary is detected, such as an intruder.
This would be extremely beneficial for the Internet of Things (IOT), where every electrical and electronic device is connected to the Internet through a smart chip.
With a report by marketing research firm Gartner Inc predicting that 6.4 billion smart devices and appliances will be connected to the Internet by 2016, rising to 20.8 billion devices by 2020, reducing network traffic will be a priority for most companies.
Using NTU’s new chip, the devices can process and analyse the data on site, before sending back important details in a compressed package, instead of sending the whole data stream. This will reduce data usage by over a thousand times.
Asst Prof Basu is now in talks with Singapore Technologies Electronics Limited to adapt his smart chip, which can significantly reduce power consumption and the amount of data transmitted by battery-operated remote sensors, such as video cameras.
The team is also looking to expand the applications of the chip into commercial products, such as to customise it for smart home sensor networks, in collaboration with a local electronics company.
The chip, measuring 5mm by 5mm, can now be licensed by companies from NTU’s commercialisation arm, NTUitive.
Earlier this month there was a Feb. 9, 2016 announcement about a planned human clinical trial in Australia for a new brain-machine interface (neural implant). Before proceeding with the news, here’s what this implant looks like,
Caption: This tiny device, the size of a small paperclip, is implanted into a blood vessel next to the brain and can read electrical signals from the motor cortex, the brain’s control centre. These signals can then be transmitted to an exoskeleton or wheelchair to give paraplegic patients greater mobility. Users will need to learn how to communicate with their machinery, but over time, it is thought it will become second nature, like driving or playing the piano. The first human trials are slated for 2017 in Melbourne, Australia. Credit: The University of Melbourne.
Melbourne medical researchers have created a new minimally invasive brain-machine interface, giving people with spinal cord injuries new hope to walk again with the power of thought.
The brain-machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.
The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017.
The results published today in Nature Biotechnology show the device is capable of recording high-quality signals emitted from the brain’s motor cortex, without the need for open brain surgery.
Principal author Dr Thomas Oxley, a neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neuroscience and Mental Health and the University of Melbourne, said the stentrode was revolutionary.
“The development of the stentrode has brought together leaders in medical research from The Royal Melbourne Hospital, The University of Melbourne and the Florey Institute of Neuroscience and Mental Health. In total 39 academic scientists from 16 departments were involved in its development,” Dr Oxley said.
“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery.
“Our vision, through this device, is to return function and mobility to patients with complete paralysis by recording brain activity and converting the acquired signals into electrical commands, which in turn would lead to movement of the limbs through a mobility assist device like an exoskeleton. In essence this is a bionic spinal cord.”
Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, with the typical patient a 19-year-old male, and about 150,000 Australians left severely disabled after stroke.
Co-principal investigator and biomedical engineer at the University of Melbourne, Dr Nicholas Opie, said the concept was similar to an implantable cardiac pacemaker – electrical interaction with tissue using sensors inserted into a vein, but inside the brain.
“Utilising stent technology, our electrode array self-expands to stick to the inside wall of a vein, enabling us to record local brain activity. By extracting the recorded neural signals, we can use these as commands to control wheelchairs, exoskeletons, prosthetic limbs or computers,” Dr Opie said.
“In our first-in-human trial, that we anticipate will begin within two years, we are hoping to achieve direct brain control of an exoskeleton for three people with paralysis.”
“Currently, exoskeletons are controlled by manual manipulation of a joystick to switch between the various elements of walking – stand, start, stop, turn. The stentrode will be the first device that enables direct thought control of these devices.”
Neurophysiologist at The Florey, Professor Clive May, said the data from the pre-clinical study highlighted that the implantation of the device was safe for long-term use.
“Through our pre-clinical study we were able to successfully record brain activity over many months. The quality of recording improved as the device was incorporated into tissue,” Professor May said.
“Our study also showed that it was safe and effective to implant the device via angiography, which is minimally invasive compared with the high risks associated with open brain surgery.
“The brain-computer interface is a revolutionary device that holds the potential to overcome paralysis, by returning mobility and independence to patients affected by various conditions.”
Professor Terry O’Brien, Head of Medicine for the Departments of Medicine and Neurology at The Royal Melbourne Hospital and the University of Melbourne, said the development of the stentrode has been the “holy grail” for research in bionics.
“To be able to create a device that can record brainwave activity over long periods of time, without damaging the brain is an amazing development in modern medicine,” Professor O’Brien said.
“It can also be potentially used in people with a range of diseases aside from spinal cord injury, including epilepsy, Parkinson’s and other neurological disorders.”
The development of the minimally invasive stentrode and the subsequent pre-clinical trials to prove its effectiveness could not have been possible without the support from the major funding partners – US Defense Department DARPA [Defense Advanced Research Projects Agency] and Australia’s National Health and Medical Research Council.
So, DARPA is helping fund this, eh? Interesting but not a surprise given the agency’s previous investments in brain research and neuroprosthetics.
For those who like to get their news via video,
Here’s a link to and a citation for the paper,
Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity by Thomas J Oxley, Nicholas L Opie, Sam E John, Gil S Rind, Stephen M Ronayne, Tracey L Wheeler, Jack W Judy, Alan J McDonald, Anthony Dornom, Timothy J H Lovell, Christopher Steward, David J Garrett, Bradford A Moffat, Elaine H Lui, Nawaf Yassi, Bruce C V Campbell, Yan T Wong, Kate E Fox, Ewan S Nurse, Iwan E Bennett, Sébastien H Bauquier, Kishan A Liyanage, Nicole R van der Nagel, Piero Perucca, Arman Ahnood et al. Nature Biotechnology (2016) doi:10.1038/nbt.3428 Published online 08 February 2016
This paper is behind a paywall.
I wish the researchers in Singapore, Australia, and elsewhere, good luck!
The combination of human and computer intelligence might be just what we need to solve the “wicked” problems of the world, such as climate change and geopolitical conflict, say researchers from the Human Computation Institute (HCI) and Cornell University.
In an article published in the journal Science, the authors present a new vision of human computation (the science of crowd-powered systems), which pushes beyond traditional limits, and takes on hard problems that until recently have remained out of reach.
Humans surpass machines at many things, ranging from simple pattern recognition to creative abstraction. With the help of computers, these cognitive abilities can be effectively combined into multidimensional collaborative networks that achieve what traditional problem-solving cannot.
Most of today’s human computation systems rely on sending bite-sized ‘micro-tasks’ to many individuals and then stitching together the results. For example, 165,000 volunteers in EyeWire have analyzed thousands of images online to help build the world’s most complete map of human retinal neurons.
This microtasking approach alone cannot address the tough challenges we face today, say the authors. A radically new approach is needed to solve “wicked problems” – those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).
New human computation technologies can help. Recent techniques provide real-time access to crowd-based inputs, where individual contributions can be processed by a computer and sent to the next person for improvement or analysis of a different kind. This enables the construction of more flexible collaborative environments that can better address the most challenging issues.
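For readers who like concrete examples, the traditional microtask-stitching pattern the authors are pushing beyond boils down to redundancy plus aggregation: send the same small task to several people, then combine the answers. A toy sketch (illustrative only; real systems such as EyeWire use far more sophisticated consensus methods):

```python
# Minimal sketch of classic microtask aggregation: the same tile is labelled
# by several volunteers and the answers are stitched together by majority
# vote. Illustrative only, not any specific project's pipeline.
from collections import Counter

def aggregate(labels):
    """Return the most common answer among redundant contributions."""
    return Counter(labels).most_common(1)[0][0]

# Three volunteers label the same image tile; one makes a mistake.
answers = ["neuron", "neuron", "background"]
print(aggregate(answers))   # neuron
```

The newer systems described above differ in that one person’s output becomes the next person’s input, a pipeline rather than a vote, which is what makes iterative improvement possible.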
This idea is already taking shape in several human computation projects, including YardMap.org, which was launched by Cornell in 2012 to map global conservation efforts one parcel at a time.
“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.
YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable changes in how we manage residential landscapes.
HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system.
“By enabling members of the general public to play some simple online game, we expect to reduce the time to treatment discovery from decades to just a few years”, says HCI director and lead author, Dr. Pietro Michelucci. “This gives an opportunity for anyone, including the tech-savvy generation of caregivers and early stage AD patients, to take the matter into their own hands.”
This paper is behind a paywall but the abstract is freely available,
Human computation, a term introduced by Luis von Ahn (1), refers to distributed systems that combine the strengths of humans and computers to accomplish tasks that neither can do alone (2). The seminal example is reCAPTCHA, a Web widget used by 100 million people a day when they transcribe distorted text into a box to prove they are human. This free cognitive labor provides users with access to Web content and keeps websites safe from spam attacks, while feeding into a massive, crowd-powered transcription engine that has digitized 13 million articles from The New York Times archives (3). But perhaps the best known example of human computation is Wikipedia. Despite initial concerns about accuracy (4), it has become the key resource for all kinds of basic information. Information science has begun to build on these early successes, demonstrating the potential to evolve human computation systems that can model and address wicked problems (those that defy traditional problem-solving methods) at the intersection of economic, environmental, and sociopolitical systems.
Thanks to Dexter Johnson’s Oct. 22, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), I’ve found information about a second memristor with three stable resistive states (the first is mentioned in my April 10, 2015 posting). From Dexter’s posting (Note: Links have been removed),
Now researchers at ETH Zurich have designed a memristor device out of perovskite just 5 nanometres thick that has three stable resistive states, which means it can encode data as 0,1 and 2, or a “trit” as opposed to a “bit.”
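A quick bit of arithmetic shows why a trit is denser than a bit: a three-state cell stores log2(3) ≈ 1.585 bits, so n such cells can represent 3^n distinct values rather than 2^n.

```python
import math

# A cell with three stable resistive states stores log2(3) ≈ 1.585 bits,
# versus exactly 1 bit for a conventional binary cell.
bits_per_trit = math.log2(3)

# Ten three-state cells can represent 3**10 = 59,049 distinct values;
# ten binary cells manage only 2**10 = 1,024.
print(3 ** 10, 2 ** 10, round(bits_per_trit, 3))
```

That roughly 58% density bonus per cell is the storage argument for ternary devices, quite apart from the fuzzy-logic possibilities Rupp describes below.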
The researchers, whose work was published in the journal ACS Nano, developed model devices that have two competing nonvolatile resistive switching processes. These switching processes can be triggered alternately by adjusting the effective switching voltage and the time for which it is applied to the device.
“Our component could therefore also be useful for a new type of IT (Information Technology) that is not based on binary logic, but on a logic that provides for information located ‘between’ the 0 and 1,” said Jennifer Rupp, professor in the Department of Materials at ETH Zurich, in a press release. “This has interesting implications for what is referred to as fuzzy logic, which seeks to incorporate a form of uncertainty into the processing of digital information. You could describe it as less rigid computing.”
Two IT giants, Intel and HP, have entered a race to produce a commercial version of memristors, a new electronics component that could one day replace the flash memory used in USB memory sticks, SD cards and SSD drives. “Basically, memristors require less energy since they work at lower voltages,” explains Jennifer Rupp, professor in the Department of Materials at ETH Zurich and holder of a SNSF professorship grant. “They can be made much smaller than today’s memory modules, and therefore offer much greater density. This means they can store more megabytes of information per square millimetre.” But currently memristors are only at the prototype stage. [emphasis mine]
There is a memristor-based product on the market as I noted in a Sept. 10, 2015 posting, although that may not be the type of memristive device that Rupp seems to be discussing. (Should you have problems accessing the Swiss National Science Foundation press release, you can find a lightly edited version, with the brief [two-sentence] history of the memristor left out, here on Azonano.)
Jacopo Prisco wrote for CNN online in a March 2, 2015 article about memristors and Rupp’s work (Note: A link has been removed),
Simply put, the memristor could mean the end of electronics as we know it and the beginning of a new era called “ionics”.
The transistor, developed in 1947, is the main component of computer chips. It functions using a flow of electrons, whereas the memristor couples the electrons with ions, or electrically charged atoms.
In a transistor, once the flow of electrons is interrupted by, say, cutting the power, all information is lost. But a memristor can remember the amount of charge that was flowing through it, and much like a memory stick it will retain the data even when the power is turned off.
This could pave the way for computers that turn on and off instantly, like a light bulb, and never lose data: the RAM, or memory, would no longer be erased when the machine is turned off, removing the need to save anything to hard drives as with current technology.
Jennifer Rupp is a Professor of electrochemical materials at ETH Zurich, and she’s working with IBM to build a memristor-based machine.
Memristors, she points out, function in a way that is similar to a human brain: “Unlike a transistor, which is based on binary codes, a memristor can have multi-levels. You could have several states, let’s say zero, one half, one quarter, one third, and so on, and that gives us a very powerful new perspective on how our computers may develop in the future,” she told CNN’s Nick Glass.
This is the CNN interview with Rupp,
Prisco also provides an update about HP’s memristor-based product,
After manufacturing the first ever memristor, Hewlett Packard has been working for years on a new type of computer based on the technology. According to plans, it will launch by 2020.
Simply called “The Machine”, it uses “electrons for processing, photons for communication, and ions for storage.”
There are many academic teams researching memristors including a team at Northwestern University. I highlighted their announcement of a three-terminal version in an April 10, 2015 posting. While Rupp’s team achieved its effect with a perovskite substrate, the Northwestern team used a molybdenum disulfide (MoS2) substrate.
For anyone wanting to read the latest research from ETH, here’s a link to and a citation for the paper,
Finally, should you find the commercialization aspects of the memristor story interesting, there’s a June 6, 2015 posting by Knowm CEO (chief executive officer) Alex Nugent, who waxes eloquent on HP Labs’ ‘memristor problem’ (Note: A link has been removed),
Today I read something that did not surprise me. HP has said that their memristor technology will be replaced by traditional DRAM memory for use in “The Machine”. This is not surprising for those of us who have been in the field since before HP’s memristor marketing engine first revved up in 2008. While I have to admit the miscommunication between HP’s research and business development departments is starting to get really old, I do understand the problem, or at least part of it.
There are two ways to develop memristors. The first way is to force them to behave as you want them to behave. Most memristors that I have seen do not behave like fast, binary, non-volatile, deterministic switches. This is a problem because this is how HP wants them to behave. Consequently a perception has been created that memristors are for non-volatile fast memory. HP wants a drop-in replacement for standard memory because this is a large and established market. Makes sense of course, but it’s not the whole story on memristors.
Memristors exhibit a huge range of amazing phenomena. Some are very fast to switch but operate probabilistically. Others can be changed a little bit at a time and are ideal for learning. Still others have capacitance (with memory), or act as batteries. I’ve even seen some devices that can be programmed to be a capacitor or a resistor or a memristor. (Seriously).
Nugent, whether you agree with him or not, provides some fascinating insight. In the excerpt I’ve included here, he seems to provide confirmation that it’s possible to state ‘there are no memristors on the market’ and ‘there are memristors on the market’ because different devices are being called memristors.
An Oct. 20, 2015 posting by Lynn Bergeson on Nanotechnology Now announces a US White House challenge incorporating nanotechnology, computing, and brain research (Note: A link has been removed),
On October 20, 2015, the White House announced a grand challenge to develop transformational computing capabilities by combining innovations in multiple scientific disciplines. See https://www.whitehouse.gov/blog/2015/10/15/nanotechnology-inspired-grand-challenge-future-computing The Office of Science and Technology Policy (OSTP) states that, after considering over 100 responses to its June 17, 2015, request for information, it “is excited to announce the following grand challenge that addresses three Administration priorities — the National Nanotechnology Initiative, the National Strategic Computing Initiative (NSCI), and the BRAIN initiative.” The grand challenge is to “[c]reate a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.”
Here’s where the Oct. 20, 2015 posting, which originated the news item, by Lloyd Whitman, Randy Bryant, and Tom Kalil for the US White House blog gets interesting,
While it continues to be a national priority to advance conventional digital computing—which has been the engine of the information technology revolution—current technology falls far short of the human brain in terms of both the brain’s sensing and problem-solving abilities and its low power consumption. Many experts predict that fundamental physical limitations will prevent transistor technology from ever matching these twin characteristics. We are therefore challenging the nanotechnology and computer science communities to look beyond the decades-old approach to computing based on the Von Neumann architecture as implemented with transistor-based processors, and chart a new path that will continue the rapid pace of innovation beyond the next decade.
There are growing problems facing the Nation that the new computing capabilities envisioned in this challenge might address, from delivering individualized treatments for disease, to allowing advanced robots to work safely alongside people, to proactively identifying and blocking cyber intrusions. To meet this challenge, major breakthroughs are needed not only in the basic devices that store and process information and the amount of energy they require, but in the way a computer analyzes images, sounds, and patterns; interprets and learns from data; and identifies and solves problems. [emphases mine]
Many of these breakthroughs will require new kinds of nanoscale devices and materials integrated into three-dimensional systems and may take a decade or more to achieve. These nanotechnology innovations will have to be developed in close coordination with new computer architectures, and will likely be informed by our growing understanding of the brain—a remarkable, fault-tolerant system that consumes less power than an incandescent light bulb.
Recent progress in developing novel, low-power methods of sensing and computation—including neuromorphic, magneto-electronic, and analog systems—combined with dramatic advances in neuroscience and cognitive sciences, lead us to believe that this ambitious challenge is now within our reach. …
This is the first time I’ve come across anything that publicly links the BRAIN initiative to computing, artificial intelligence, and artificial brains. (For my own sake, I make an arbitrary distinction between algorithms [artificial intelligence] and devices that simulate neural plasticity [artificial brains].) The emphasis in the past has always been on new strategies for dealing with Parkinson’s and other neurological diseases and conditions.
Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand and include activities reliant on the sense of touch, e.g., how much pressure do you use to grasp a cup; how much strength do you apply when moving an object from one place to another?
For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.
Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs. To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity. Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenetic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.
The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.
Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.
To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.
This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.
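The pulse-rate encoding described above, more pressure meaning more pulses per second, can be sketched in a few lines. The numbers are invented for illustration and are not the Stanford team’s calibration.

```python
def pulse_frequency(pressure_kpa, max_pressure_kpa=100.0, max_freq_hz=200.0):
    """Map applied pressure to an output pulse rate, echoing the skin's
    Morse-code-like signalling: more pressure -> more pulses per second.
    All numbers here are illustrative, not the published calibration."""
    if pressure_kpa <= 0:
        return 0.0                       # no pressure, no pulses
    clamped = min(pressure_kpa, max_pressure_kpa)
    return max_freq_hz * clamped / max_pressure_kpa

print(pulse_frequency(0))      # 0.0   -- all pressure removed, pulses cease
print(pulse_frequency(10))     # 20.0  -- light touch, slow pulse train
print(pulse_frequency(80))     # 160.0 -- firm grip, rapid pulses
```

The biological parallel is the reason this encoding matters: because the skin already speaks in rate-coded pulses, its output is in a form a neuron can in principle follow directly.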
The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.
Importing the signal
Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.
Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.
For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.
Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.
Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.
But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.
“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”
Here’s a link to and a citation for the paper,
A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science 16 October 2015 Vol. 350 no. 6258 pp. 313-316 DOI: 10.1126/science.aaa9306