Tag Archives: prosthetics

Mind-reading prosthetic limbs

In a December 21, 2022 Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) press release (also on EurekAlert), problems with current neuroprostheses are described in the context of a new research project intended to solve them,

Lifting a glass, making a fist, entering a phone number using the index finger: it is amazing the things cutting-edge robotic hands can already do thanks to biomedical technology. However, things that work in the laboratory often encounter stumbling blocks when put to practice in daily life. The problem is the vast diversity of the intentions of each individual person, their surroundings and the things that can be found there, making a one size fits all solution all but impossible. A team at FAU is investigating how intelligent prostheses can be improved and made more reliable. The idea is that interactive artificial intelligence will help the prostheses to recognize human intent better, to register their surroundings and to continue to develop and improve over time. The project is to receive 4.5 million euros in funding from the EU, with FAU receiving 467,000 euros.

“We are literally working at the interface between humans and machines,” explains Prof. Dr. Claudio Castellini, professor of medical robotics at FAU. “The technology behind prosthetics for upper limbs has come on in leaps and bounds over the past decades.” Using surface electromyography, for example, skin electrodes at the remaining stump of the arm can detect the slightest muscle movements. These biosignals can be converted and transferred to the prosthetic limb as electrical impulses. “The wearer controls their artificial hand themselves using the stump. Methods taken from pattern recognition and interactive machine learning also allow people to teach their prosthetic their own individual needs when making a gesture or a movement.”

The advantages of AI over purely cosmetic prosthetics

At present, advanced robotic prosthetics have not yet reached optimal standards in terms of comfort, function and control, which is why many people with missing limbs still often prefer purely cosmetic prosthetics with no additional functions. The new EU Horizon project “AI-Powered Manipulation System for Advanced Robotic Service, Manufacturing and Prosthetics (IntelliMan)” therefore focuses on how these can interact with their environment even more effectively and for a specific purpose.

Researchers at FAU concentrate in particular on how to improve control of both real and virtual prosthetic upper limbs. The focus is on what is known as intent detection. Prof. Castellini and his team are continuing work on recording and analyzing human biosignals, and are designing innovative algorithms for machine learning aimed at detecting the individual movement patterns of individuals. User studies conducted on test persons both with and without physical disabilities are used to validate their results. Furthermore, FAU is also leading the area “Shared autonomy between humans and robots” in the EU project, aimed at checking the safety of the results.

At the interface between humans and machines

Prof. Castellini heads the “Assistive Intelligent Robotics” lab (AIROB) at FAU that focuses on controlling assistive robotics for the upper and lower limbs as well as functional electrostimulation. “We are exploiting the potential offered by intent detection to control assistive and rehabilitative robotics,” explains the researcher. “This covers wearable robots worn on the body such as prosthetics and exoskeletons, but also robot arms and simulations using virtual reality.” The professorship focuses particularly on biosignal processing of various sensor modalities and methods of machine learning for intent detection, in other words research directly at the interface between humans and machines.

In his previous research at the German Aerospace Center (DLR), where he was based until 2021, Castellini investigated the question of how virtual hand prosthetics could help amputees cope with phantom pain. Alongside Castellini, doctoral candidate Fabio Egle, a research associate at the professorship, is also actively involved in the IntelliMan project. The FAU share of the EU project will receive funding of 467,000 euros over a period of three and a half years, while the overall budget amounts to 6 million euros. The IntelliMan project is coordinated by the University of Bologna; the DLR, the Polytechnic University of Catalonia, the University of Genoa, Luigi Vanvitelli University in Campania and the Bavarian Research Alliance (BayFOR) are also involved.
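For readers wondering what ‘pattern recognition and interactive machine learning’ means in practice for surface electromyography, the standard pipeline is to slice the electrode signals into short windows, compute simple time-domain features, and train a classifier on gestures the wearer demonstrates. Here’s a minimal illustrative sketch in Python using scikit-learn; the channel count, window length, feature set, and classifier choice are my assumptions, not details from the FAU/IntelliMan project,

```python
# Minimal sketch of sEMG intent detection via pattern recognition.
# Assumptions (not from the FAU project): 8 electrode channels sampled
# at 1 kHz, 200 ms analysis windows, classic time-domain features, and
# an off-the-shelf LDA classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000     # sampling rate (Hz)
WINDOW = 200  # samples per analysis window (200 ms)

def features(window):
    """Classic time-domain sEMG features per channel: mean absolute
    value, waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(window[:-1] * window[1:] < 0, axis=0)
    return np.concatenate([mav, wl, zc])

def windows(emg, step=100):
    """Slide a 200-sample window over an (n_samples, n_channels) array."""
    for start in range(0, len(emg) - WINDOW + 1, step):
        yield emg[start:start + WINDOW]

# Training: the wearer demonstrates each gesture while sEMG is recorded.
# These random arrays are placeholders standing in for real recordings.
recordings = {
    "rest":  np.random.randn(5 * FS, 8) * 0.05,
    "fist":  np.random.randn(5 * FS, 8) * 0.50,
    "point": np.random.randn(5 * FS, 8) * 0.30,
}
X = [features(w) for emg in recordings.values() for w in windows(emg)]
y = [label for label, emg in recordings.items() for _ in windows(emg)]

clf = LinearDiscriminantAnalysis().fit(X, y)

# Run time: each new window yields an intent estimate for the prosthesis.
new_window = np.random.randn(WINDOW, 8) * 0.4
print(clf.predict([features(new_window)])[0])
```

In a real system the ‘interactive’ part would mean the wearer can re-demonstrate a gesture whenever the classifier starts misbehaving, with the model retrained on the spot.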

Good luck to the team!

Neural and technological inequalities

I’m always happy to see discussions about the social implications of new and emerging technologies. In this case, the discussion was held at the Fast Company (magazine) European Innovation Festival. KC Ifeanyi wrote a July 10, 2019 article for Fast Company highlighting a session between two scientists focusing on what I’ve termed ‘machine/flesh’ or what is sometimes called a cyborg, although neither scientist used that term (Note: A link has been removed),

At the Fast Company European Innovation Festival today, scientists Moran Cerf and Riccardo Sabatini had a wide-ranging discussion on the implications of technology that can hack humanity. From ethical questions to looking toward human biology for solutions, here are some of the highlights:

The ethics of ‘neural inequality’

There are already chips that can be implanted in the brain to help recover bodily functions after a stroke or brain injury. However, what happens if (more likely when) a chip in your brain can be hacked or even gain internet access, essentially making it possible for some people (more likely wealthy people) to process information much more quickly than others?

“It’s what some call neural inequality,” says Cerf, a neuroscientist and business professor at the Kellogg School of Management and at the neuroscience program at Northwestern University. …

Opening new pathways to thought through bionics

Cerf mentioned a colleague who was born without his left hand. He engineered a bionic one that he can control with an app and that has the functionality of doing things no human hand can do, like rotating 360 degrees. As fun of a party trick as that is, Cerf brings up a good point in that his colleague’s brain is processing something we can’t, thereby possibly opening new pathways of thought.

“The interesting thing, and this is up to us to investigate, is his brain can think thoughts that you cannot think [emphasis mine] because he has a function you don’t have,” Cerf says. …

The innovation of your human body

As people look to advanced bionics to amplify their senses or abilities, Sabatini, chief data scientist at Orionis Biosciences, makes the argument that our biological bodies are far more advanced than we give them credit for. …

Democratizing tech’s edges

Early innovation so often comes with a high price tag. The cost of experimenting with nascent technology or running clinical trials can be exorbitant. And Sabatini believes democratizing that part of the process is where the true innovation will be. …

Earlier technology that changed our thinking and thoughts

This isn’t the first time that technology has altered our thinking and the kinds of thoughts we have, as per “his brain can think thoughts that you cannot think.” According to Walter J. Ong’s 1982 book, ‘Orality and Literacy’, that’s what writing did to us; it changed our thinking and the kinds of thoughts we have.

It took me quite a while to understand ‘writing’ as a technology, largely due to how much I took it for granted. Once I made that leap, it changed how I understood the word technology. Then, the idea that ‘writing’ could change your brain didn’t require as dramatic a leap although it fundamentally altered my concept of the relationship between technology and humans. Up to that time, I had viewed technology as an instrument that allowed me to accomplish goals (e.g., driving a car from point a to point b) but it had very little impact on me as a person.

You can find out more about Walter J. Ong and his work in his Wikipedia entry. Pay special attention to the section about Orality and Literacy.

Who’s talking about technology and our thinking?

The article about the scientists (Cerf and Sabatini) at the Fast Company European Innovation Festival (held July 9-10, 2019 in Milan, Italy) never mentions cyborgs. Presumably, neither did Sabatini or Cerf. It seems odd. Two thinkers were discussing ‘neural inequality’ and there was no mention of a cyborg (human and machine joined together).

Interestingly, the lead sponsor for this innovation festival was Gucci. That company would not have been my first guess, or any guess for that matter, as one with an interest in neural inequality.

So, Gucci sponsored a festival that was not cheap. A two-day pass was $1,600 (early birds got a $457 discount) and a ‘super’ pass was $2,229 (with an early bird discount of $629). So, you didn’t get into the room unless you had a fair chunk of change and time.

The tension, talking about inequality at a festival or other venue that most people can’t afford to attend, is discussed at more length in Anand Giridharadas’s 2018 book, ‘Winners Take All: The Elite Charade of Changing the World’.

It’s not just who gets to discuss ‘neural inequality’, it’s when you get to discuss it, which affects how the discussion is framed.

There aren’t any easy answers to these questions but I find the easy assumption that the wealthy and the science and technology communities get first dibs at the discussion a little disconcerting while being perfectly predictable.

On the plus side, there are artists and others who have jumped in and started the discussion by turning themselves into cyborgs. This August 14, 2015 article (Body-hackers: the people who turn themselves into cyborgs) by Oliver Wainwright for the Guardian is very informative and not for the faint of heart.

For the curious, I’ve been covering these kinds of stories here since 2009. The category ‘human enhancement’ and the search term ‘machine/flesh’ should provide you with an assortment of stories on the topic.

A solar, self-charging supercapacitor for wearable technology

Ravinder Dahiya, Carlos García Núñez, and their colleagues at the University of Glasgow (Scotland) strike again (see my May 10, 2017 posting for their first ‘solar-powered graphene skin’ research announcement). Last time it was all about robots and prosthetics, this time they’ve focused on wearable technology according to a July 18, 2018 news item on phys.org,

A new form of solar-powered supercapacitor could help make future wearable technologies lighter and more energy-efficient, scientists say.

In a paper published in the journal Nano Energy, researchers from the University of Glasgow’s Bendable Electronics and Sensing Technologies (BEST) group describe how they have developed a promising new type of graphene supercapacitor, which could be used in the next generation of wearable health sensors.

A July 18, 2018 University of Glasgow press release, which originated the news item, explains further,

Currently, wearable systems generally rely on relatively heavy, inflexible batteries, which can be uncomfortable for long-term users. The BEST team, led by Professor Ravinder Dahiya, have built on their previous success in developing flexible sensors by developing a supercapacitor which could power health sensors capable of conforming to wearers’ bodies, offering more comfort and a more consistent contact with skin to better collect health data.

Their new supercapacitor uses layers of flexible, three-dimensional porous foam formed from graphene and silver to produce a device capable of storing and releasing around three times more power than any similar flexible supercapacitor. The team demonstrated the durability of the supercapacitor, showing that it provided power consistently across 25,000 charging and discharging cycles.

They have also found a way to charge the system by integrating it with flexible solar-powered skin already developed by the BEST group, effectively creating an entirely self-charging system, as well as a pH sensor which uses the wearer’s sweat to monitor their health.

Professor Dahiya said: “We’re very pleased by the progress this new form of solar-powered supercapacitor represents. A flexible, wearable health monitoring system which only requires exposure to sunlight to charge has a lot of obvious commercial appeal, but the underlying technology has a great deal of additional potential.

“This research could take the wearable systems for health monitoring to remote parts of the world where solar power is often the most reliable source of energy, and it could also increase the efficiency of hybrid electric vehicles. We’re already looking at further integrating the technology into flexible synthetic skin which we’re developing for use in advanced prosthetics.” [emphasis mine]

In addition to the work on robots, prosthetics, and graphene ‘skin’ mentioned in the May 10, 2017 posting, the team is working on a synthetic ‘brainy’ skin, for which they have just received £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Brainy skin

A July 3, 2018 University of Glasgow press release discusses the proposed work in more detail,

A robotic hand covered in ‘brainy skin’ that mimics the human sense of touch is being developed by scientists.

University of Glasgow’s Professor Ravinder Dahiya has plans to develop ultra-flexible, synthetic Brainy Skin that ‘thinks for itself’.

The super-flexible, hypersensitive skin may one day be used to make more responsive prosthetics for amputees, or to build robots with a sense of touch.

Brainy Skin reacts like human skin, which has its own neurons that respond immediately to touch rather than having to relay the whole message to the brain.

This electronic ‘thinking skin’ is made from silicon based printed neural transistors and graphene – an ultra-thin form of carbon that is only an atom thick, but stronger than steel.

The new version is more powerful, less cumbersome and would work better than earlier prototypes, also developed by Professor Dahiya and his Bendable Electronics and Sensing Technologies (BEST) team at the University’s School of Engineering.

His futuristic research, called neuPRINTSKIN (Neuromorphic Printed Tactile Skin), has just received another £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Professor Dahiya said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors that carry signals from the skin to the brain.

“Inspired by real skin, this project will harness the technological advances in electronic engineering to mimic some features of human skin, such as softness, bendability and now, also sense of touch. This skin will not just mimic the morphology of the skin but also its functionality.

“Brainy Skin is critical for the autonomy of robots and for a safe human-robot interaction to meet emerging societal needs such as helping the elderly.”

Synthetic ‘Brainy Skin’ with sense of touch gets £1.5m funding. Photo of Professor Ravinder Dahiya

This latest advance means tactile data is gathered over large areas by the synthetic skin’s computing system rather than sent to the brain for interpretation.

With additional EPSRC funding, which extends Professor Dahiya’s fellowship by another three years, he plans to introduce tactile skin with neuron-like processing. This breakthrough in the tactile sensing research will lead to the first neuromorphic tactile skin, or ‘brainy skin.’

To achieve this, Professor Dahiya will add a new neural layer to the e-skin that he has already developed using printed silicon nanowires.

Professor Dahiya added: “By adding a neural layer underneath the current tactile skin, neuPRINTSKIN will add significant new perspective to the e-skin research, and trigger transformations in several areas such as robotics, prosthetics, artificial intelligence, wearable systems, next-generation computing, and flexible and printed electronics.”

The Engineering and Physical Sciences Research Council (EPSRC) is part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

EPSRC is the main funding body for engineering and physical sciences research in the UK. By investing in research and postgraduate training, the EPSRC is building the knowledge and skills base needed to address the scientific and technological challenges facing the nation.

Its portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research funded by EPSRC has impact across all sectors. It provides a platform for future UK prosperity by contributing to a healthy, connected, resilient, productive nation.

It’s fascinating to note how these pieces of research fit together: wearable technology for health monitoring, more responsive robot ‘skin’ and, possibly, prosthetic devices that would allow someone to feel again.

The latest research paper

Getting back to the solar-charging supercapacitors mentioned in the opening, here’s a link to and a citation for the team’s latest research paper,

Flexible self-charging supercapacitor based on graphene-Ag-3D graphene foam electrodes by Libu Manjakkal, Carlos García Núñez, Wenting Dang, Ravinder Dahiya. Nano Energy Volume 51, September 2018, Pages 604-612 DOI: https://doi.org/10.1016/j.nanoen.2018.06.072

This paper is open access.

Prosthetic pain

“Feeling no pain” can be a euphemism for being drunk. However, there are some people for whom it’s not a euphemism; they literally feel no pain for one reason or another. One group of people who feel no pain, at least in a missing limb, are amputees, and a researcher at Johns Hopkins University (Maryland, US) has found a way for them to feel pain again.

A June 20, 2018 news item on ScienceDaily provides an introduction to the research and to the reason for it,

Amputees often experience the sensation of a “phantom limb” — a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

A June 20, 2018 Johns Hopkins University news release (also on EurekAlert), which originated the news item, explores the research in more depth,

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.”

“This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.

The researchers then connected the e-dermis output to the volunteer by using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction to both pain while touching a pointed object and non-pain when touching a round object.

The e-dermis is not sensitive to temperature–for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be used to expand or extend to astronaut gloves and space suits, Osborn says.

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering through the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.
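The news release doesn’t spell out how the ‘neuromorphic model’ mentioned above actually works, but the general shape of receptor-inspired encoding is easy to illustrate: map sensor readings for pressure and sharpness onto firing rates the way skin receptors would, then translate those rates into stimulation pulse parameters for the TENS electrodes. Here’s a hypothetical sketch in Python; every threshold and number in it is invented for illustration and none of it comes from the Science Robotics paper,

```python
# Illustrative sketch of a receptor-inspired encoding from fingertip
# sensor readings to nerve-stimulation parameters. All thresholds and
# mappings are invented for illustration, not taken from the JHU paper.

def encode(pressure, sharpness):
    """Map normalized sensor readings (0..1) to a stimulation spec.

    Touch receptors: firing rate grows smoothly with pressure.
    Pain receptors (nociceptors): silent below a sharpness threshold,
    then fire strongly, signalling a noxious stimulus.
    """
    touch_rate = 5.0 + 45.0 * pressure           # Hz, graded with pressure
    if sharpness > 0.6:                          # nociceptor threshold
        pain_rate = 100.0 * (sharpness - 0.6) / 0.4
    else:
        pain_rate = 0.0
    # Translate firing rates into TENS-style pulse parameters: painful
    # stimuli get higher frequency and wider pulses so they feel
    # distinctly different from ordinary touch.
    freq_hz = touch_rate + pain_rate
    pulse_width_us = 200 + 300 * min(pain_rate / 100.0, 1.0)
    return {"frequency_hz": round(freq_hz, 1),
            "pulse_width_us": round(pulse_width_us),
            "noxious": pain_rate > 0}

print(encode(pressure=0.3, sharpness=0.2))  # light touch, round object
print(encode(pressure=0.5, sharpness=0.9))  # pressing on a sharp point
```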

Here’s a video about this work,

Sarah Zhang’s June 20, 2018 article for The Atlantic reveals a few more details while covering some of the material in the news release,

Osborn and his team added one more feature to make the prosthetic hand, as he puts it, “more lifelike, more self-aware”: When it grasps something too sharp, it’ll open its fingers and immediately drop it—no human control necessary. The fingers react in just 100 milliseconds, the speed of a human reflex. Existing prosthetic hands have a similar degree of theoretically helpful autonomy: If an object starts slipping, the hand will grasp more tightly. Ideally, users would have a way to override a prosthesis’s reflex, like how you can hold your hand on a stove if you really, really want to. After all, the whole point of having a hand is being able to tell it what to do.
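Zhang’s description of the drop reflex amounts to a tight control loop that bypasses the user: if the sensed sharpness crosses a threshold, release the grip well within the roughly 100 millisecond budget, unless the user has chosen to override. Here’s a hypothetical sketch of that logic (the names, threshold, and loop rate are mine, not the JHU team’s),

```python
# Hypothetical sketch of the pain-reflex loop described in the article:
# release a grasp when a noxious (sharp) stimulus is detected, unless
# the user overrides. Names and numbers are illustrative.
import time

SHARPNESS_THRESHOLD = 0.6  # above this, the stimulus counts as noxious

class MockHand:
    """Stand-in for the prosthetic hand's sensor and motor interface."""
    def __init__(self, sharpness):
        self.sharpness = sharpness
        self.grip_closed = True

    def read_sharpness(self):
        return self.sharpness

    def open_grip(self):
        self.grip_closed = False
        print("reflex: grip released")

def reflex_step(hand, user_override=False):
    """One pass of the reflex loop: release on a noxious stimulus unless
    the user has chosen to override (deliberately holding on, as Zhang's
    stove example suggests users should be able to)."""
    if not user_override and hand.read_sharpness() > SHARPNESS_THRESHOLD:
        hand.open_grip()

# In a real controller this would run every few milliseconds, comfortably
# inside the 100 ms human-reflex budget the article mentions.
hand = MockHand(sharpness=0.9)  # grasping something sharp
start = time.perf_counter()
reflex_step(hand)
print(f"reacted in {(time.perf_counter() - start) * 1e3:.3f} ms")
```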

Here’s a link to and a citation for the paper,

Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain by Luke E. Osborn, Andrei Dragomir, Joseph L. Betthauser, Christopher L. Hunt, Harrison H. Nguyen, Rahul R. Kaliki, and Nitish V. Thakor. Science Robotics 20 Jun 2018: Vol. 3, Issue 19, eaat3818 DOI: 10.1126/scirobotics.aat3818

This paper is behind a paywall.

Better motor control for prosthetic hands (the illusion of feeling) and a discussion of superprostheses and reality

I have two bits about prosthetics, one which focuses on how most of us think of them and another about science fiction fantasies.

Better motor control

This new technology comes via a collaboration between the University of Alberta, the University of New Brunswick (UNB) and Ohio’s Cleveland Clinic, from a March 18, 2018 article by Nicole Ireland for the Canadian Broadcasting Corporation’s (CBC) news online,

Rob Anderson was fighting wildfires in Alberta when the helicopter he was in crashed into the side of a mountain. He survived, but lost his left arm and left leg.

More than 10 years after that accident, Anderson, now 39, says prosthetic limb technology has come a long way, and he feels fortunate to be using “top of the line stuff” to help him function as normally as possible. In fact, he continues to work for the Alberta government’s wildfire fighting service.

His powered prosthetic hand can do basic functions like opening and closing, but he doesn’t feel connected to it — and has limited ability to perform more intricate movements with it, such as shaking hands or holding a glass.

Anderson, who lives in Grande Prairie, Alta., compares its function to “doing things with a long pair of pliers.”

“There’s a disconnect between what you’re physically touching and what your body is doing,” he told CBC News.

Anderson is one of four Canadian participants in a study that suggests there’s a way to change that. …

Six people, all of whom had arm amputations from below the elbow or higher, took part in the research. It found that strategically placed vibrating “robots” made them “feel” the movements of their prosthetic hands, allowing them to grasp and grip objects with much more control and accuracy.

All of the participants had previously undergone a specialized surgical procedure called “targeted re-innervation.” The nerves that had connected to their hands before they were amputated were rewired to link instead to muscles (including the biceps and triceps) in their remaining upper arms and in their chests.

For the study, researchers placed the robotic devices on the skin over those re-innervated muscles and vibrated them as the participants opened, closed, grasped or pinched with their prosthetic hands.

While the vibration was turned on, the participants “felt” their artificial hands moving and could adjust their grip based on the sensation. …
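To make the mechanics of the illusion a little more concrete, the feedback amounts to driving a small vibration device over a re-innervated muscle in proportion to what the prosthetic hand is doing. Here’s a hypothetical sketch; the mapping and all the numbers are mine for illustration, not the study’s actual scheme,

```python
# Hypothetical sketch of the feedback mapping described in the study:
# drive a small vibration motor over a re-innervated muscle while the
# prosthetic hand moves, so the wearer "feels" the movement. The mapping
# and numbers are illustrative; the paper's actual scheme may differ.

def vibration_command(hand_aperture, prev_aperture, dt):
    """Map hand motion to a vibration intensity (0..1).

    Kinesthetic illusions are classically driven by muscle/tendon
    vibration in the rough vicinity of 90 Hz; here the carrier frequency
    is fixed and the amplitude tracks how fast the hand is opening or
    closing.
    """
    speed = abs(hand_aperture - prev_aperture) / dt  # aperture units/s
    amplitude = min(speed / 2.0, 1.0)                # saturate at full scale
    return {"carrier_hz": 90, "amplitude": amplitude}

print(vibration_command(hand_aperture=0.8, prev_aperture=0.5, dt=0.1))
```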

I have an April 24, 2017 posting about a tetraplegic patient who had a number of electrodes implanted in his arms and hands, linked to a brain-machine interface, which allowed him to move his hands and arms; the implants were later removed. It is a different problem with a correspondingly different technological solution but there does seem to be increased interest in implanting sensors and electrodes into the human body to increase mobility and/or sensation.

Anderson describes how it ‘feels’,

“It was kind of surreal,” Anderson said. “I could visually see the hand go out, I would touch something, I would squeeze it and my phantom hand felt like it was being closed and squeezing on something and it was sending the message back to my brain.

“It was a very strange sensation to actually be able to feel that feedback because I hadn’t in 10 years.”

The feeling of movement in the prosthetic hand is an illusion, the researchers say, since the vibration is actually happening to a muscle elsewhere in the body. But the sensation appeared to have a real effect on the participants.

“They were able to control their grasp function and how much they were opening the hand, to the same degree that someone with an intact hand would,” said study co-author Dr. Jacqueline Hebert, an associate professor in the Faculty of Rehabilitation Medicine at the University of Alberta.

Although the researchers are encouraged by the study findings, they acknowledge that there was a small number of participants, who all had access to the specialized re-innervation surgery to redirect the nerves from their amputated hands to other parts of their body.

The next step, they say, is to see if they can also simulate the feeling of movement in a broader range of people who have had other types of amputations, including legs, and have not had the re-innervation surgery.

Here’s a March 15, 2018 CBC New Brunswick radio interview about the work,

This is a bit longer than most of the embedded audio pieces that I have here but it’s worth it. Sadly, I can’t identify the interviewer who did a very good job with Jon Sensinger, associate director of UNB’s Institute of Biomedical Engineering. One more thing, I noticed that the interviewer made no mention of the University of Alberta in her introduction or in the subsequent interview. I gather regionalism reigns supreme everywhere in Canada. Or, maybe she and Sensinger just forgot. It happens when you’re excited. Also, there were US institutions in Ohio and Virginia that participated in this work.

Here’s a link to and a citation for the team’s paper,

Illusory movement perception improves motor control for prosthetic hands by Paul D. Marasco, Jacqueline S. Hebert, Jon W. Sensinger, Courtney E. Shell, Jonathon S. Schofield, Zachary C. Thumser, Raviraj Nataraj, Dylan T. Beckler, Michael R. Dawson, Dan H. Blustein, Satinder Gill, Brett D. Mensh, Rafael Granja-Vazquez, Madeline D. Newcomb, Jason P. Carey, and Beth M. Orzell. Science Translational Medicine 14 Mar 2018: Vol. 10, Issue 432, eaao6990 DOI: 10.1126/scitranslmed.aao6990

This paper is open access.

Superprostheses and our science fiction future

A March 20, 2018 news item on phys.org features an essay about superprostheses and/or assistive devices,

Assistive devices may soon allow people to perform virtually superhuman feats. According to Robert Riener, however, there are more pressing goals than developing superhumans.

What had until recently been described as a futuristic vision has become a reality: the first self-declared “cyborgs” have had chips implanted in their bodies so that they can open doors and make cashless payments. The latest robotic hand prostheses succeed in performing all kinds of grips and tasks requiring dexterity. Parathletes fitted with running and spring prostheses compete – and win – against the best, non-impaired athletes. Then there are robotic pets and talking humanoid robots adding a bit of excitement to nursing homes.

Some media are even predicting that these high-tech creations will bring about forms of physiological augmentation overshadowing humans’ physical capabilities in ways never seen before. For instance, hearing aids are eventually expected to offer the ultimate in hearing; retinal implants will enable vision with a sharpness rivalling that of any eagle; motorised exoskeletons will transform soldiers into tireless fighting machines.

Visions of the future: the video game Deus Ex: Human Revolution highlights the emergence of physiological augmentation. (Visualisations: Square Enix) Courtesy: ETH Zurich

Professor Robert Riener uses the image above to illustrate the notion of superprostheses in his March 20, 2018 essay on the ETH Zurich website,

All of these prophecies notwithstanding, our robotic transformation into superheroes will not be happening in the immediate future and can still be filed under Hollywood hero myths. Compared to the technology available today, our bodies are a true marvel whose complexity and performance allows us to perform an extremely wide spectrum of tasks. Hundreds of efficient muscles, thousands of independently operating motor units along with millions of sensory receptors and billions of nerve cells allow us to perform delicate and detailed tasks with tweezers or lift heavy loads. Added to this, our musculoskeletal system is highly adaptable, can partly repair itself and requires only minimal amounts of energy in the form of relatively small amounts of food consumed.

Machines will not be able to match this any time soon. Today’s assistive devices are still laboratory experiments or niche products designed for very specific tasks. Markus Rehm, an athlete with a disability, does not use his innovative spring prosthesis to go for walks or drive a car. Nor can today’s conventional arm prostheses help a person tie their shoes or button up their shirt. Lifting devices used for nursing care are not suitable for helping with personal hygiene tasks or in psychotherapy. And robotic pets quickly lose their charm the moment their batteries die.

Solving real problems

There is no denying that advances continue to be made. Since the scientific and industrial revolutions, we have become dependent on relentless progress and growth, and we can no longer separate today’s world from this development. There are, however, more pressing issues to be solved than creating superhumans.

On the one hand, engineers need to dedicate their efforts to solving the real problems of patients, the elderly and people with disabilities. Better technical solutions are needed to help them lead normal lives and assist them in their work. We need motorised prostheses that also work in the rain and wheelchairs that can manoeuvre even with snow on the ground. Talking robotic nurses also need to be understood by hard-of-hearing pensioners as well as offer simple and dependable interactivity. Their batteries need to last at least one full day to be recharged overnight.

In addition, financial resources need to be available so that all people have access to the latest technologies, such as a high-quality household prosthesis for the family man, an extra prosthesis for the avid athlete or a prosthesis for the pensioner. [emphasis mine]

Breaking down barriers

What is just as important as the ongoing development of prostheses and assistive devices is the ability to minimise or eliminate physical barriers. Where there are no stairs, there is no need for elaborate special solutions like stair lifts or stairclimbing wheelchairs – or, presumably, fully motorised exoskeletons.

Efforts also need to be made to transform the way society thinks about people with disabilities. More acknowledgement of the day-to-day challenges facing patients with disabilities is needed, which requires that people be confronted with the topic of disability when they are still children. Such projects must be promoted at home and in schools so that living with impairments can also attain a state of normality and all people can partake in society. It is therefore also necessary to break down mental barriers.

The road to a virtually superhuman existence is still far and long. Anyone reading this text will not live to see it. In the meantime, the task at hand is to tackle the mundane challenges in order to simplify people’s daily lives in ways that do not require technology, that allow people to be active participants and improve their quality of life – instead of wasting our time getting caught up in cyborg euphoria and digital mania.

I’m struck by Riener’s reference to financial resources and access. Sensinger mentions financial resources in his CBC radio interview although his concern is with convincing funders that prostheses that mimic ‘feeling’ are needed.

I’m also struck by Riener’s discussion about nontechnological solutions for including people with all kinds of abilities and disabilities.

There was no grand plan for combining these two news bits; I just thought they were interesting together.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here), Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, ‘microscopic’ bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Solar-powered graphene skin for more feeling in your prosthetics

A March 23, 2017 news item on Nanowerk highlights research that could put feeling into a prosthetic limb,

A new way of harnessing the sun’s rays to power ‘synthetic skin’ could help to create advanced prosthetic limbs capable of returning the sense of touch to amputees.

Engineers from the University of Glasgow, who have previously developed an ‘electronic skin’ covering for prosthetic hands made from graphene, have found a way to use some of graphene’s remarkable physical properties to use energy from the sun to power the skin.

Graphene is a highly flexible form of graphite which, despite being just a single atom thick, is stronger than steel, electrically conductive, and transparent. It is graphene’s optical transparency, which allows around 98% of the light which strikes its surface to pass directly through it, which makes it ideal for gathering energy from the sun to generate power.

A March 23, 2017 University of Glasgow press release, which originated the news item, details more about the research,

Caption: Dr Ravinder Dahiya

A new research paper, published today in the journal Advanced Functional Materials, describes how Dr Dahiya and colleagues from his Bendable Electronics and Sensing Technologies (BEST) group have integrated power-generating photovoltaic cells into their electronic skin for the first time.

Dr Dahiya, from the University of Glasgow’s School of Engineering, said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors which carry signals from the skin to the brain.

“My colleagues and I have already made significant steps in creating prosthetic prototypes which integrate synthetic skin and are capable of making very sensitive pressure measurements. Those measurements mean the prosthetic hand is capable of performing challenging tasks like properly gripping soft materials, which other prosthetics can struggle with. We are also using innovative 3D printing strategies to build more affordable sensitive prosthetic limbs, including the formation of a very active student club called ‘Helping Hands’.

“Skin capable of touch sensitivity also opens the possibility of creating robots capable of making better decisions about human safety. A robot working on a construction line, for example, is much less likely to accidentally injure a human if it can feel that a person has unexpectedly entered their area of movement and stop before an injury can occur.”

The new skin requires just 20 nanowatts of power per square centimetre, which is easily met even by the poorest-quality photovoltaic cells currently available on the market. And although currently energy generated by the skin’s photovoltaic cells cannot be stored, the team are already looking into ways to divert unused energy into batteries, allowing the energy to be used as and when it is required.
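To put that figure in perspective, here is a quick back-of-envelope calculation of my own (the irradiance and cell efficiency are illustrative assumptions; only the 20 nanowatt requirement comes from the press release),

```python
# Back-of-envelope power budget for the solar-powered 'synthetic skin'.
# Only the 20 nW/cm^2 requirement comes from the research; the irradiance
# and efficiency figures below are illustrative assumptions.

SOLAR_IRRADIANCE_MW_PER_CM2 = 100.0   # ~1000 W/m^2 full sunlight, in mW/cm^2
CELL_EFFICIENCY = 0.05                # assume a poor-quality 5% photovoltaic cell
SKIN_DEMAND_NW_PER_CM2 = 20.0         # figure quoted in the press release

generated_nw_per_cm2 = SOLAR_IRRADIANCE_MW_PER_CM2 * CELL_EFFICIENCY * 1e6  # mW -> nW
margin = generated_nw_per_cm2 / SKIN_DEMAND_NW_PER_CM2

print(f"Generated: {generated_nw_per_cm2:,.0f} nW/cm^2")
print(f"Required:  {SKIN_DEMAND_NW_PER_CM2:,.0f} nW/cm^2")
print(f"Margin:    ~{margin:,.0f}x")  # roughly 250,000x headroom
```

Even under these deliberately pessimistic assumptions the cells generate orders of magnitude more power than the skin needs, which is presumably why the team is looking at diverting the surplus into batteries and, eventually, the hand’s motors.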

Dr Dahiya added: “The other next step for us is to further develop the power-generation technology which underpins this research and use it to power the motors which drive the prosthetic hand itself. This could allow the creation of an entirely energy-autonomous prosthetic limb.

“We’ve already made some encouraging progress in this direction and we’re looking forward to presenting those results soon. We are also exploring the possibility of building on these exciting results to develop wearable systems for affordable healthcare. In this direction, recently we also got small funds from Scottish Funding Council.”

For more information about this advance and others in the field of prosthetics you may want to check out Megan Scudellari’s March 30, 2017 article for the IEEE’s (Institute of Electrical and Electronics Engineers) Spectrum (Note: Links have been removed),

Cochlear implants can restore hearing to individuals with some types of hearing loss. Retinal implants are now on the market to restore sight to the blind. But there are no commercially available prosthetics that restore a sense of touch to those who have lost a limb.

Several products are in development, including this haptic system at Case Western Reserve University, which would enable upper-limb prosthetic users to, say, pluck a grape off a stem or pull a potato chip out of a bag. It sounds simple, but such tasks are virtually impossible without a sense of touch and pressure.

Now, a team at the University of Glasgow that previously developed a flexible ‘electronic skin’ capable of making sensitive pressure measurements, has figured out how to power their skin with sunlight. …

Here’s a link to and a citation for the paper,

Energy-Autonomous, Flexible, and Transparent Tactile Skin by Carlos García Núñez, William Taube Navaraj, Emre O. Polat and Ravinder Dahiya. Advanced Functional Materials DOI: 10.1002/adfm.201606287 Version of Record online: 22 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

A new class of artificial retina

If I read the news release rightly (keep scrolling), this particular artificial retina does not require a device outside the body (e.g. specially developed eyeglasses) to capture an image to be transmitted to the implant. This new artificial retina captures the image directly.

The announcement of a new artificial retina is made in a March 13, 2017 news item on Nanowerk (Note: A link has been removed),

A team of engineers at the University of California San Diego and La Jolla-based startup Nanovision Biosciences Inc. have developed the nanotechnology and wireless electronics for a new type of retinal prosthesis that brings research a step closer to restoring the ability of neurons in the retina to respond to light. The researchers demonstrated this response to light in a rat retina interfacing with a prototype of the device in vitro.

They detail their work in a recent issue of the Journal of Neural Engineering (“Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry”). The technology could help tens of millions of people worldwide suffering from neurodegenerative diseases that affect eyesight, including macular degeneration, retinitis pigmentosa and loss of vision due to diabetes.

Caption: These are primary cortical neurons cultured on the surface of an array of optoelectronic nanowires. Here a neuron is pulling the nanowires, indicating that the cell is doing well on this material. Credit: UC San Diego

A March 13, 2017 University of California at San Diego (UCSD) news release (also on EurekAlert) by Ioana Patringenaru, which originated the news item, details the new approach,

Despite tremendous advances in the development of retinal prostheses over the past two decades, the performance of devices currently on the market to help the blind regain functional vision is still severely limited–well under the acuity threshold of 20/200 that defines legal blindness.

“We want to create a new class of devices with drastically improved capabilities to help people with impaired vision,” said Gabriel A. Silva, one of the senior authors of the work and professor in bioengineering and ophthalmology at UC San Diego. Silva also is one of the original founders of Nanovision.

The new prosthesis relies on two groundbreaking technologies. One consists of arrays of silicon nanowires that simultaneously sense light and electrically stimulate the retina accordingly. The nanowires give the prosthesis higher resolution than anything achieved by other devices–closer to the dense spacing of photoreceptors in the human retina. The other breakthrough is a wireless device that can transmit power and data to the nanowires over the same wireless link at record speed and energy efficiency.

One of the main differences between the researchers’ prototype and existing retinal prostheses is that the new system does not require a vision sensor outside of the eye [emphasis mine] to capture a visual scene and then transform it into alternating signals to sequentially stimulate retinal neurons. Instead, the silicon nanowires mimic the retina’s light-sensing cones and rods to directly stimulate retinal cells. Nanowires are bundled into a grid of electrodes, directly activated by light and powered by a single wireless electrical signal. This direct and local translation of incident light into electrical stimulation makes for a much simpler–and scalable–architecture for the prosthesis.

The power provided to the nanowires from the single wireless electrical signal gives the light-activated electrodes their high sensitivity while also controlling the timing of stimulation.
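To make that architecture concrete, here is a toy sketch, entirely my own illustration rather than the team’s design: the grid size, current scale and linear light-to-current mapping are all invented. It shows how each electrode’s output could be a local function of the light falling on it, gated by the single shared wireless power signal,

```python
import numpy as np

# Toy model of direct optical addressing: each nanowire electrode converts the
# light intensity at its own grid position into stimulation current, gated by
# a single shared wireless power/timing signal. (Array size and the linear
# light-to-current scaling are invented for illustration.)

GRID = (32, 32)                       # hypothetical electrode grid
MAX_CURRENT_UA = 10.0                 # hypothetical full-scale current (uA)

def stimulation_pattern(light: np.ndarray, power_on: bool) -> np.ndarray:
    """Per-electrode stimulation current (uA) for a normalized light image."""
    if not power_on:                  # no wireless power -> whole array silent
        return np.zeros_like(light)
    return np.clip(light, 0.0, 1.0) * MAX_CURRENT_UA

scene = np.random.rand(*GRID)         # stand-in for incident light
print(stimulation_pattern(scene, power_on=True).max())   # up to 10 uA
print(stimulation_pattern(scene, power_on=False).sum())  # 0.0 -- silent
```

Note how this also captures the proof-of-concept result reported further down: the electrodes respond only when light and electrical power are both present.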

“To restore functional vision, it is critical that the neural interface matches the resolution and sensitivity of the human retina,” said Gert Cauwenberghs, a professor of bioengineering at the Jacobs School of Engineering at UC San Diego and the paper’s senior author.

Wireless telemetry system

Power is delivered wirelessly, from outside the body to the implant, through an inductive powering telemetry system developed by a team led by Cauwenberghs.

The device is highly energy efficient because it minimizes energy losses in wireless power and data transmission and in the stimulation process, recycling electrostatic energy circulating within the inductive resonant tank, and between capacitance on the electrodes and the resonant tank. Up to 90 percent of the energy transmitted is actually delivered and used for stimulation, which means less RF power radiated in transmission and less heating of the surrounding tissue from dissipated power.

The telemetry system is capable of transmitting both power and data over a single pair of inductive coils, one emitting from outside the body, and another on the receiving side in the eye. The link can send and receive one bit of data for every two cycles of the 13.56 megahertz RF signal; other two-coil systems need at least 5 cycles for every bit transmitted.
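The data-rate arithmetic implied by those numbers is easy to check; this little sketch of mine uses only the figures quoted above,

```python
# Data-rate arithmetic for the inductive telemetry link described above.
# The 13.56 MHz carrier and cycles-per-bit figures come from the news release;
# the comparison below is just the implied arithmetic.

CARRIER_HZ = 13.56e6

def data_rate_mbps(cycles_per_bit: float) -> float:
    """Bits per second for a link encoding one bit every N carrier cycles."""
    return CARRIER_HZ / cycles_per_bit / 1e6

print(f"UCSD link (1 bit / 2 cycles):    {data_rate_mbps(2):.2f} Mbit/s")  # ~6.78
print(f"Typical link (1 bit / 5 cycles): {data_rate_mbps(5):.2f} Mbit/s")  # ~2.71
```

In other words, the same carrier frequency yields roughly two and a half times the throughput of a conventional two-coil system.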

Proof-of-concept test

For proof-of-concept, the researchers inserted the wirelessly powered nanowire array beneath a transgenic rat retina with rhodopsin P23H knock-in retinal degeneration. The degenerated retina interfaced in vitro with a microelectrode array for recording extracellular neural action potentials (electrical “spikes” from neural activity).

The horizontal and bipolar neurons fired action potentials preferentially when the prosthesis was exposed to a combination of light and electrical potential–and were silent when either light or electrical bias was absent, confirming the light-activated and voltage-controlled responsivity of the nanowire array.

The wireless nanowire array device is the result of a collaboration between a multidisciplinary team led by Cauwenberghs, Silva and William R. Freeman, director of the Jacobs Retina Center at UC San Diego, UC San Diego electrical engineering professor Yu-Hwa Lo and Nanovision Biosciences.

A path to clinical translation

Freeman, Silva and Scott Thorogood have co-founded La Jolla-based Nanovision Biosciences, a partner in this study, to further develop and translate the technology into clinical use, with the goal of restoring functional vision in patients with severe retinal degeneration. Animal tests with the device are in progress, with clinical trials to follow.

“We have made rapid progress with the development of the world’s first nanoengineered retinal prosthesis as a result of the unique partnership we have developed with the team at UC San Diego,” said Thorogood, who is the CEO of Nanovision Biosciences.

Here’s a link to and a citation for the paper,

Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry by Sohmyung Ha, Massoud L Khraiche, Abraham Akinin, Yi Jing, Samir Damle, Yanjin Kuang, Sue Bauchner, Yu-Hwa Lo, William R Freeman, Gabriel A Silva. Journal of Neural Engineering, Volume 13, Number 5 DOI: https://doi.org/10.1088/1741-2560/13/5/056008

Published 16 August 2016 • © 2016 IOP Publishing Ltd

I’m not sure why they waited so long to make the announcement but, in any event, this paper is behind a paywall.

Bidirectional prosthetic-brain communication with light?

The possibility of a prosthetic that allows a tetraplegic not only to grab a coffee cup but to feel that cup with their ‘hand’ is one step closer to reality, according to a Feb. 22, 2017 news item on ScienceDaily,

Since the early seventies, scientists have been developing brain-machine interfaces, the main application being the use of neural prostheses in paralyzed patients or amputees. A prosthetic limb directly controlled by brain activity can partially recover the lost motor function. This is achieved by decoding neuronal activity recorded with electrodes and translating it into robotic movements. Such systems, however, have limited precision due to the absence of sensory feedback from the artificial limb. Neuroscientists at the University of Geneva (UNIGE), Switzerland, asked whether it was possible to transmit this missing sensation back to the brain by stimulating neural activity in the cortex. They discovered that not only was it possible to create an artificial sensation of neuroprosthetic movements, but that the underlying learning process occurs very rapidly. These findings, published in the scientific journal Neuron, were obtained using modern imaging and optical stimulation tools, offering an innovative alternative to the classical electrode approach.

A Feb. 22, 2017 Université de Genève press release on EurekAlert, which originated the news item, provides more detail,

Motor function is at the heart of all behavior and allows us to interact with the world. Therefore, replacing a lost limb with a robotic prosthesis is the subject of much research, yet successful outcomes are rare. Why is that? Until now, brain-machine interfaces have been operated by relying largely on visual perception: the robotic arm is controlled by looking at it. The flow of information between the brain and the machine thus remains unidirectional. However, movement perception is not only based on vision but mostly on proprioception, the sensation of where the limb is located in space. “We have therefore asked whether it was possible to establish a bidirectional communication in a brain-machine interface: to simultaneously read out neural activity, translate it into prosthetic movement and reinject sensory feedback of this movement back in the brain”, explains Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE.

Providing artificial sensations of prosthetic movements

In contrast to invasive approaches using electrodes, Daniel Huber’s team specializes in optical techniques for imaging and stimulating brain activity. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single cell resolution. “We wanted to test whether mice could learn to control a neural prosthesis by relying uniquely on an artificial sensory feedback signal”, explains Mario Prsa, researcher at UNIGE and the first author of the study. “We imaged neural activity in the motor cortex. When the mouse activated a specific neuron, the one chosen for neuroprosthetic control, we simultaneously applied stimulation proportional to this activity to the sensory cortex using blue light”. Indeed, neurons of the sensory cortex were rendered photosensitive to this light, allowing them to be activated by a series of optical flashes and thus integrate the artificial sensory feedback signal. The mouse was rewarded upon every above-threshold activation, and 20 minutes later, once the association learned, the rodent was able to more frequently generate the correct neuronal activity.
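The closed loop described here, reading out one chosen neuron, converting its activity into proportional optical stimulation, and rewarding above-threshold activations, can be sketched in a few lines of Python. Everything below, from the function names to the threshold value, is a hypothetical stand-in for the actual two-photon rig,

```python
import random

# Conceptual sketch of the UNIGE closed-loop experiment: read out one motor
# cortex neuron, feed its activity back as proportional optical stimulation of
# sensory cortex, and reward above-threshold activations. All names, values,
# and the random 'neuron' are hypothetical stand-ins for the real rig.

REWARD_THRESHOLD = 0.8    # invented units of neural activity
FEEDBACK_GAIN = 1.0       # blue-light intensity per unit of activity

def read_target_neuron() -> float:
    """Stand-in for two-photon imaging of the chosen motor cortex neuron."""
    return random.random()

def optical_feedback(activity: float) -> float:
    """Stand-in for optogenetic stimulation of sensory cortex (blue light)."""
    return FEEDBACK_GAIN * activity   # stimulation proportional to activity

def trial() -> bool:
    activity = read_target_neuron()
    optical_feedback(activity)        # closes the sensory loop
    return activity >= REWARD_THRESHOLD

rewards = sum(trial() for _ in range(1000))
print(f"{rewards} rewarded activations out of 1000 trials")
```

In the real experiment, of course, the learning happens in the animal: over the 20 minutes described above, the mouse shifts its neural activity so that rewarded trials become more frequent.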

This means that the artificial sensation was not only perceived, but that it was successfully integrated as feedback on the prosthetic movement. In this manner, the brain-machine interface functions bidirectionally. The Geneva researchers think that the reason why this fabricated sensation is so rapidly assimilated is that it most likely taps into very basic brain functions. Feeling the position of our limbs occurs automatically, without much thought, and probably reflects fundamental neural circuit mechanisms. In the future, this type of bidirectional interface might allow robotic arms to be moved more precisely, touched objects to be felt, or the force needed to grasp them to be perceived.

At present, the neuroscientists at UNIGE are examining how to produce more efficient sensory feedback. They are currently capable of providing it for a single movement, but is it also possible to provide multiple feedback channels in parallel? This research sets the groundwork for developing a new generation of more precise, bidirectional neural prostheses.

Towards better understanding the neural mechanisms of neuroprosthetic control

Using modern imaging tools, the researchers could also observe hundreds of neurons in the surrounding area as the mouse learned the neuroprosthetic task. “We know that millions of neural connections exist. However, we discovered that the animal activated only the one neuron chosen for controlling the prosthetic action, and did not recruit any of the neighbouring neurons”, adds Daniel Huber. “This is a very interesting finding since it reveals that the brain can home in on and specifically control the activity of just one single neuron”. Researchers can potentially exploit this knowledge to not only develop more stable and precise decoding techniques, but also gain a better understanding of the most basic neural circuit functions. It remains to be discovered what mechanisms are involved in routing signals to the uniquely activated neuron.

Caption: A novel optical brain-machine interface allows bidirectional communication with the brain. While a robotic arm is controlled by neuronal activity recorded with optical imaging (red laser), the position of the arm is fed back to the brain via optical microstimulation (blue laser). Credit: © Daniel Huber, UNIGE

Here’s a link to and a citation for the paper,

Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons by Mario Prsa, Gregorio L. Galiñanes, Daniel Huber. Neuron Volume 93, Issue 4, p929–939.e6, 22 February 2017 DOI: http://dx.doi.org/10.1016/j.neuron.2017.01.023 Open access funded by European Research Council

This paper is open access.

Technology, athletics, and the ‘new’ human

There is a tension between Olympic and Paralympic athletes, as some able-bodied athletes feel that Paralympic athletes may have an advantage due to their prosthetics. Roger Pielke Jr. has written a fascinating account of these tensions as a means of asking what it all means. From Pielke Jr.’s Aug. 3, 2016 post on the Guardian Science blogs (Note: Links have been removed),

Athletes are humans too, and they sometimes look for a performance improvement through technological enhancements. In my forthcoming book, The Edge: The War Against Cheating and Corruption in the Cutthroat World of Elite Sports, I discuss a range of technological augmentations to both people and to sports, and the challenges that they pose for rule making. In humans, such improvements can be the result of surgery to reshape (like laser eye surgery) or strengthen (such as replacing a ligament with a tendon) the body to aid performance, or to add biological or non-biological parts that the individual wasn’t born with.

One well-known case of technological augmentation involved the South African sprinter Oscar Pistorius, who ran in the 2012 Olympic Games on prosthetic “blades” below his knees (during happier days for the athlete who is currently jailed in South Africa for the killing of his girlfriend, Reeva Steenkamp). Years before the London Games Pistorius began to have success on the track running against able-bodied athletes. As a consequence of this success and Pistorius’s interest in competing at the Olympic games, the International Association of Athletics Federations (or IAAF, which oversees elite track and field competitions) introduced a rule in 2007, focused specifically on Pistorius, prohibiting the “use of any technical device that incorporates springs, wheels, or any other element that provides the user with an advantage over another athlete not using such a device.” Under this rule, Pistorius was determined by the IAAF to be ineligible to compete against able-bodied athletes.

Pistorius appealed the decision to the Court of Arbitration for Sport. The appeal hinged on answering a metaphysical question—how fast would Pistorius have run had he been born with functioning legs below the knee? In other words, did the blades give him an advantage over other athletes that the hypothetical, able-bodied Oscar Pistorius would not have had? Because there never was an able-bodied Pistorius, the CAS looked to scientists to answer the question.

CAS concluded that the IAAF was in fact fixing the rules to prevent Pistorius from competing and that “at least some IAAF officials had determined that they did not want Mr. Pistorius to be acknowledged as eligible to compete in international IAAF-sanctioned events, regardless of the results that properly conducted scientific studies might demonstrate.” CAS determined that it was the responsibility of the IAAF to show “on the balance of probabilities” that Pistorius gained an advantage by running on his blades. CAS concluded that the research commissioned by the IAAF did not show conclusively such an advantage.

As a result, CAS ruled that Pistorius was able to compete in the London Games, where he reached the semifinals of the 400 meters. CAS concluded that resolving such disputes “must be viewed as just one of the challenges of 21st Century life.”

The story does not end with Oscar Pistorius, as Pielke Jr. notes. There has been another challenge, this time by Markus Rehm, a German long jumper who leaps off a prosthetic leg. Interestingly, the rules have changed since Oscar Pistorius won his case (Note: Links have been removed),

In the Pistorius case, under the rules for inclusion in the Olympic games the burden of proof had been on the IAAF, not the athlete, to demonstrate the presence of an advantage provided by technology.

This precedent was overturned in 2015, when the IAAF quietly introduced a new rule that in such cases reverses the burden of proof. The switch placed the burden of proof on the athlete instead of the governing body. The new rule—which we might call the Rehm Rule, given its timing—states that an athlete with a prosthetic limb (specifically, any “mechanical aid”) cannot participate in IAAF events “unless the athlete can establish on the balance of probabilities that the use of an aid would not provide him with an overall competitive advantage over an athlete not using such aid.” This new rule effectively slammed the door on Paralympians with prosthetics participating in the Olympic Games.

Even if an athlete might have the resources to enlist researchers to carefully study his or her performance, the IAAF requires the athlete to do something that is very difficult, and often altogether impossible—to prove a negative.

If you have the time, I encourage you to read Pielke Jr.’s piece in its entirety as he notes the secrecy with which the Rehm rule was implemented and the implications for the future. Here’s one last excerpt (Note: A link has been removed),

We may be seeing only the beginning of debates over technological augmentation and sport. Silvia Camporesi, an ethicist at King’s College London, observed: “It is plausible to think that in 50 years, or maybe less, the ‘natural’ able-bodied athletes will just appear anachronistic.” She continues: “As our concept of what is ‘natural’ depends on what we are used to, and evolves with our society and culture, so does our concept of ‘purity’ of sport.”

I have written many times about human augmentation, and the possibility that what is now viewed as a ‘normal’ body may one day be viewed as subpar or inferior is not all that farfetched. David Epstein’s 2014 TED talk “Are athletes really getting faster, better, stronger?” points out that, in addition to sports technology innovations, athletes’ bodies have changed considerably since the beginning of the 20th century. He doesn’t discuss body augmentation but it seems increasingly likely, not just for athletes but for everyone.

As for athletes and augmentation, Epstein has an Aug. 7, 2016 Scientific American piece published on Salon.com in time for the 2016 Summer Olympics in Rio de Janeiro,

I knew Eero Mäntyranta had magic blood, but I hadn’t expected to see it in his face. I had tracked him down above the Arctic Circle in Finland where he was — what else? — a reindeer farmer.

He was all red. Not just the crimson sweater with knitted reindeer crossing his belly, but his actual skin. It was cardinal dappled with violet, his nose a bulbous purple plum. In the pictures I’d seen of him in Sports Illustrated in the 1960s — when he’d won three Olympic gold medals in cross-country skiing — he was still white. But now, as an older man, his special blood had turned him red.

Mäntyranta had about 50 percent more red blood cells than a normal man. If Armstrong [Lance Armstrong, cyclist] had as many red blood cells as Mäntyranta, cycling rules would have barred him from even starting a race, unless he could prove it was a natural condition.

During his career, Mäntyranta was accused of doping after his high red blood cell count was discovered. Two decades after he retired, Finnish scientists found his family’s mutation. …

Epstein also covers the Pistorius story, albeit with more detail about the science and controversy of determining whether someone with prosthetics may have an advantage over an able-bodied athlete. Scientists don’t agree about whether or not there is an advantage.

I have many other posts on the topic of augmentation. You can find them under the Human Enhancement category and you can also try the tag, machine/flesh.