A solar, self-charging supercapacitor for wearable technology

Ravinder Dahiya, Carlos García Núñez, and their colleagues at the University of Glasgow (Scotland) strike again (see my May 10, 2017 posting for their first ‘solar-powered graphene skin’ research announcement). Last time it was all about robots and prosthetics; this time they’ve focused on wearable technology, according to a July 18, 2018 news item on phys.org,

A new form of solar-powered supercapacitor could help make future wearable technologies lighter and more energy-efficient, scientists say.

In a paper published in the journal Nano Energy, researchers from the University of Glasgow’s Bendable Electronics and Sensing Technologies (BEST) group describe how they have developed a promising new type of graphene supercapacitor, which could be used in the next generation of wearable health sensors.

A July 18, 2018 University of Glasgow press release, which originated the news item, explains further,

Currently, wearable systems generally rely on relatively heavy, inflexible batteries, which can be uncomfortable for long-term users. The BEST team, led by Professor Ravinder Dahiya, have built on their previous success in developing flexible sensors by developing a supercapacitor which could power health sensors capable of conforming to wearers’ bodies, offering more comfort and more consistent contact with the skin to better collect health data.

Their new supercapacitor uses layers of flexible, three-dimensional porous foam formed from graphene and silver to produce a device capable of storing and releasing around three times more power than any similar flexible supercapacitor. The team demonstrated the durability of the supercapacitor, showing that it provided power consistently across 25,000 charging and discharging cycles.

They have also found a way to charge the system by integrating it with the flexible solar-powered skin already developed by the BEST group, effectively creating an entirely self-charging system, as well as a pH sensor which uses the wearer’s sweat to monitor their health.

Professor Dahiya said: “We’re very pleased by the progress this new form of solar-powered supercapacitor represents. A flexible, wearable health monitoring system which only requires exposure to sunlight to charge has a lot of obvious commercial appeal, but the underlying technology has a great deal of additional potential.

“This research could take the wearable systems for health monitoring to remote parts of the world where solar power is often the most reliable source of energy, and it could also increase the efficiency of hybrid electric vehicles. We’re already looking at further integrating the technology into flexible synthetic skin which we’re developing for use in advanced prosthetics.” [emphasis mine]
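For readers who like to see the arithmetic behind claims like ‘three times more power’, here’s a quick back-of-envelope sketch (the capacitance and voltage figures below are hypothetical, not taken from the paper): the energy an ideal capacitor stores is E = ½CV², so tripling the effective capacitance at a fixed voltage triples the stored energy,

```python
# Back-of-envelope sketch (illustrative numbers, not from the paper):
# the energy stored in an ideal capacitor is E = 1/2 * C * V^2, so
# tripling the effective capacitance at the same voltage triples the
# stored energy.

def stored_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored in an ideal capacitor, E = 0.5 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

baseline = stored_energy_joules(0.5, 1.0)   # 0.25 J for a 0.5 F cell at 1 V
improved = stored_energy_joules(1.5, 1.0)   # 0.75 J with 3x the capacitance
print(improved / baseline)                  # prints 3.0
```

Real supercapacitors are rated by areal or volumetric capacitance and by how much of that capacity survives repeated cycling (the 25,000 cycles mentioned above), but the square-law dependence on voltage is the same.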

In addition to the team’s work on robots, prosthetics, and graphene ‘skin’ mentioned in the May 10, 2017 posting, the team is working on a synthetic ‘brainy’ skin, for which they have just received £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Brainy skin

A July 3, 2018 University of Glasgow press release discusses the proposed work in more detail,

A robotic hand covered in ‘brainy skin’ that mimics the human sense of touch is being developed by scientists.

University of Glasgow’s Professor Ravinder Dahiya has plans to develop ultra-flexible, synthetic Brainy Skin that ‘thinks for itself’.

The super-flexible, hypersensitive skin may one day be used to make more responsive prosthetics for amputees, or to build robots with a sense of touch.

Brainy Skin reacts like human skin, which has its own neurons that respond immediately to touch rather than having to relay the whole message to the brain.

This electronic ‘thinking skin’ is made from silicon-based printed neural transistors and graphene – an ultra-thin form of carbon that is only an atom thick, but stronger than steel.

The new version is more powerful, less cumbersome and would work better than earlier prototypes, also developed by Professor Dahiya and his Bendable Electronics and Sensing Technologies (BEST) team at the University’s School of Engineering.

His futuristic research, called neuPRINTSKIN (Neuromorphic Printed Tactile Skin), has just received another £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Professor Dahiya said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors that carry signals from the skin to the brain.

“Inspired by real skin, this project will harness the technological advances in electronic engineering to mimic some features of human skin, such as softness, bendability and now, also sense of touch. This skin will not just mimic the morphology of the skin but also its functionality.

“Brainy Skin is critical for the autonomy of robots and for a safe human-robot interaction to meet emerging societal needs such as helping the elderly.”

Synthetic ‘Brainy Skin’ with sense of touch gets £1.5m funding. Photo of Professor Ravinder Dahiya

This latest advance means tactile data is gathered over large areas by the synthetic skin’s computing system rather than sent to the brain for interpretation.

With additional EPSRC funding, which extends Professor Dahiya’s fellowship by another three years, he plans to introduce tactile skin with neuron-like processing. This breakthrough in the tactile sensing research will lead to the first neuromorphic tactile skin, or ‘brainy skin.’

To achieve this, Professor Dahiya will add a new neural layer to the e-skin that he has already developed using printed silicon nanowires.

Professor Dahiya added: “By adding a neural layer underneath the current tactile skin, neuPRINTSKIN will add significant new perspective to the e-skin research, and trigger transformations in several areas such as robotics, prosthetics, artificial intelligence, wearable systems, next-generation computing, and flexible and printed electronics.”

The Engineering and Physical Sciences Research Council (EPSRC) is part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

EPSRC is the main funding body for engineering and physical sciences research in the UK. By investing in research and postgraduate training, the EPSRC is building the knowledge and skills base needed to address the scientific and technological challenges facing the nation.

Its portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research funded by EPSRC has impact across all sectors. It provides a platform for future UK prosperity by contributing to a healthy, connected, resilient, productive nation.

It’s fascinating to note how these pieces of research fit together: wearable technology for health monitoring, more responsive robot ‘skin’ and, possibly, prosthetic devices that would allow someone to feel again.

The latest research paper

Getting back to the solar-charging supercapacitors mentioned in the opening, here’s a link to and a citation for the team’s latest research paper,

Flexible self-charging supercapacitor based on graphene-Ag-3D graphene foam electrodes by Libu Manjakkal, Carlos García Núñez, Wenting Dang, and Ravinder Dahiya. Nano Energy, Volume 51, September 2018, Pages 604-612. DOI: https://doi.org/10.1016/j.nanoen.2018.06.072

This paper is open access.

Prosthetic pain

“Feeling no pain” can be a euphemism for being drunk. However, there are some people for whom it’s not a euphemism: they literally feel no pain for one reason or another. Amputees, who feel nothing, pain included, in their missing limbs, are one such group, and a research team at Johns Hopkins University (Maryland, US) has found a way for them to feel pain again.

A June 20, 2018 news item on ScienceDaily provides an introduction to the research and to the reason for it,

Amputees often experience the sensation of a “phantom limb” — a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

A June 20, 2018 Johns Hopkins University news release (also on EurekAlert), which originated the news item, explores the research in more depth,

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.

“This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.

The researchers then connected the e-dermis output to the volunteer by using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction to both pain while touching a pointed object and non-pain when touching a round object.

The e-dermis is not sensitive to temperature: for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be extended to astronaut gloves and space suits, Osborn says.

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering through the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.
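To make the ‘neuromorphic encoding’ idea a little more concrete, here’s a minimal sketch of the general approach; to be clear, the function name, frequency bands, and threshold below are all hypothetical, not the published Johns Hopkins model, but the shape of the mapping (innocuous touch encoded in a low-frequency band, sharp or noxious contact in a distinctly higher one) follows the description above,

```python
# Illustrative sketch of a neuromorphic-style encoding (hypothetical
# names and numbers, not the published model): normalized fingertip
# pressure and object sharpness are mapped to a nerve-stimulation
# frequency, with sharp/noxious contact driven in a distinctly higher
# band than innocuous touch.

def encode_stimulation(pressure, sharpness,
                       touch_band=(10.0, 40.0),    # Hz, innocuous touch
                       pain_band=(60.0, 120.0),    # Hz, noxious contact
                       sharp_threshold=0.7):
    """Map sensor values in 0..1 to a stimulation pulse frequency (Hz)."""
    pressure = max(0.0, min(1.0, pressure))
    sharpness = max(0.0, min(1.0, sharpness))
    lo, hi = pain_band if sharpness >= sharp_threshold else touch_band
    return lo + (hi - lo) * pressure

# A round object pressed firmly stays in the touch band (~37 Hz),
# while a sharp point at the same pressure jumps to the pain band (~114 Hz).
print(encode_stimulation(pressure=0.9, sharpness=0.2))
print(encode_stimulation(pressure=0.9, sharpness=0.9))
```

The point of separating the bands is that the wearer can tell the two sensations apart without any conscious decoding, which is what lets the round-versus-pointed discrimination described in the release work.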

Here’s a video about this work,

Sarah Zhang’s June 20, 2018 article for The Atlantic reveals a few more details while covering some of the material in the news release,

Osborn and his team added one more feature to make the prosthetic hand, as he puts it, “more lifelike, more self-aware”: When it grasps something too sharp, it’ll open its fingers and immediately drop it—no human control necessary. The fingers react in just 100 milliseconds, the speed of a human reflex. Existing prosthetic hands have a similar degree of theoretically helpful autonomy: If an object starts slipping, the hand will grasp more tightly. Ideally, users would have a way to override a prosthesis’s reflex, like how you can hold your hand on a stove if you really, really want to. After all, the whole point of having a hand is being able to tell it what to do.
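The reflex Zhang describes boils down to a simple decision rule evaluated on every tick of the hand’s control loop, with the user able to override it. A toy sketch (the names and threshold are mine, not the team’s actual controller),

```python
# Toy version of the reflex described above (hypothetical names, not
# the team's controller): on each control-loop tick, a sharp contact
# triggers an automatic release unless the user explicitly overrides,
# mirroring the ~100 ms human-reflex reaction the article mentions.

def reflex_step(sharpness, gripping, user_override=False, sharp_threshold=0.7):
    """Return the next grip command ('open', 'hold', or 'idle') for one tick."""
    if gripping and sharpness >= sharp_threshold and not user_override:
        return "open"   # reflexive drop: release the sharp object
    return "hold" if gripping else "idle"

print(reflex_step(sharpness=0.9, gripping=True))                      # open
print(reflex_step(sharpness=0.9, gripping=True, user_override=True))  # hold
print(reflex_step(sharpness=0.2, gripping=True))                      # hold
```

The override path is exactly the hold-your-hand-on-the-stove case Zhang raises: the reflex should be the default, not a veto over the wearer’s intent.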

Here’s a link to and a citation for the paper,

Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain by Luke E. Osborn, Andrei Dragomir, Joseph L. Betthauser, Christopher L. Hunt, Harrison H. Nguyen, Rahul R. Kaliki, and Nitish V. Thakor. Science Robotics 20 Jun 2018: Vol. 3, Issue 19, eaat3818 DOI: 10.1126/scirobotics.aat3818

This paper is behind a paywall.

Better motor control for prosthetic hands (the illusion of feeling) and a discussion of superprostheses and reality

I have two bits about prosthetics, one which focuses on how most of us think of them and another about science fiction fantasies.

Better motor control

This new technology comes via a collaboration between the University of Alberta, the University of New Brunswick (UNB) and Ohio’s Cleveland Clinic, from a March 18, 2018 article by Nicole Ireland for the Canadian Broadcasting Corporation’s (CBC) news online,

Rob Anderson was fighting wildfires in Alberta when the helicopter he was in crashed into the side of a mountain. He survived, but lost his left arm and left leg.

More than 10 years after that accident, Anderson, now 39, says prosthetic limb technology has come a long way, and he feels fortunate to be using “top of the line stuff” to help him function as normally as possible. In fact, he continues to work for the Alberta government’s wildfire fighting service.

His powered prosthetic hand can do basic functions like opening and closing, but he doesn’t feel connected to it — and has limited ability to perform more intricate movements with it, such as shaking hands or holding a glass.

Anderson, who lives in Grande Prairie, Alta., compares its function to “doing things with a long pair of pliers.”

“There’s a disconnect between what you’re physically touching and what your body is doing,” he told CBC News.

Anderson is one of four Canadian participants in a study that suggests there’s a way to change that. …

Six people, all of whom had arm amputations from below the elbow or higher, took part in the research. It found that strategically placed vibrating “robots” made them “feel” the movements of their prosthetic hands, allowing them to grasp and grip objects with much more control and accuracy.

All of the participants had previously undergone a specialized surgical procedure called “targeted re-innervation.” The nerves that had connected to their hands before they were amputated were rewired to link instead to muscles (including the biceps and triceps) in their remaining upper arms and in their chests.

For the study, researchers placed the robotic devices on the skin over those re-innervated muscles and vibrated them as the participants opened, closed, grasped or pinched with their prosthetic hands.

While the vibration was turned on, the participants “felt” their artificial hands moving and could adjust their grip based on the sensation. …
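Conceptually, the feedback loop is simple: while the prosthetic hand is moving, a vibration motor over the re-innervated muscle is driven so the wearer ‘feels’ the movement. Here’s a hypothetical sketch of that mapping (the names and the 90 Hz carrier are illustrative choices of mine, not the study’s parameters; tendon-vibration movement illusions are classically evoked at frequencies in the rough neighbourhood of 70 to 100 Hz),

```python
# Hypothetical sketch of the kinesthetic-illusion feedback loop (names
# and numbers are illustrative, not from the study): while the hand is
# moving, a vibration motor over the re-innervated muscle is driven at
# a fixed carrier frequency, with amplitude scaled to movement speed.

def vibration_command(closing_speed, carrier_hz=90.0, max_amplitude=1.0):
    """Map hand movement speed (0..1) to a (frequency_hz, amplitude) drive."""
    moving = abs(closing_speed) > 1e-3   # vibrate only during movement
    amplitude = max_amplitude * min(1.0, abs(closing_speed)) if moving else 0.0
    return carrier_hz, amplitude

print(vibration_command(0.8))   # (90.0, 0.8)
print(vibration_command(0.0))   # (90.0, 0.0)
```

Gating the vibration on movement matters: a constant buzz would feel like a buzz, whereas vibration that tracks the hand’s motion is what the brain reads as the hand itself moving.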

I have an April 24, 2017 posting about a tetraplegic patient who had a number of electrodes implanted in his arms and hands, linked to a brain-machine interface, which allowed him to move his hands and arms; the implants were later removed. It is a different problem with a correspondingly different technological solution, but there does seem to be increased interest in implanting sensors and electrodes into the human body to increase mobility and/or sensation.

Anderson describes how it ‘feels’,

“It was kind of surreal,” Anderson said. “I could visually see the hand go out, I would touch something, I would squeeze it and my phantom hand felt like it was being closed and squeezing on something and it was sending the message back to my brain.

“It was a very strange sensation to actually be able to feel that feedback because I hadn’t in 10 years.”

The feeling of movement in the prosthetic hand is an illusion, the researchers say, since the vibration is actually happening to a muscle elsewhere in the body. But the sensation appeared to have a real effect on the participants.

“They were able to control their grasp function and how much they were opening the hand, to the same degree that someone with an intact hand would,” said study co-author Dr. Jacqueline Hebert, an associate professor in the Faculty of Rehabilitation Medicine at the University of Alberta.

Although the researchers are encouraged by the study findings, they acknowledge that there was a small number of participants, who all had access to the specialized re-innervation surgery to redirect the nerves from their amputated hands to other parts of their body.

The next step, they say, is to see if they can also simulate the feeling of movement in a broader range of people who have had other types of amputations, including legs, and have not had the re-innervation surgery.

Here’s a March 15, 2018 CBC New Brunswick radio interview about the work,

This is a bit longer than most of the embedded audio pieces that I have here, but it’s worth it. Sadly, I can’t identify the interviewer, who did a very good job with Jon Sensinger, associate director of UNB’s Institute of Biomedical Engineering. One more thing: I noticed that the interviewer made no mention of the University of Alberta in her introduction or in the subsequent interview. I gather regionalism reigns supreme everywhere in Canada. Or maybe she and Sensinger just forgot. It happens when you’re excited. Also, there were US institutions in Ohio and Virginia that participated in this work.

Here’s a link to and a citation for the team’s paper,

Illusory movement perception improves motor control for prosthetic hands by Paul D. Marasco, Jacqueline S. Hebert, Jon W. Sensinger, Courtney E. Shell, Jonathon S. Schofield, Zachary C. Thumser, Raviraj Nataraj, Dylan T. Beckler, Michael R. Dawson, Dan H. Blustein, Satinder Gill, Brett D. Mensh, Rafael Granja-Vazquez, Madeline D. Newcomb, Jason P. Carey, and Beth M. Orzell. Science Translational Medicine 14 Mar 2018: Vol. 10, Issue 432, eaao6990 DOI: 10.1126/scitranslmed.aao6990

This paper is open access.

Superprostheses and our science fiction future

A March 20, 2018 news item on phys.org features an essay about superprostheses and/or assistive devices,

Assistive devices may soon allow people to perform virtually superhuman feats. According to Robert Riener, however, there are more pressing goals than developing superhumans.

What had until recently been described as a futuristic vision has become a reality: the first self-declared “cyborgs” have had chips implanted in their bodies so that they can open doors and make cashless payments. The latest robotic hand prostheses succeed in performing all kinds of grips and tasks requiring dexterity. Parathletes fitted with running and spring prostheses compete – and win – against the best, non-impaired athletes. Then there are robotic pets and talking humanoid robots adding a bit of excitement to nursing homes.

Some media are even predicting that these high-tech creations will bring about forms of physiological augmentation overshadowing humans’ physical capabilities in ways never seen before. For instance, hearing aids are eventually expected to offer the ultimate in hearing; retinal implants will enable vision with a sharpness rivalling that of any eagle; motorised exoskeletons will transform soldiers into tireless fighting machines.

Visions of the future: the video game Deus Ex: Human Revolution highlights the emergence of physiological augmentation. (Visualisations: Square Enix) Courtesy: ETH Zurich

Professor Robert Riener uses the image above to illustrate the notion of superprostheses in his March 20, 2018 essay on the ETH Zurich website,

All of these prophecies notwithstanding, our robotic transformation into superheroes will not be happening in the immediate future and can still be filed under Hollywood hero myths. Compared to the technology available today, our bodies are a true marvel whose complexity and performance allows us to perform an extremely wide spectrum of tasks. Hundreds of efficient muscles, thousands of independently operating motor units along with millions of sensory receptors and billions of nerve cells allow us to perform delicate and detailed tasks with tweezers or lift heavy loads. Added to this, our musculoskeletal system is highly adaptable, can partly repair itself and requires only minimal amounts of energy in the form of relatively small amounts of food consumed.

Machines will not be able to match this any time soon. Today’s assistive devices are still laboratory experiments or niche products designed for very specific tasks. Markus Rehm, an athlete with a disability, does not use his innovative spring prosthesis to go for walks or drive a car. Nor can today’s conventional arm prostheses help a person tie their shoes or button up their shirt. Lifting devices used for nursing care are not suitable for helping with personal hygiene tasks or in psychotherapy. And robotic pets quickly lose their charm the moment their batteries die.

Solving real problems

There is no denying that advances continue to be made. Since the scientific and industrial revolutions, we have become dependent on relentless progress and growth, and we can no longer separate today’s world from this development. There are, however, more pressing issues to be solved than creating superhumans.

On the one hand, engineers need to dedicate their efforts to solving the real problems of patients, the elderly and people with disabilities. Better technical solutions are needed to help them lead normal lives and assist them in their work. We need motorised prostheses that also work in the rain and wheelchairs that can manoeuvre even with snow on the ground. Talking robotic nurses also need to be understood by hard-of-hearing pensioners as well as offer simple and dependable interactivity. Their batteries need to last at least one full day to be recharged overnight.

In addition, financial resources need to be available so that all people have access to the latest technologies, such as a high-quality household prosthesis for the family man, an extra prosthesis for the avid athlete or a prosthesis for the pensioner. [emphasis mine]

Breaking down barriers

What is just as important as the ongoing development of prostheses and assistive devices is the ability to minimise or eliminate physical barriers. Where there are no stairs, there is no need for elaborate special solutions like stair lifts or stairclimbing wheelchairs – or, presumably, fully motorised exoskeletons.

Efforts also need to be made to transform the way society thinks about people with disabilities. More acknowledgement of the day-to-day challenges facing patients with disabilities is needed, which requires that people be confronted with the topic of disability when they are still children. Such projects must be promoted at home and in schools so that living with impairments can also attain a state of normality and all people can partake in society. It is therefore also necessary to break down mental barriers.

The road to a virtually superhuman existence is still far and long. Anyone reading this text will not live to see it. In the meantime, the task at hand is to tackle the mundane challenges in order to simplify people’s daily lives in ways that do not require technology, that allow people to be active participants and improve their quality of life – instead of wasting our time getting caught up in cyborg euphoria and digital mania.

I’m struck by Riener’s reference to financial resources and access. Sensinger mentions financial resources in his CBC radio interview although his concern is with convincing funders that prostheses that mimic ‘feeling’ are needed.

I’m also struck by Riener’s discussion about nontechnological solutions for including people with all kinds of abilities and disabilities.

There was no grand plan for combining these two news bits; I just thought they were interesting together.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires that he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here), Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Solar-powered graphene skin for more feeling in your prosthetics

A March 23, 2017 news item on Nanowerk highlights research that could put feeling into a prosthetic limb,

A new way of harnessing the sun’s rays to power ‘synthetic skin’ could help to create advanced prosthetic limbs capable of returning the sense of touch to amputees.

Engineers from the University of Glasgow, who have previously developed an ‘electronic skin’ covering for prosthetic hands made from graphene, have found a way to use some of graphene’s remarkable physical properties to use energy from the sun to power the skin.

Graphene is a highly flexible form of graphite which, despite being just a single atom thick, is stronger than steel, electrically conductive, and transparent. It is graphene’s optical transparency, which allows around 98% of the light which strikes its surface to pass directly through it, which makes it ideal for gathering energy from the sun to generate power.

A March 23, 2017 University of Glasgow press release, which originated the news item, details more about the research,

Ravinder Dahiya

Dr Ravinder Dahiya

A new research paper, published today in the journal Advanced Functional Materials, describes how Dr Dahiya and colleagues from his Bendable Electronics and Sensing Technologies (BEST) group have integrated power-generating photovoltaic cells into their electronic skin for the first time.

Dr Dahiya, from the University of Glasgow’s School of Engineering, said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors which carry signals from the skin to the brain.

“My colleagues and I have already made significant steps in creating prosthetic prototypes which integrate synthetic skin and are capable of making very sensitive pressure measurements. Those measurements mean the prosthetic hand is capable of performing challenging tasks like properly gripping soft materials, which other prosthetics can struggle with. We are also using innovative 3D printing strategies to build more affordable sensitive prosthetic limbs, including the formation of a very active student club called ‘Helping Hands’.

“Skin capable of touch sensitivity also opens the possibility of creating robots capable of making better decisions about human safety. A robot working on a construction line, for example, is much less likely to accidentally injure a human if it can feel that a person has unexpectedly entered their area of movement and stop before an injury can occur.”

The new skin requires just 20 nanowatts of power per square centimetre, which is easily met even by the poorest-quality photovoltaic cells currently available on the market. And although currently energy generated by the skin’s photovoltaic cells cannot be stored, the team are already looking into ways to divert unused energy into batteries, allowing the energy to be used as and when it is required.
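Since the release gives the skin’s draw as 20 nanowatts per square centimetre, it’s worth seeing just how much headroom that leaves. The sketch below is my own back-of-envelope arithmetic; the sunlight intensity and the deliberately pessimistic cell efficiency are assumptions, not figures from the paper.

```python
# Rough power-budget check for the solar-powered skin.
# Only the 20 nW/cm^2 figure comes from the press release; the rest
# (sunlight intensity, cell efficiency) are illustrative assumptions.

skin_demand_w_per_cm2 = 20e-9          # 20 nanowatts per square centimetre

incident_sunlight_w_per_cm2 = 100e-3   # ~100 mW/cm^2, a standard full-sun figure
assumed_cell_efficiency = 0.01         # deliberately pessimistic 1%

pv_output_w_per_cm2 = incident_sunlight_w_per_cm2 * assumed_cell_efficiency
margin = pv_output_w_per_cm2 / skin_demand_w_per_cm2

print(f"PV output : {pv_output_w_per_cm2 * 1e3:.1f} mW/cm^2")
print(f"Headroom  : about {margin:,.0f}x the skin's demand")
```

Even under these unfavourable assumptions, the cell generates tens of thousands of times more power than the skin consumes, which is consistent with the claim that ‘the poorest-quality photovoltaic cells’ suffice and suggests why the team is looking at diverting the surplus into batteries.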

Dr Dahiya added: “The other next step for us is to further develop the power-generation technology which underpins this research and use it to power the motors which drive the prosthetic hand itself. This could allow the creation of an entirely energy-autonomous prosthetic limb.

“We’ve already made some encouraging progress in this direction and we’re looking forward to presenting those results soon. We are also exploring the possibility of building on these exciting results to develop wearable systems for affordable healthcare. In this direction, recently we also got small funds from Scottish Funding Council.”

For more information about this advance and others in the field of prosthetics you may want to check out Megan Scudellari’s March 30, 2017 article for the IEEE’s (Institute of Electrical and Electronics Engineers) Spectrum (Note: Links have been removed),

Cochlear implants can restore hearing to individuals with some types of hearing loss. Retinal implants are now on the market to restore sight to the blind. But there are no commercially available prosthetics that restore a sense of touch to those who have lost a limb.

Several products are in development, including this haptic system at Case Western Reserve University, which would enable upper-limb prosthetic users to, say, pluck a grape off a stem or pull a potato chip out of a bag. It sounds simple, but such tasks are virtually impossible without a sense of touch and pressure.

Now, a team at the University of Glasgow that previously developed a flexible ‘electronic skin’ capable of making sensitive pressure measurements, has figured out how to power their skin with sunlight. …

Here’s a link to and a citation for the paper,

Energy-Autonomous, Flexible, and Transparent Tactile Skin by Carlos García Núñez, William Taube Navaraj, Emre O. Polat, and Ravinder Dahiya. Advanced Functional Materials. DOI: 10.1002/adfm.201606287. Version of Record online: 22 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

A new class of artificial retina

If I read the news release rightly (keep scrolling), this particular artificial retina does not require a device outside the body (e.g. specially developed eyeglasses) to capture an image to be transmitted to the implant. This new artificial retina captures the image directly.

The announcement of a new artificial retina is made in a March 13, 2017 news item on Nanowerk (Note: A link has been removed),

A team of engineers at the University of California San Diego and La Jolla-based startup Nanovision Biosciences Inc. have developed the nanotechnology and wireless electronics for a new type of retinal prosthesis that brings research a step closer to restoring the ability of neurons in the retina to respond to light. The researchers demonstrated this response to light in a rat retina interfacing with a prototype of the device in vitro.

They detail their work in a recent issue of the Journal of Neural Engineering (“Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry”). The technology could help tens of millions of people worldwide suffering from neurodegenerative diseases that affect eyesight, including macular degeneration, retinitis pigmentosa and loss of vision due to diabetes.

Caption: These are primary cortical neurons cultured on the surface of an array of optoelectronic nanowires. Here a neuron is pulling the nanowires, indicating that the cell is doing well on this material. Credit: UC San Diego

A March 13, 2017 University of California at San Diego (UCSD) news release (also on EurekAlert) by Ioana Patringenaru, which originated the news item, details the new approach,

Despite tremendous advances in the development of retinal prostheses over the past two decades, the performance of devices currently on the market to help the blind regain functional vision is still severely limited–well under the acuity threshold of 20/200 that defines legal blindness.

“We want to create a new class of devices with drastically improved capabilities to help people with impaired vision,” said Gabriel A. Silva, one of the senior authors of the work and professor in bioengineering and ophthalmology at UC San Diego. Silva also is one of the original founders of Nanovision.

The new prosthesis relies on two groundbreaking technologies. One consists of arrays of silicon nanowires that simultaneously sense light and electrically stimulate the retina accordingly. The nanowires give the prosthesis higher resolution than anything achieved by other devices–closer to the dense spacing of photoreceptors in the human retina. The other breakthrough is a wireless device that can transmit power and data to the nanowires over the same wireless link at record speed and energy efficiency.

One of the main differences between the researchers’ prototype and existing retinal prostheses is that the new system does not require a vision sensor outside of the eye [emphasis mine] to capture a visual scene and then transform it into alternating signals to sequentially stimulate retinal neurons. Instead, the silicon nanowires mimic the retina’s light-sensing cones and rods to directly stimulate retinal cells. Nanowires are bundled into a grid of electrodes, directly activated by light and powered by a single wireless electrical signal. This direct and local translation of incident light into electrical stimulation makes for a much simpler–and scalable–architecture for the prosthesis.

The power provided to the nanowires from the single wireless electrical signal gives the light-activated electrodes their high sensitivity while also controlling the timing of stimulation.

“To restore functional vision, it is critical that the neural interface matches the resolution and sensitivity of the human retina,” said Gert Cauwenberghs, a professor of bioengineering at the Jacobs School of Engineering at UC San Diego and the paper’s senior author.

Wireless telemetry system

Power is delivered wirelessly, from outside the body to the implant, through an inductive powering telemetry system developed by a team led by Cauwenberghs.

The device is highly energy efficient because it minimizes energy losses in wireless power and data transmission and in the stimulation process, recycling electrostatic energy circulating within the inductive resonant tank, and between capacitance on the electrodes and the resonant tank. Up to 90 percent of the energy transmitted is actually delivered and used for stimulation, which means less RF wireless power emitting radiation in the transmission, and less heating of the surrounding tissue from dissipated power.

The telemetry system is capable of transmitting both power and data over a single pair of inductive coils, one emitting from outside the body, and another on the receiving side in the eye. The link can send and receive one bit of data for every two cycles of the 13.56 megahertz RF signal; other two-coil systems need at least 5 cycles for every bit transmitted.
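Those figures pin down the data rates involved. A quick calculation (mine, not the paper’s) of one bit per two carrier cycles versus one bit per five:

```python
# Data rates implied by the quoted figures: a 13.56 MHz carrier with
# one bit per two cycles (this system) versus one bit per five cycles
# (other two-coil systems). Plain arithmetic on the press-release numbers.

carrier_hz = 13.56e6                     # 13.56 MHz RF carrier

rate_this_system_bps = carrier_hz / 2    # 6.78 Mbit/s
rate_other_systems_bps = carrier_hz / 5  # 2.712 Mbit/s

print(f"This system   : {rate_this_system_bps / 1e6:.3f} Mbit/s")   # 6.780
print(f"Other systems : {rate_other_systems_bps / 1e6:.3f} Mbit/s") # 2.712
```

So the two-cycles-per-bit link carries 2.5 times the data of a conventional two-coil system on the same carrier, before even counting the power it delivers over the same pair of coils.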

Proof-of-concept test

For proof-of-concept, the researchers inserted the wirelessly powered nanowire array beneath a transgenic rat retina with rhodopsin P23H knock-in retinal degeneration. The degenerated retina interfaced in vitro with a microelectrode array for recording extracellular neural action potentials (electrical “spikes” from neural activity).

The horizontal and bipolar neurons fired action potentials preferentially when the prosthesis was exposed to a combination of light and electrical potential–and were silent when either light or electrical bias was absent, confirming the light-activated and voltage-controlled responsivity of the nanowire array.

The wireless nanowire array device is the result of a collaboration between a multidisciplinary team led by Cauwenberghs, Silva and William R. Freeman, director of the Jacobs Retina Center at UC San Diego, UC San Diego electrical engineering professor Yu-Hwa Lo and Nanovision Biosciences.

A path to clinical translation

Freeman, Silva and Scott Thorogood, have co-founded La Jolla-based Nanovision Biosciences, a partner in this study, to further develop and translate the technology into clinical use, with the goal of restoring functional vision in patients with severe retinal degeneration. Animal tests with the device are in progress, with clinical trials following.

“We have made rapid progress with the development of the world’s first nanoengineered retinal prosthesis as a result of the unique partnership we have developed with the team at UC San Diego,” said Thorogood, who is the CEO of Nanovision Biosciences.

Here’s a link to and a citation for the paper,

Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry by Sohmyung Ha, Massoud L Khraiche, Abraham Akinin, Yi Jing, Samir Damle, Yanjin Kuang, Sue Bauchner, Yu-Hwa Lo, William R Freeman, and Gabriel A Silva. Journal of Neural Engineering, Volume 13, Number 5. DOI: https://doi.org/10.1088/1741-2560/13/5/056008

Published 16 August 2016 • © 2016 IOP Publishing Ltd

I’m not sure why they waited so long to make the announcement but, in any event, this paper is behind a paywall.

Bidirectional prosthetic-brain communication with light?

The possibility of not only being able to make a prosthetic that allows a tetraplegic to grab a coffee but to feel that coffee cup with their ‘hand’ is one step closer to reality according to a Feb. 22, 2017 news item on ScienceDaily,

Since the early seventies, scientists have been developing brain-machine interfaces; the main application being the use of neural prosthesis in paralyzed patients or amputees. A prosthetic limb directly controlled by brain activity can partially recover the lost motor function. This is achieved by decoding neuronal activity recorded with electrodes and translating it into robotic movements. Such systems however have limited precision due to the absence of sensory feedback from the artificial limb. Neuroscientists at the University of Geneva (UNIGE), Switzerland, asked whether it was possible to transmit this missing sensation back to the brain by stimulating neural activity in the cortex. They discovered that not only was it possible to create an artificial sensation of neuroprosthetic movements, but that the underlying learning process occurs very rapidly. These findings, published in the scientific journal Neuron, were obtained by resorting to modern imaging and optical stimulation tools, offering an innovative alternative to the classical electrode approach.

A Feb. 22, 2017 Université de Genève press release on EurekAlert, which originated the news item, provides more detail,

Motor function is at the heart of all behavior and allows us to interact with the world. Therefore, replacing a lost limb with a robotic prosthesis is the subject of much research, yet successful outcomes are rare. Why is that? Until this moment, brain-machine interfaces are operated by relying largely on visual perception: the robotic arm is controlled by looking at it. The direct flow of information between the brain and the machine remains thus unidirectional. However, movement perception is not only based on vision but mostly on proprioception, the sensation of where the limb is located in space. “We have therefore asked whether it was possible to establish a bidirectional communication in a brain-machine interface: to simultaneously read out neural activity, translate it into prosthetic movement and reinject sensory feedback of this movement back in the brain”, explains Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE.

Providing artificial sensations of prosthetic movements

In contrast to invasive approaches using electrodes, Daniel Huber’s team specializes in optical techniques for imaging and stimulating brain activity. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single cell resolution. “We wanted to test whether mice could learn to control a neural prosthesis by relying uniquely on an artificial sensory feedback signal”, explains Mario Prsa, researcher at UNIGE and the first author of the study. “We imaged neural activity in the motor cortex. When the mouse activated a specific neuron, the one chosen for neuroprosthetic control, we simultaneously applied stimulation proportional to this activity to the sensory cortex using blue light”. Indeed, neurons of the sensory cortex were rendered photosensitive to this light, allowing them to be activated by a series of optical flashes and thus integrate the artificial sensory feedback signal. The mouse was rewarded upon every above-threshold activation, and 20 minutes later, once the association learned, the rodent was able to more frequently generate the correct neuronal activity.

This means that the artificial sensation was not only perceived, but that it was successfully integrated as a feedback of the prosthetic movement. In this manner, the brain-machine interface functions bidirectionally. The Geneva researchers think that the reason why this fabricated sensation is so rapidly assimilated is because it most likely taps into very basic brain functions. Feeling the position of our limbs occurs automatically, without much thought and probably reflects fundamental neural circuit mechanisms. This type of bidirectional interface might allow in the future more precisely displacing robotic arms, feeling touched objects or perceiving the necessary force to grasp them.
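For readers who find the closed loop easier to follow as code, here is a highly simplified sketch of the protocol described above: read out one neuron, feed back stimulation proportional to its activity, and reward above-threshold activations. Every function name and number is hypothetical; this illustrates the loop’s logic only, not the UNIGE implementation.

```python
import random

# Hypothetical sketch of the bidirectional brain-machine loop described
# in the press release. All names and numbers are illustrative.

REWARD_THRESHOLD = 5.0  # arbitrary activity units for the operant-conditioning step

def read_neuron_activity():
    """Stand-in for two-photon imaging of the one chosen motor-cortex neuron."""
    return random.uniform(0.0, 10.0)

def stimulate_sensory_cortex(intensity):
    """Stand-in for the proportional blue-light (optogenetic) feedback signal."""
    return intensity  # in the experiment this drives a train of optical flashes

def closed_loop_trial():
    activity = read_neuron_activity()
    feedback = stimulate_sensory_cortex(activity)  # sensory feedback channel
    rewarded = activity >= REWARD_THRESHOLD        # reward on above-threshold activation
    return activity, feedback, rewarded

random.seed(0)
trials = [closed_loop_trial() for _ in range(100)]
reward_rate = sum(rewarded for _, _, rewarded in trials) / len(trials)
print(f"Fraction of rewarded trials: {reward_rate:.2f}")
```

The key point the sketch makes explicit is that the read-out and the feedback happen within the same trial, which is what distinguishes this bidirectional interface from the vision-only, unidirectional systems described earlier.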

At present, the neuroscientists at UNIGE are examining how to produce a more efficient sensory feedback. They are currently capable of doing it for a single movement, but is it also possible to provide multiple feedback channels in parallel? This research sets the groundwork for developing a new generation of more precise, bidirectional neural prostheses.

Towards better understanding the neural mechanisms of neuroprosthetic control

By resorting to modern imaging tools, hundreds of neurons in the surrounding area could also be observed as the mouse learned the neuroprosthetic task. “We know that millions of neural connections exist. However, we discovered that the animal activated only the one neuron chosen for controlling the prosthetic action, and did not recruit any of the neighbouring neurons”, adds Daniel Huber. “This is a very interesting finding since it reveals that the brain can home in on and specifically control the activity of just one single neuron”. Researchers can potentially exploit this knowledge to not only develop more stable and precise decoding techniques, but also gain a better understanding of most basic neural circuit functions. It remains to be discovered what mechanisms are involved in routing signals to the uniquely activated neuron.

Caption: A novel optical brain-machine interface allows bidirectional communication with the brain. While a robotic arm is controlled by neuronal activity recorded with optical imaging (red laser), the position of the arm is fed back to the brain via optical microstimulation (blue laser). Credit: © Daniel Huber, UNIGE

Here’s a link to and a citation for the paper,

Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons by Mario Prsa, Gregorio L. Galiñanes, Daniel Huber. Neuron Volume 93, Issue 4, p929–939.e6, 22 February 2017 DOI: http://dx.doi.org/10.1016/j.neuron.2017.01.023 Open access funded by European Research Council

This paper is open access.

Technology, athletics, and the ‘new’ human

There is a tension between Olympic athletes and Paralympic athletes, as some able-bodied athletes feel that Paralympic athletes may have an advantage due to their prosthetics. Roger Pielke Jr. has written a fascinating account of the tensions as a means of asking what it all means. From Pielke Jr.’s Aug. 3, 2016 post on the Guardian Science blogs (Note: Links have been removed),

Athletes are humans too, and they sometimes look for a performance improvement through technological enhancements. In my forthcoming book, The Edge: The War Against Cheating and Corruption in the Cutthroat World of Elite Sports, I discuss a range of technological augmentations to both people and to sports, and the challenges that they pose for rule making. In humans, such improvements can be the result of surgery to reshape (like laser eye surgery) or strengthen (such as replacing a ligament with a tendon) the body to aid performance, or to add biological or non-biological parts that the individual wasn’t born with.

One well-known case of technological augmentation involved the South African sprinter Oscar Pistorius, who ran in the 2012 Olympic Games on prosthetic “blades” below his knees (during happier days for the athlete who is currently jailed in South Africa for the killing of his girlfriend, Reeva Steenkamp). Years before the London Games Pistorius began to have success on the track running against able-bodied athletes. As a consequence of this success and Pistorius’s interest in competing at the Olympic games, the International Association of Athletics Federations (or IAAF, which oversees elite track and field competitions) introduced a rule in 2007, focused specifically on Pistorius, prohibiting the “use of any technical device that incorporates springs, wheels, or any other element that provides the user with an advantage over another athlete not using such a device.” Under this rule, Pistorius was determined by the IAAF to be ineligible to compete against able-bodied athletes.

Pistorius appealed the decision to the Court of Arbitration for Sport. The appeal hinged on answering a metaphysical question—how fast would Pistorius have run had he been born with functioning legs below the knee? In other words, did the blades give him an advantage over other athletes that the hypothetical, able-bodied Oscar Pistorius would not have had? Because there never was an able-bodied Pistorius, the CAS looked to scientists to answer the question.

CAS concluded that the IAAF was in fact fixing the rules to prevent Pistorius from competing and that “at least some IAAF officials had determined that they did not want Mr. Pistorius to be acknowledged as eligible to compete in international IAAF-sanctioned events, regardless of the results that properly conducted scientific studies might demonstrate.” CAS determined that it was the responsibility of the IAAF to show “on the balance of probabilities” that Pistorius gained an advantage by running on his blades. CAS concluded that the research commissioned by the IAAF did not show conclusively such an advantage.

As a result, CAS ruled that Pistorius was able to compete in the London Games, where he reached the semifinals of the 400 meters. CAS concluded that resolving such disputes “must be viewed as just one of the challenges of 21st Century life.”

The story does not end with Oscar Pistorius, as Pielke Jr. notes. There has been another challenge, this time by Markus Rehm, a German long-jumper who leaps off a prosthetic leg. Interestingly, the rules have changed since Oscar Pistorius won his case (Note: Links have been removed),

In the Pistorius case, under the rules for inclusion in the Olympic games the burden of proof had been on the IAAF, not the athlete, to demonstrate the presence of an advantage provided by technology.

This precedent was overturned in 2015, when the IAAF quietly introduced a new rule that in such cases reverses the burden of proof. The switch placed the burden of proof on the athlete instead of the governing body. The new rule—which we might call the Rehm Rule, given its timing—states that an athlete with a prosthetic limb (specifically, any “mechanical aid”) cannot participate in IAAF events “unless the athlete can establish on the balance of probabilities that the use of an aid would not provide him with an overall competitive advantage over an athlete not using such aid.” This new rule effectively slammed the door on participation in the Olympic Games by Paralympians with prosthetics.

Even if an athlete might have the resources to enlist researchers to carefully study his or her performance, the IAAF requires the athlete to do something that is very difficult, and often altogether impossible—to prove a negative.

If you have the time, I encourage you to read Pielke Jr.’s piece in its entirety as he notes the secrecy with which the Rehm rule was implemented and the implications for the future. Here’s one last excerpt (Note: A link has been removed),

We may be seeing only the beginning of debates over technological augmentation and sport. Silvia Camporesi, an ethicist at King’s College London, observed: “It is plausible to think that in 50 years, or maybe less, the ‘natural’ able-bodied athletes will just appear anachronistic.” She continues: “As our concept of what is ‘natural’ depends on what we are used to, and evolves with our society and culture, so does our concept of ‘purity’ of sport.”

I have written many times about human augmentation, and the possibility that what is now viewed as a ‘normal’ body may one day be viewed as subpar or inferior is not all that farfetched. David Epstein’s 2014 TED talk “Are athletes really getting faster, better, stronger?” points out that, in addition to sports technology innovations, athletes’ bodies have changed considerably since the beginning of the 20th century. He doesn’t discuss body augmentation but it seems increasingly likely not just for athletes but for everyone.

As for athletes and augmentation, Epstein has an Aug. 7, 2016 Scientific American piece published on Salon.com in time for the 2016 Summer Olympics in Rio de Janeiro,

I knew Eero Mäntyranta had magic blood, but I hadn’t expected to see it in his face. I had tracked him down above the Arctic Circle in Finland where he was — what else? — a reindeer farmer.

He was all red. Not just the crimson sweater with knitted reindeer crossing his belly, but his actual skin. It was cardinal dappled with violet, his nose a bulbous purple plum. In the pictures I’d seen of him in Sports Illustrated in the 1960s — when he’d won three Olympic gold medals in cross-country skiing — he was still white. But now, as an older man, his special blood had turned him red.

Mäntyranta had about 50 percent more red blood cells than a normal man. If Armstrong [Lance Armstrong, cyclist] had as many red blood cells as Mäntyranta, cycling rules would have barred him from even starting a race, unless he could prove it was a natural condition.

During his career, Mäntyranta was accused of doping after his high red blood cell count was discovered. Two decades after he retired, Finnish scientists found his family’s mutation. …

Epstein also covers the Pistorius story, albeit with more detail about the science and controversy of determining whether someone with prosthetics may have an advantage over an able-bodied athlete. Scientists don’t agree about whether or not there is an advantage.

I have many other posts on the topic of augmentation. You can find them under the Human Enhancement category and you can also try the tag, machine/flesh.

Prosthetics in North Carolina and in Vancouver, Canada

North Carolina

This is the first time I’ve seen any kind of hand prosthetic offering finger control. From a May 31, 2016 OrthoCarolina news release (received via email),

Two OrthoCarolina hand surgeons have successfully completed the first surgery to allow for a prosthetic hand with individual finger control on an amputee patient. Partnering with OrthoCarolina Research Institute (OCRI) in pursuit of medical breakthroughs through orthopedic research, Drs. Glenn Gaston and Bryan Loeffler conceptualized and performed the procedure involving transferring existing muscle from the fingers to the back of the hand and wrist without damaging the nerves and blood vessels to the muscles. The patient who underwent the test surgery is now able to control individual prosthetic fingers using the same muscles that controlled his fingers pre-amputation, making him the first person in the world to have individual digit control in a functioning myoelectric prosthesis.

“Patients who have sustained full or partial hand amputations obviously have significant morbidity and limited function, which is a challenge. Because of the limited number of muscles available after a hand amputation, prostheses have previously allowed only control of the thumb and fingers as a group and single finger control was never possible,” said Dr. Glenn Gaston.  “The severity of this patient’s injury was so great that replanting the lost fingers was not possible, so we collaborated on a new surgery that would allow him to have individual digital control.”

Hypothesizing that existing muscle in the back of human fingers could be transferred to the back of the hand and wrist without damaging the nerves and blood vessels to those muscles, Drs. Gaston and Loeffler first performed cadaveric testing to ensure feasibility. The goal of the initial project was for the small muscles that control individual fingers to regain control of prosthetic fingers by maintaining enough blood and nerve supply to allow the prosthetic limb to recognize individual digits.

With successful research completed, they collaborated with the Hanger Clinic to determine how much bone would be required to be removed from the hand, allowing the prosthetic componentry enough space to maintain a normal hand length.

The two surgeons jointly performed the surgery as a pilot case on a partial hand amputee, moving the muscles while still allowing the prosthesis to detect signals from the transferred muscles; a procedure never before reported in orthopedic literature.

“Imagine the limitations you would have if all of your fingers had to move as one unit, and then suddenly you were able to move individual fingers to perform specific actions,” said Dr. Bryan Loeffler. “This muscle transfer is a breakthrough that could impact how upper extremity amputees are managed and specific amputations are done in the future.”

Drs. Loeffler and Gaston have completed a cadaver model demonstrating the capability of the same type of surgery for a more proximal total hand amputation. They presented their research in a podium presentation at the First International Symposium on Innovations in Amputation Surgery and Prosthetic Technologies (IASPT), May 12-13, 2016 in Chicago.

OrthoCarolina Research Institute is an independent non-profit committed to the advancement of orthopedic practice through clinical research. OCRI will continue to support this ground-breaking research and the manufacturing of this cutting edge prosthesis. “This is a tremendous example of the life-changing impact that orthopedic research plays in advancing patient outcomes,” said Christi Cadd, Executive Director of OCRI.

You can find out more about OrthoCarolina here.

Vancouver, Canada

While they celebrate exciting prosthetic news in North Carolina, those of us in Vancouver have been given the opportunity to view an unusual display of vintage artificial limbs (prosthetics) in an exhibition, All Together Now, featuring a number of rarely seen private collections including corsets, Chinese restaurant menus, and pinball machines. From a June 22, 2016 article by Janet Smith for the Georgia Straight, here’s more about the prosthetic collection,

For those unfamiliar, the lifelike artificial legs and arms that hang on the Museum of Vancouver’s wall might seem like medical oddities from a less advanced era.

But for collector David Moe, a certified prosthetist, they are integral, inspiring pieces for his career, his teaching, and his workspace.

“I love them all,” he says with enthusiasm, standing in the museum’s giant new exhibit All Together Now: Vancouver Collectors and Their World, in a corner of an expansive, cabinet-of-curiosities-styled room that houses everything from scores of local Chinese-restaurant menus to rows of 19th-century corsets and a glass case full of hundreds of action figures. “It’s very strange because they have been all around me for so long and they have sat in predominant spaces at work—they sit on the top of a shelf. So when I walk back in there right now there are these kinds of empty holes.

“But I’m happy to have them on display and to let people think about what they see and have the opportunity to have them think about prosthetics. Because nobody ever thinks about them until they need one.”

Moe began collecting almost from his start, at the age of 14, when he worked sweeping floors and pouring plaster at Northern Alberta Prosthetic & Orthotic Services, his family’s business in Edmonton. One of his first big finds was a leg that sits in the exhibit today—a meticulously carved wooden limb covered in smooth skin-tone leather, dating back to the 1930s. At the time, he recognized the craftsmanship and tucked it away where it wouldn’t disappear; today he still marvels at the anatomical design, with a hinged knee that bends with the use of straps.

“… . The math is the math. But we’ve moved so far. I really love where we’ve come from,” says Moe, gesturing to the vintage pieces he uses regularly to teach students at BCIT [British Columbia Institute of Technology]. He says he can appreciate the human touch and deep care that went into each one, then adds: “All of these were used by people, so the energy of these people is in these. I feel that responsibility of these people in here.”

To show how far his specialty has come, though, Moe has juxtaposed the historic limbs with modern-day advances—decorative limb coverings with fashionable latticework, or a kids’ shin piece that’s been emblazoned with a comic-book image of Superman. Now, instead of trying to just mimic natural limbs, some people are opting for statement pieces that actually draw attention to their prosthetic. “This empowers them in this powerless situation where someone has amputated your leg,” he notes.

As with other exhibits in All Together Now, there are audiovisuals that accompany his collection—in this case showing people using the advanced limbs of today, from a female triathlete carrying her baby to another client playing competitive volleyball.

“When someone does the Grouse Grind or, hell, just walks their child down the street, that’s when they come alive. We’re rebuilding lives, not pieces,” Moe says.

You can find out more about All Together Now here,

All Together Now: Vancouver Collectors and Their Worlds features 20 beautiful, rare, and unconventional collections, with something for everyone including corsets, prosthetics, pinball machines, taxidermy, toys, and much more. In this exhibition both collector and collected are objects of study, interaction, and delight.

The exhibition runs until January 8, 2017. The last Thursday of the month is by donation from 5 pm to 8 pm. More information about admission can be found here and you might also want to check out the exhibition’s Events page.

10- to 15-year-olds as superhero cyborgs

It’s not the first time someone’s tried to redesign a prosthetic (an Aug. 7, 2009 posting touched on reimagining prosthetic arms and other topics) but it’s the first project I’ve seen where children are the featured designers. A Jan. 27, 2016 article by Emily Price for The Guardian describes the idea,

In a hidden room in the back of a pier overlooking the San Francisco Bay, a young girl shoots glitter across the room with a flick of her wrist. On the other side of the room, a boy is shooting darts from his wrist – some travelling at least 20ft high, onto a landing above. It feels like a superhero training center or a party for the next generation of X-Men and, in a way, it is.

This is Superhero Cyborgs, an event that brings six children together with 3D design specialists and augmentation experts to create unique prosthetics that will turn each child into a kind of superhero.

The children are aged between 10 and 15 and all have upper-limb differences, having either been born without a hand or having lost a limb. They are spending five days with prosthetics experts and a design team from 3D software firm Autodesk, creating prosthetics that turn a replacement hand into something much more special.

“We started asking: ‘Why are we trying to replicate the functionality of a hand?’ when we could really do anything. Things that are way cooler that hands aren’t able to do,” says Kate Ganim, co-founder and co-director at KidMob, the nonprofit group that organised this project in partnership with San Rafael, California 3D software firm Autodesk. KidMob first ran this type of project at Rhode Island’s Brown University in 2014.

Details of each superhero prosthetic are being posted on the DIY site Instructables and hacking site Project Ignite in the hope that it inspires other groups, schools and individuals to follow suit. “A classroom might work on building a project and then donate a finished hand to someone they know or appoint it to someone in the community who is in need,” O’Rourke said.

I searched the Project Ignite website using the term ‘superhero cyborg’ and did not receive a single hit. I also used the search term on the Instructables website and got many hits but did not see one that resembled any of the project descriptions in Price’s article. Unfortunately, Price did not offer any suggestions for search terms.

Getting back to the project, Jessica Hullinger has written a March 28, 2016 article about Superhero Cyborgs for Fast Company where she follows one of the participants (Note: Links have been removed),

Jordan [Jordan Reeves, a 10-year-old from Columbia, Missouri] was born with a limb difference: her left arm stops just above the elbow. When she found out she was headed to the Superhero Cyborg workshop, she was over the moon. “I was like, ‘Wow, I can’t believe I’m actually doing this,'” she says.

Over the course of five days, she and five other kids between the ages of 10 and 15 worked with design experts and engineers from Autodesk to brainstorm ideas. “Basically, if they could design the prosthetic or body modification of their dreams in a superhero context, what would that look like?” asks Sarah O’Rourke, a senior product marketing manager with Autodesk.

For Jordan, it looks very sparkly. Her plan was to transform her arm into a cannon that spread a delightful cloud of glitter wherever she went. She started with a few sketches. Then she created a 3-D-printed cast of her arm and a plastic cuff made to fit over it, for prototyping purposes. The kids used Autodesk’s 3-D design tools like TinkerCAD and Fusion 360 to test their prototypes. …

“For us, our interest is in getting kids familiar with taking an idea from concept to execution and learning the skills along the way to do that,” says Ganim. “Ideally, it’s not about the end product they end up with out of workshop; it’s more about realizing they’re not just subject to what’s available on the market. It creates this interesting closed loop system where they’re both designer and end user. That is very powerful.”

The workshop is over now but the children will continue for a few months working on their designs and, in some cases, creating prostheses that can have practical applications.

You can find out more about Superhero Cyborgs in a Feb. 7, 2016 posting on the KIDmob website blog,

Sydney: A dual water gun shooter that will automatically refill itself

I got more information on KIDmob on the About page,

KIDmob is the mobile, kid-integrated design firm. We are a Bay Area fiscally sponsored not-for-profit organization that believes design education is an opportunity for creative engagement and community empowerment. We take our passion on the road to bring our innovative approach to local communities around the world.

We engage in the design process through project-based learning. KIDmob workshops use the design process as a beginning curriculum framework on which to build a customized local project brief, based on a partner-identified need. Our workshops facilitate partners in devising imaginative solutions for their community, by their community. We strive to foster local stewardship within all of our projects.

We promote an energetic, hands-on approach to learning – our workshops create an immersive environment of moving, shaking, sketching, whirling, splatting, slicing, sawing, jitterbugging creativity. When we are not swimming in post-it notes, we like to explore all kinds of technologies, from pencils to circuitry mills, as tools for creative expression.