Tag Archives: prosthetics

Robot skin that feels heat, pain, and pressure

This June 17, 2025 news item on ScienceDaily announces research into developing robot skin that more closely mimics skin (human and otherwise),

Scientists have developed a low-cost, durable, highly-sensitive robotic ‘skin’ that can be added to robotic hands like a glove, enabling robots to detect information about their surroundings in a way that’s similar to humans.

The researchers, from the University of Cambridge and University College London (UCL), developed the flexible, conductive skin, which is easy to fabricate and can be melted down and formed into a wide range of complex shapes. The technology senses and processes a range of physical inputs, allowing robots to interact with the physical world in a more meaningful way.

A June 11, 2025 University of Cambridge news release (also on EurekAlert) by Sarah Collins, which originated the news item, describes what makes this work a breakthrough,

Unlike other solutions for robotic touch, which typically work via sensors embedded in small areas and require different sensors to detect different types of touch, the entirety of the electronic skin developed by the Cambridge and UCL researchers is a sensor, bringing it closer to our own sensor system: our skin.  

Although the robotic skin is not as sensitive as human skin, it can detect signals from over 860,000 tiny pathways in the material, enabling it to recognise different types of touch and pressure – like the tap of a finger, a hot or cold surface, damage caused by cutting or stabbing, or multiple points being touched at once – in a single material.

The researchers used a combination of physical tests and machine learning techniques to help the robotic skin ‘learn’ which of these pathways matter most, so it can sense different types of contact more efficiently.

In addition to potential future applications for humanoid robots or human prosthetics where a sense of touch is vital, the researchers say the robotic skin could be useful in industries as varied as the automotive sector or disaster relief. The results are reported in the journal Science Robotics.

Electronic skins work by converting physical information – like pressure or temperature – into electronic signals. In most cases, different types of sensors are needed for different types of touch – one type of sensor to detect pressure, another for temperature, and so on – which are then embedded into soft, flexible materials. However, the signals from these different sensors can interfere with each other, and the materials are easily damaged.

“Having different sensors for different types of touch leads to materials that are complex to make,” said lead author Dr David Hardman from Cambridge’s Department of Engineering. “We wanted to develop a solution that can detect multiple types of touch at once, but in a single material.”

“At the same time, we need something that’s cheap and durable, so that it’s suitable for widespread use,” said co-author Dr Thomas George Thuruthel from UCL.

Their solution uses one type of sensor that reacts differently to different types of touch, known as multi-modal sensing. While it’s challenging to separate out the cause of each signal, multi-modal sensing materials are easier to make and more robust.

The researchers melted down a soft, stretchy and electrically conductive gelatine-based hydrogel, and cast it into the shape of a human hand. They tested a range of different electrode configurations to determine which gave them the most useful information about different types of touch. From just 32 electrodes placed at the wrist, they were able to collect over 1.7 million pieces of information over the whole hand, thanks to the tiny pathways in the conductive material.

The skin was then tested on different types of touch: the researchers blasted it with a heat gun, pressed it with their fingers and a robotic arm, gently touched it with their fingers, and even cut it open with a scalpel. The team then used the data gathered during these tests to train a machine learning model so the hand would recognise what the different types of touch meant. 
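For readers curious what that training step can look like in practice, here is a minimal, purely illustrative sketch of classifying a touch type from multichannel readings. Everything in it (the channel count, the labels, the synthetic data, and the nearest-centroid classifier) is my own stand-in, not the researchers' code or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each "frame" is a vector of readings from
# conductive pathways measured via the wrist electrodes. Real data would
# come from the hydrogel skin; here we synthesize separable clusters.
N_CHANNELS = 64          # illustrative; the real skin has over 860,000 pathways
TOUCH_TYPES = ["tap", "press", "heat", "cut"]

def make_frames(label_idx, n=50):
    centre = np.zeros(N_CHANNELS)
    centre[label_idx::len(TOUCH_TYPES)] = 1.0  # each touch type gets its own signature
    return centre + 0.1 * rng.standard_normal((n, N_CHANNELS))

X = np.vstack([make_frames(i) for i in range(len(TOUCH_TYPES))])
y = np.repeat(np.arange(len(TOUCH_TYPES)), 50)

# Nearest-centroid classification: the simplest version of 'learning which
# pathways matter' -- each touch type is summarized by its mean response pattern.
centroids = np.stack([X[y == i].mean(axis=0) for i in range(len(TOUCH_TYPES))])

def classify(frame):
    return int(np.argmin(np.linalg.norm(centroids - frame, axis=1)))

test_frame = make_frames(2, n=1)[0]       # a synthetic "heat" frame
print(TOUCH_TYPES[classify(test_frame)])  # prints "heat" for this synthetic data
```

The real system, of course, uses far more pathways and more capable machine learning models, but the shape of the problem (many noisy channels in, one touch label out) is the same.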

“We’re able to squeeze a lot of information from these materials – they can take thousands of measurements very quickly,” said Hardman, who is a postdoctoral researcher in the lab of co-author Professor Fumiya Iida. “They’re measuring lots of different things at once, over a large surface area.”

“We’re not quite at the level where the robotic skin is as good as human skin, but we think it’s better than anything else out there at the moment,” said Thuruthel. “Our method is flexible and easier to build than traditional sensors, and we’re able to calibrate it using human touch for a range of tasks.”

In future, the researchers are hoping to improve the durability of the electronic skin, and to carry out further tests on real-world robotic tasks.

The research was supported by Samsung Global Research Outreach Program, the Royal Society, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Fumiya Iida is a Fellow of Corpus Christi College, Cambridge.

Here’s a link to and a citation for the paper,

Multimodal information structuring with single-layer soft skins and high-density electrical impedance tomography by David Hardman, Thomas George Thuruthel, and Fumiya Iida. Science Robotics 11 Jun 2025 Vol 10, Issue 103 DOI: 10.1126/scirobotics.adq2303

This paper is behind a paywall.

Pioneering bionic hand achieves human-like grip on plush toys, water bottles, and other everyday objects

This is not a biohybrid hand incorporating ‘living’ and nonliving materials but a hybrid hand incorporating soft and rigid robotics.

A March 5, 2025 news item on ScienceDaily announces work from Johns Hopkins University (JHU; Maryland, US),

Johns Hopkins University engineers have developed a pioneering prosthetic hand that can grip plush toys, water bottles, and other everyday objects like a human, carefully conforming and adjusting its grasp to avoid damaging or mishandling whatever it holds.

The system’s hybrid design is a first for robotic hands, which have typically been too rigid or too soft to replicate a human’s touch when handling objects of varying textures and materials. The innovation offers a promising solution for people with hand loss and could improve how robotic arms interact with their environment.

A March 5, 2025 Johns Hopkins University (JHU) news release (also on EurekAlert), which originated the news item, provides more details, Note: Links have been removed,

“The goal from the beginning has been to create a prosthetic hand that we model based on the human hand’s physical and sensing capabilities—a more natural prosthetic that functions and feels like a lost limb,” said Sriramana Sankar, a Johns Hopkins biomedical engineer who led the work. “We want to give people with upper-limb loss the ability to safely and freely interact with their environment, to feel and hold their loved ones without concern of hurting them.”


The device, developed by the same Neuroengineering and Biomedical Instrumentations Lab that in 2018 created the world’s first electronic “skin” with a humanlike sense of pain [mentioned here in a December 14, 2018 posting], features a multifinger system with rubberlike polymers and a rigid 3D-printed internal skeleton. Its three layers of tactile sensors, inspired by the layers of human skin, allow it to grasp and distinguish objects of various shapes and surface textures, rather than just detect touch. Each of its soft air-filled finger joints can be controlled with the forearm’s muscles, and machine learning algorithms focus the signals from the artificial touch receptors to create a realistic sense of touch, Sankar said. “The sensory information from its fingers is translated into the language of nerves to provide naturalistic sensory feedback through electrical nerve stimulation.”

In the lab, the hand identified and manipulated 15 everyday objects, including delicate stuffed toys, dish sponges, and cardboard boxes, as well as pineapples, metal water bottles, and other sturdier items. In the experiments, the device achieved the best performance compared with the alternatives, successfully handling objects with 99.69% accuracy and adjusting its grip as needed to prevent mishaps. The best example was when it nimbly picked up a thin, fragile plastic cup filled with water, using only three fingers without denting it.

“We’re combining the strengths of both rigid and soft robotics to mimic the human hand,” Sankar said. “The human hand isn’t completely rigid or purely soft—it’s a hybrid system, with bones, soft joints, and tissue working together. That’s what we want our prosthetic hand to achieve. This is new territory for robotics and prosthetics, which haven’t fully embraced this hybrid technology before. It’s being able to give a firm handshake or pick up a soft object without fear of crushing it.”

To help amputees regain the ability to feel objects while grasping, prostheses will need three key components: sensors to detect the environment, a system to translate that data into nerve-like signals, and a way to stimulate nerves so the person can feel the sensation, said Nitish Thakor, a Johns Hopkins biomedical engineering professor who directed the work.

The bioinspired technology allows the hand to function this way, using muscle signals from the forearm, like most hand prostheses. These signals bridge the brain and nerves, allowing the hand to flex, release, or react based on its sense of touch. The result is a robotic hand that intuitively “knows” what it’s touching, much like the nervous system does, Thakor said.

“If you’re holding a cup of coffee, how do you know you’re about to drop it? Your palm and fingertips send signals to your brain that the cup is slipping,” Thakor said. “Our system is neurally inspired—it models the hand’s touch receptors to produce nervelike messages so the prosthetics’ ‘brain,’ or its computer, understands if something is hot or cold, soft or hard, or slipping from the grip.”

While the research is an early breakthrough for hybrid robotic technology that could transform both prosthetics and robotics, more work is needed to refine the system, Thakor said. Future improvements could include stronger grip forces, additional sensors, and industrial-grade materials.

“This hybrid dexterity isn’t just essential for next-generation prostheses,” Thakor said. “It’s what the robotic hands of the future need because they won’t just be handling large, heavy objects. They’ll need to work with delicate materials such as glass, fabric, or soft toys. That’s why a hybrid robot, designed like the human hand, is so valuable—it combines soft and rigid structures, just like our skin, tissue, and bones.” 

Other authors include Wen-Yu Cheng of Florida Atlantic University; Jinghua Zhang, Ariel Slepyan, Mark M. Iskarous, Rebecca J. Greene, Rene DeBrabander, and Junjun Chen of Johns Hopkins; and Arnav Gupta of the University of Illinois Chicago.

Here’s a link to and a citation for the paper,

A natural biomimetic prosthetic hand with neuromorphic tactile sensing for precise and compliant grasping by Sriramana Sankar, Wen-Yu Cheng, Jinghua Zhang, Ariel Slepyan, Mark M. Iskarous, Rebecca J. Greene, Rene DeBrabander, Junjun Chen, Arnav Gupta, and Nitish V. Thakor. Science Advances 5 Mar 2025 Vol 11, Issue 10 DOI: 10.1126/sciadv.adr9300

This paper is open access.

Mind-reading prosthetic limbs

In a December 21, 2022 Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) press release (also on EurekAlert), problems with current neuroprostheses are described in the context of a new research project intended to solve them,

Lifting a glass, making a fist, entering a phone number using the index finger: it is amazing the things cutting-edge robotic hands can already do thanks to biomedical technology. However, things that work in the laboratory often encounter stumbling blocks when put to practice in daily life. The problem is the vast diversity of the intentions of each individual person, their surroundings and the things that can be found there, making a one size fits all solution all but impossible. A team at FAU is investigating how intelligent prostheses can be improved and made more reliable. The idea is that interactive artificial intelligence will help the prostheses to recognize human intent better, to register their surroundings and to continue to develop and improve over time. The project is to receive 4.5 million euros in funding from the EU, with FAU receiving 467,000 euros.

“We are literally working at the interface between humans and machines,” explains Prof. Dr. Claudio Castellini, professor of medical robotics at FAU. “The technology behind prosthetics for upper limbs has come on in leaps and bounds over the past decades.” Using surface electromyography, for example, skin electrodes at the remaining stump of the arm can detect the slightest muscle movements. These biosignals can be converted and transferred to the prosthetic limb as electrical impulses. “The wearer controls their artificial hand themselves using the stump. Methods taken from pattern recognition and interactive machine learning also allow people to teach their prosthetic their own individual needs when making a gesture or a movement.”
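For readers wondering what “pattern recognition” on surface electromyography signals looks like in code, here is a hedged, minimal sketch of one classic pipeline: windowed root-mean-square (RMS) features fed to a template matcher. The channel count, gesture names, and synthetic signals are all illustrative assumptions of mine, not material from the FAU project:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: 8 surface-EMG channels on the forearm stump.
N_CHANNELS = 8
GESTURES = ["rest", "fist", "point"]

def rms_features(window):
    """Root-mean-square amplitude per channel over one time window."""
    return np.sqrt((window ** 2).mean(axis=0))

def make_window(gesture_idx, n_samples=200):
    # Each synthetic gesture activates a different subset of channels strongly.
    amps = np.full(N_CHANNELS, 0.1)
    amps[gesture_idx * 2:(gesture_idx * 2) + 3] = 1.0
    return amps * rng.standard_normal((n_samples, N_CHANNELS))

# "Training": store the mean RMS pattern per gesture (template matching).
templates = {g: np.mean([rms_features(make_window(i)) for _ in range(20)], axis=0)
             for i, g in enumerate(GESTURES)}

def detect_intent(window):
    feats = rms_features(window)
    return min(templates, key=lambda g: np.linalg.norm(templates[g] - feats))

print(detect_intent(make_window(1)))  # prints "fist" for this synthetic data
```

The interactive machine learning described in the press release goes further: the wearer can re-record templates (or retrain a model) on the fly, so the prosthesis adapts to their individual movement patterns.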

The advantages of AI over purely cosmetic prosthetics

At present, advanced robotic prosthetics have not yet reached optimal standards in terms of comfort, function and control, which is why many people with missing limbs still often prefer purely cosmetic prosthetics with no additional functions. The new EU Horizon project “AI-Powered Manipulation System for Advanced Robotic Service, Manufacturing and Prosthetics (IntelliMan)” therefore focuses on how these can interact with their environment even more effectively and for a specific purpose.

Researchers at FAU concentrate in particular on how to improve control of both real and virtual prosthetic upper limbs. The focus is on what is known as intent detection. Prof. Castellini and his team are continuing work on recording and analyzing human biosignals, and are designing innovative algorithms for machine learning aimed at detecting the individual movement patterns of individuals. User studies conducted on test persons both with and without physical disabilities are used to validate their results. Furthermore, FAU is also leading the area “Shared autonomy between humans and robots” in the EU project, aimed at checking the safety of the results.

At the interface between humans and machines

Prof. Castellini heads the “Assistive Intelligent Robotics” lab (AIROB) at FAU that focuses on controlling assistive robotics for the upper and lower limbs as well as functional electrostimulation. “We are exploiting the potential offered by intent detection to control assistive and rehabilitative robotics,” explains the researcher. “This covers wearable robots worn on the body such as prosthetics and exoskeletons, but also robot arms and simulations using virtual reality.” The professorship focuses particularly on biosignal processing of various sensor modalities and methods of machine learning for intent detection, in other words research directly at the interface between humans and machines.

In his previous research at the German Aerospace Center (DLR), where he was based until 2021, Castellini investigated the question of how virtual hand prosthetics could help amputees cope with phantom pain. Alongside Castellini, doctoral candidate Fabio Egle, a research associate at the professorship, is also actively involved in the IntelliMan project. The FAU share of the EU project will receive funding of 467,000 euros over a period of three and a half years, while the overall budget amounts to 6 million euros. The IntelliMan project is coordinated by the University of Bologna; the DLR, the Polytechnic University of Catalonia, the University of Genoa, Luigi Vanvitelli University in Campania, and the Bavarian Research Alliance (BayFOR) are also involved.

Good luck to the team!

Neural and technological inequalities

I’m always happy to see discussions about the social implications of new and emerging technologies. In this case, the discussion was held at the Fast Company (magazine) European Innovation Festival. KC Ifeanyi wrote a July 10, 2019 article for Fast Company highlighting a session between two scientists focusing on what I’ve termed ‘machine/flesh’, or what is sometimes called a cyborg, although neither scientist uses that term (Note: A link has been removed),

At the Fast Company European Innovation Festival today, scientists Moran Cerf and Riccardo Sabatini had a wide-ranging discussion on the implications of technology that can hack humanity. From ethical questions to looking toward human biology for solutions, here are some of the highlights:

The ethics of ‘neural inequality’

There are already chips that can be implanted in the brain to help recover bodily functions after a stroke or brain injury. However, what happens if (more likely when) a chip in your brain can be hacked or even gain internet access, essentially making it possible for some people (more likely wealthy people) to process information much more quickly than others?

“It’s what some call neural inequality,” says Cerf, a neuroscientist and business professor at the Kellogg School of Management and at the neuroscience program at Northwestern University. …

Opening new pathways to thought through bionics

Cerf mentioned a colleague who was born without a left hand and engineered a bionic one that he can control with an app and that has the functionality of doing things no human hand can do, like rotating 360 degrees. As fun of a party trick as that is, Cerf brings up a good point in that his colleague’s brain is processing something we can’t, thereby possibly opening new pathways of thought.

“The interesting thing, and this is up to us to investigate, is his brain can think thoughts that you cannot think [emphasis mine] because he has a function you don’t have,” Cerf says. …

The innovation of your human body

As people look to advanced bionics to amplify their senses or abilities, Sabatini, chief data scientist at Orionis Biosciences, makes the argument that our biological bodies are far more advanced than we give them credit for. …

Democratizing tech’s edges

Early innovation so often comes with a high price tag. The cost of experimenting with nascent technology or running clinical trials can be exorbitant. And Sabatini believes democratizing that part of the process is where the true innovation will be. …

Earlier technology that changed our thinking and thoughts

This isn’t the first time that technology has altered our thinking and the kinds of thoughts we have, as per “his brain can think thoughts that you cannot think.” According to Walter J. Ong’s 1982 book, ‘Orality and Literacy’, that’s exactly what writing did to us.

It took me quite a while to understand ‘writing’ as a technology, largely due to how much I took it for granted. Once I made that leap, it changed how I understood the word technology. Then, the idea that ‘writing’ could change your brain didn’t require as dramatic a leap although it fundamentally altered my concept of the relationship between technology and humans. Up to that time, I had viewed technology as an instrument that allowed me to accomplish goals (e.g., driving a car from point a to point b) but it had very little impact on me as a person.

You can find out more about Walter J. Ong and his work in his Wikipedia entry. Pay special attention to the section on Orality and Literacy.

Who’s talking about technology and our thinking?

The article about the scientists (Cerf and Sabatini) at the Fast Company European Innovation Festival (held July 9 -10, 2019 in Milan, Italy) never mentions cyborgs. Presumably, neither did Sabatini or Cerf. It seems odd. Two thinkers were discussing ‘neural inequality’ and there was no mention of a cyborg (human and machine joined together).

Interestingly, the lead sponsor for this innovation festival was Gucci. That company would not have been my first guess, or any guess for that matter, as having an interest in neural inequality.

So, Gucci sponsored a festival that was not cheap: a two-day pass was $1,600 (early birds got a discount of $457) and a ‘super’ pass was $2,229 (with an early bird discount of $629). You didn’t get into the room unless you had a fair chunk of change and time.

The tension of talking about inequality at a festival or other venue that most people can’t afford to attend is discussed at more length in Anand Giridharadas’s 2018 book, ‘Winners Take All: The Elite Charade of Changing the World’.

It’s not just who gets to discuss ‘neural inequality’, it’s when you get to discuss it, which affects how the discussion is framed.

There aren’t any easy answers to these questions, but I find the easy assumption that the wealthy and the science and technology communities get first dibs on the discussion a little disconcerting, while being perfectly predictable.

On the plus side, there are artists and others who have jumped in and started the discussion by turning themselves into cyborgs. This August 14, 2015 article (Body-hackers: the people who turn themselves into cyborgs) by Oliver Wainwright for the Guardian is very informative and not for the faint of heart.

For the curious, I’ve been covering these kinds of stories here since 2009. The category ‘human enhancement’ and the search term ‘machine/flesh’ should provide you with an assortment of stories on the topic.

A solar, self-charging supercapacitor for wearable technology

Ravinder Dahiya, Carlos García Núñez, and their colleagues at the University of Glasgow (Scotland) strike again (see my May 10, 2017 posting for their first ‘solar-powered graphene skin’ research announcement). Last time it was all about robots and prosthetics, this time they’ve focused on wearable technology according to a July 18, 2018 news item on phys.org,

A new form of solar-powered supercapacitor could help make future wearable technologies lighter and more energy-efficient, scientists say.

In a paper published in the journal Nano Energy, researchers from the University of Glasgow’s Bendable Electronics and Sensing Technologies (BEST) group describe how they have developed a promising new type of graphene supercapacitor, which could be used in the next generation of wearable health sensors.

A July 18, 2018 University of Glasgow press release, which originated the news item, explains further,

Currently, wearable systems generally rely on relatively heavy, inflexible batteries, which can be uncomfortable for long-term users. The BEST team, led by Professor Ravinder Dahiya, have built on their previous success in developing flexible sensors by developing a supercapacitor which could power health sensors capable of conforming to wearer’s bodies, offering more comfort and a more consistent contact with skin to better collect health data.

Their new supercapacitor uses layers of flexible, three-dimensional porous foam formed from graphene and silver to produce a device capable of storing and releasing around three times more power than any similar flexible supercapacitor. The team demonstrated the durability of the supercapacitor, showing that it provided power consistently across 25,000 charging and discharging cycles.

They have also found a way to charge the system by integrating it with flexible solar powered skin already developed by the BEST group, effectively creating an entirely self-charging system, as well as a pH sensor which uses wearer’s sweat to monitor their health.

Professor Dahiya said: “We’re very pleased by the progress this new form of solar-powered supercapacitor represents. A flexible, wearable health monitoring system which only requires exposure to sunlight to charge has a lot of obvious commercial appeal, but the underlying technology has a great deal of additional potential.

“This research could take the wearable systems for health monitoring to remote parts of the world where solar power is often the most reliable source of energy, and it could also increase the efficiency of hybrid electric vehicles. We’re already looking at further integrating the technology into flexible synthetic skin which we’re developing for use in advanced prosthetics.” [emphasis mine]

In addition to the team’s work on robots, prosthetics, and graphene ‘skin’ mentioned in the May 10, 2017 posting the team is working on a synthetic ‘brainy’ skin for which they have just received £1.5m funding from the Engineering and Physical Science Research Council (EPSRC).

Brainy skin

A July 3, 2018 University of Glasgow press release discusses the proposed work in more detail,

A robotic hand covered in ‘brainy skin’ that mimics the human sense of touch is being developed by scientists.

University of Glasgow’s Professor Ravinder Dahiya has plans to develop ultra-flexible, synthetic Brainy Skin that ‘thinks for itself’.

The super-flexible, hypersensitive skin may one day be used to make more responsive prosthetics for amputees, or to build robots with a sense of touch.

Brainy Skin reacts like human skin, which has its own neurons that respond immediately to touch rather than having to relay the whole message to the brain.

This electronic ‘thinking skin’ is made from silicon based printed neural transistors and graphene – an ultra-thin form of carbon that is only an atom thick, but stronger than steel.

The new version is more powerful, less cumbersome and would work better than earlier prototypes, also developed by Professor Dahiya and his Bendable Electronics and Sensing Technologies (BEST) team at the University’s School of Engineering.

His futuristic research, called neuPRINTSKIN (Neuromorphic Printed Tactile Skin), has just received another £1.5m funding from the Engineering and Physical Science Research Council (EPSRC).

Professor Dahiya said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors that carry signals from the skin to the brain.

“Inspired by real skin, this project will harness the technological advances in electronic engineering to mimic some features of human skin, such as softness, bendability and now, also sense of touch. This skin will not just mimic the morphology of the skin but also its functionality.

“Brainy Skin is critical for the autonomy of robots and for a safe human-robot interaction to meet emerging societal needs such as helping the elderly.”

Caption: Synthetic ‘Brainy Skin’ with sense of touch gets £1.5m funding. [Photo of Professor Ravinder Dahiya]

This latest advance means tactile data is gathered over large areas by the synthetic skin’s computing system rather than sent to the brain for interpretation.

With additional EPSRC funding, which extends Professor Dahiya’s fellowship by another three years, he plans to introduce tactile skin with neuron-like processing. This breakthrough in the tactile sensing research will lead to the first neuromorphic tactile skin, or ‘brainy skin.’

To achieve this, Professor Dahiya will add a new neural layer to the e-skin that he has already developed using printed silicon nanowires.

Professor Dahiya added: “By adding a neural layer underneath the current tactile skin, neuPRINTSKIN will add significant new perspective to the e-skin research, and trigger transformations in several areas such as robotics, prosthetics, artificial intelligence, wearable systems, next-generation computing, and flexible and printed electronics.”

The Engineering and Physical Sciences Research Council (EPSRC) is part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

EPSRC is the main funding body for engineering and physical sciences research in the UK. By investing in research and postgraduate training, the EPSRC is building the knowledge and skills base needed to address the scientific and technological challenges facing the nation.

Its portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research funded by EPSRC has impact across all sectors. It provides a platform for future UK prosperity by contributing to a healthy, connected, resilient, productive nation.

It’s fascinating to note how these pieces of research fit together for wearable technology and health monitoring and creating more responsive robot ‘skin’ and, possibly, prosthetic devices that would allow someone to feel again.

The latest research paper

Getting back to the solar-charging supercapacitors mentioned in the opening, here’s a link to and a citation for the team’s latest research paper,

Flexible self-charging supercapacitor based on graphene-Ag-3D graphene foam electrodes by Libu Manjakkal, Carlos García Núñez, Wenting Dang, and Ravinder Dahiya. Nano Energy Volume 51, September 2018, Pages 604-612 DOI: https://doi.org/10.1016/j.nanoen.2018.06.072

This paper is open access.

Prosthetic pain

“Feeling no pain” can be a euphemism for being drunk. However, there are some people for whom it’s not a euphemism; they literally feel no pain, for one reason or another. One such group is amputees, who feel nothing through their prosthetic limbs, and a researcher at Johns Hopkins University (Maryland, US) has found a way for them to feel pain again.

A June 20, 2018 news item on ScienceDaily provides an introduction to the research and to the reason for it,

Amputees often experience the sensation of a “phantom limb” — a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

A June 20, 2018 Johns Hopkins University news release (also on EurekAlert), which originated the news item, explores the research in more depth,

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.

“This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.
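The encoding idea behind such a "neuromorphic model" can be illustrated with a toy rate-coding sketch: light pressure drives a touch channel, while only high pressure recruits a pain ('nociceptor') channel. To be clear, this is a hedged illustration only; the function names, thresholds, gains, and rates below are invented for this sketch and are not the parameters of the published e-dermis model.

```python
# Toy rate-coding sketch of separate touch and pain receptor channels.
# All numbers are hypothetical, for illustration only.

def encode_receptor(pressure_kpa, threshold, gain, max_rate_hz):
    """Map a pressure reading to a firing rate (Hz), clipped at max_rate_hz."""
    if pressure_kpa <= threshold:
        return 0.0
    return min(gain * (pressure_kpa - threshold), max_rate_hz)

def e_dermis_encode(pressure_kpa):
    """Return (touch_rate, pain_rate) in Hz for one fingertip sensor.
    Touch receptors respond to light pressure; the pain channel only
    activates above a much higher threshold."""
    touch = encode_receptor(pressure_kpa, threshold=1.0, gain=8.0, max_rate_hz=120.0)
    pain = encode_receptor(pressure_kpa, threshold=40.0, gain=4.0, max_rate_hz=200.0)
    return touch, pain

# Light touch activates only the touch channel...
print(e_dermis_encode(10.0))   # (72.0, 0.0)
# ...while sharp, high-pressure contact also drives the pain channel.
print(e_dermis_encode(80.0))   # (120.0, 160.0)
```

The point of the sketch is the separation of channels: one material produces two distinct spike-rate signals, which is what lets downstream stimulation convey "light touch" and "painful" as different sensations.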

The researchers then connected the e-dermis output to the volunteer by using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction to both pain while touching a pointed object and non-pain when touching a round object.

The e-dermis is not sensitive to temperature; for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be extended to astronaut gloves and space suits, Osborn says.

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.

Here’s a video about this work,

Sarah Zhang’s June 20, 2018 article for The Atlantic reveals a few more details while covering some of the material in the news release,

Osborn and his team added one more feature to make the prosthetic hand, as he puts it, “more lifelike, more self-aware”: When it grasps something too sharp, it’ll open its fingers and immediately drop it—no human control necessary. The fingers react in just 100 milliseconds, the speed of a human reflex. Existing prosthetic hands have a similar degree of theoretically helpful autonomy: If an object starts slipping, the hand will grasp more tightly. Ideally, users would have a way to override a prosthesis’s reflex, like how you can hold your hand on a stove if you really, really want to. After all, the whole point of having a hand is being able to tell it what to do.
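The reflex Zhang describes, plus the override she argues users should have, can be sketched as a simple threshold rule evaluated every control cycle. The function name and the 100 Hz threshold here are hypothetical, chosen only to illustrate the control logic rather than to represent the team's actual implementation.

```python
# Minimal sketch of a pain-reflex controller with a user override.
# Threshold and naming are assumptions, for illustration only.

def reflex_step(pain_rate_hz, user_override=False, pain_threshold_hz=100.0):
    """Return the grip command for one ~100 ms control cycle.
    If the pain channel fires above threshold and the user has not
    overridden the reflex, the hand opens and drops the object."""
    if pain_rate_hz > pain_threshold_hz and not user_override:
        return "open"   # reflex: release the sharp object, no human input needed
    return "hold"       # normal grasp continues

print(reflex_step(160.0))                      # "open"
print(reflex_step(160.0, user_override=True))  # "hold" - user keeps gripping
print(reflex_step(20.0))                       # "hold" - no pain detected
```

The override branch captures Zhang's point: the reflex is autonomous by default, but the whole point of having a hand is being able to tell it what to do.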

Here’s a link to and a citation for the paper,

Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain by Luke E. Osborn, Andrei Dragomir, Joseph L. Betthauser, Christopher L. Hunt, Harrison H. Nguyen, Rahul R. Kaliki, and Nitish V. Thakor. Science Robotics 20 Jun 2018: Vol. 3, Issue 19, eaat3818 DOI: 10.1126/scirobotics.aat3818

This paper is behind a paywall.

Better motor control for prosthetic hands (the illusion of feeling) and a discussion of superprostheses and reality

I have two bits about prosthetics, one which focuses on how most of us think of them and another about science fiction fantasies.

Better motor control

This new technology comes via a collaboration between the University of Alberta, the University of New Brunswick (UNB) and Ohio’s Cleveland Clinic, as described in a March 18, 2018 article by Nicole Ireland for the Canadian Broadcasting Corporation’s (CBC) news online,

Rob Anderson was fighting wildfires in Alberta when the helicopter he was in crashed into the side of a mountain. He survived, but lost his left arm and left leg.

More than 10 years after that accident, Anderson, now 39, says prosthetic limb technology has come a long way, and he feels fortunate to be using “top of the line stuff” to help him function as normally as possible. In fact, he continues to work for the Alberta government’s wildfire fighting service.

His powered prosthetic hand can do basic functions like opening and closing, but he doesn’t feel connected to it — and has limited ability to perform more intricate movements with it, such as shaking hands or holding a glass.

Anderson, who lives in Grande Prairie, Alta., compares its function to “doing things with a long pair of pliers.”

“There’s a disconnect between what you’re physically touching and what your body is doing,” he told CBC News.

Anderson is one of four Canadian participants in a study that suggests there’s a way to change that. …

Six people, all of whom had arm amputations from below the elbow or higher, took part in the research. It found that strategically placed vibrating “robots” made them “feel” the movements of their prosthetic hands, allowing them to grasp and grip objects with much more control and accuracy.

All of the participants had previously undergone a specialized surgical procedure called “targeted re-innervation.” The nerves that had connected to their hands before they were amputated were rewired to link instead to muscles (including the biceps and triceps) in their remaining upper arms and in their chests.

For the study, researchers placed the robotic devices on the skin over those re-innervated muscles and vibrated them as the participants opened, closed, grasped or pinched with their prosthetic hands.

While the vibration was turned on, the participants “felt” their artificial hands moving and could adjust their grip based on the sensation. …
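The feedback loop described in the excerpt can be sketched as a mapping from prosthetic hand motion to a vibration command at the re-innervated muscle site. Everything here is an assumption for illustration: the names and numbers are invented, and while kinesthetic-illusion work typically vibrates muscle or tendon sites at roughly 70–115 Hz, the study's actual parameters are not given in the excerpt.

```python
# Hedged sketch: map prosthetic hand closing speed to a vibrator command
# at a re-innervated muscle site. Frequency and naming are illustrative.

def vibration_command(hand_closing_speed, base_hz=90.0):
    """Map grip-closing speed (0..1) to a vibration command at a fixed
    frequency; the vibration drives muscle afferents so the user 'feels'
    the phantom hand moving as the prosthesis moves."""
    amplitude = max(0.0, min(hand_closing_speed, 1.0))  # clamp to [0, 1]
    return {"freq_hz": base_hz, "amplitude": amplitude}

print(vibration_command(0.5))  # {'freq_hz': 90.0, 'amplitude': 0.5}
```

The key design point is that the stimulus tracks the hand's movement in real time, which is what lets participants close the loop and adjust their grip from the illusory sensation alone.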

I have an April 24, 2017 posting about a tetraplegic patient who had a number of electrodes implanted in his arms and hands linked to a brain-machine interface and which allowed him to move his hands and arms; the implants were later removed. It is a different problem with a correspondingly different technological solution but there does seem to be increased interest in implanting sensors and electrodes into the human body to increase mobility and/or sensation.

Anderson describes how it ‘feels’,

“It was kind of surreal,” Anderson said. “I could visually see the hand go out, I would touch something, I would squeeze it and my phantom hand felt like it was being closed and squeezing on something and it was sending the message back to my brain.

“It was a very strange sensation to actually be able to feel that feedback because I hadn’t in 10 years.”

The feeling of movement in the prosthetic hand is an illusion, the researchers say, since the vibration is actually happening to a muscle elsewhere in the body. But the sensation appeared to have a real effect on the participants.

“They were able to control their grasp function and how much they were opening the hand, to the same degree that someone with an intact hand would,” said study co-author Dr. Jacqueline Hebert, an associate professor in the Faculty of Rehabilitation Medicine at the University of Alberta.

Although the researchers are encouraged by the study findings, they acknowledge that there was a small number of participants, who all had access to the specialized re-innervation surgery to redirect the nerves from their amputated hands to other parts of their body.

The next step, they say, is to see if they can also simulate the feeling of movement in a broader range of people who have had other types of amputations, including legs, and have not had the re-innervation surgery.

Here’s a March 15, 2018 CBC New Brunswick radio interview about the work,

This is a bit longer than most of the embedded audio pieces that I have here but it’s worth it. Sadly, I can’t identify the interviewer who did a very good job with Jon Sensinger, associate director of UNB’s Institute of Biomedical Engineering. One more thing, I noticed that the interviewer made no mention of the University of Alberta in her introduction or in the subsequent interview. I gather regionalism reigns supreme everywhere in Canada. Or, maybe she and Sensinger just forgot. It happens when you’re excited. Also, there were US institutions in Ohio and Virginia that participated in this work.

Here’s a link to and a citation for the team’s paper,

Illusory movement perception improves motor control for prosthetic hands by Paul D. Marasco, Jacqueline S. Hebert, Jon W. Sensinger, Courtney E. Shell, Jonathon S. Schofield, Zachary C. Thumser, Raviraj Nataraj, Dylan T. Beckler, Michael R. Dawson, Dan H. Blustein, Satinder Gill, Brett D. Mensh, Rafael Granja-Vazquez, Madeline D. Newcomb, Jason P. Carey, and Beth M. Orzell. Science Translational Medicine 14 Mar 2018: Vol. 10, Issue 432, eaao6990 DOI: 10.1126/scitranslmed.aao6990

This paper is open access.

Superprostheses and our science fiction future

A March 20, 2018 news item on phys.org features an essay about superprostheses and/or assistive devices,

Assistive devices may soon allow people to perform virtually superhuman feats. According to Robert Riener, however, there are more pressing goals than developing superhumans.

What had until recently been described as a futuristic vision has become a reality: the first self-declared “cyborgs” have had chips implanted in their bodies so that they can open doors and make cashless payments. The latest robotic hand prostheses succeed in performing all kinds of grips and tasks requiring dexterity. Parathletes fitted with running and spring prostheses compete – and win – against the best, non-impaired athletes. Then there are robotic pets and talking humanoid robots adding a bit of excitement to nursing homes.

Some media are even predicting that these high-tech creations will bring about forms of physiological augmentation overshadowing humans’ physical capabilities in ways never seen before. For instance, hearing aids are eventually expected to offer the ultimate in hearing; retinal implants will enable vision with a sharpness rivalling that of any eagle; motorised exoskeletons will transform soldiers into tireless fighting machines.

Visions of the future: the video game Deus Ex: Human Revolution highlights the emergence of physiological augmentation. (Visualisations: Square Enix) Courtesy: ETH Zurich

Professor Robert Riener uses the image above to illustrate the notion of superprostheses in his March 20, 2018 essay on the ETH Zurich website,

All of these prophecies notwithstanding, our robotic transformation into superheroes will not be happening in the immediate future and can still be filed under Hollywood hero myths. Compared to the technology available today, our bodies are a true marvel whose complexity and performance allows us to perform an extremely wide spectrum of tasks. Hundreds of efficient muscles, thousands of independently operating motor units along with millions of sensory receptors and billions of nerve cells allow us to perform delicate and detailed tasks with tweezers or lift heavy loads. Added to this, our musculoskeletal system is highly adaptable, can partly repair itself and requires only minimal amounts of energy in the form of relatively small amounts of food consumed.

Machines will not be able to match this any time soon. Today’s assistive devices are still laboratory experiments or niche products designed for very specific tasks. Markus Rehm, an athlete with a disability, does not use his innovative spring prosthesis to go for walks or drive a car. Nor can today’s conventional arm prostheses help a person tie their shoes or button up their shirt. Lifting devices used for nursing care are not suitable for helping with personal hygiene tasks or in psychotherapy. And robotic pets quickly lose their charm the moment their batteries die.

Solving real problems

There is no denying that advances continue to be made. Since the scientific and industrial revolutions, we have become dependent on relentless progress and growth, and we can no longer separate today’s world from this development. There are, however, more pressing issues to be solved than creating superhumans.

On the one hand, engineers need to dedicate their efforts to solving the real problems of patients, the elderly and people with disabilities. Better technical solutions are needed to help them lead normal lives and assist them in their work. We need motorised prostheses that also work in the rain and wheelchairs that can manoeuvre even with snow on the ground. Talking robotic nurses also need to be understood by hard-of-hearing pensioners as well as offer simple and dependable interactivity. Their batteries need to last at least one full day to be recharged overnight.

In addition, financial resources need to be available so that all people have access to the latest technologies, such as a high-quality household prosthesis for the family man, an extra prosthesis for the avid athlete or a prosthesis for the pensioner. [emphasis mine]

Breaking down barriers

What is just as important as the ongoing development of prostheses and assistive devices is the ability to minimise or eliminate physical barriers. Where there are no stairs, there is no need for elaborate special solutions like stair lifts or stairclimbing wheelchairs – or, presumably, fully motorised exoskeletons.

Efforts also need to be made to transform the way society thinks about people with disabilities. More acknowledgement of the day-to-day challenges facing patients with disabilities is needed, which requires that people be confronted with the topic of disability when they are still children. Such projects must be promoted at home and in schools so that living with impairments can also attain a state of normality and all people can partake in society. It is therefore also necessary to break down mental barriers.

The road to a virtually superhuman existence is still long. Anyone reading this text will not live to see it. In the meantime, the task at hand is to tackle the mundane challenges in order to simplify people’s daily lives in ways that do not require technology, that allow people to be active participants and improve their quality of life – instead of wasting our time getting caught up in cyborg euphoria and digital mania.

I’m struck by Riener’s reference to financial resources and access. Sensinger mentions financial resources in his CBC radio interview although his concern is with convincing funders that prostheses that mimic ‘feeling’ are needed.

I’m also struck by Riener’s discussion about nontechnological solutions for including people with all kinds of abilities and disabilities.

There was no grand plan for combining these two news bits; I just thought they were interesting together.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises concerns data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here), Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Solar-powered graphene skin for more feeling in your prosthetics

A March 23, 2017 news item on Nanowerk highlights research that could put feeling into a prosthetic limb,

A new way of harnessing the sun’s rays to power ‘synthetic skin’ could help to create advanced prosthetic limbs capable of returning the sense of touch to amputees.

Engineers from the University of Glasgow, who have previously developed an ‘electronic skin’ covering for prosthetic hands made from graphene, have found a way to use some of graphene’s remarkable physical properties to use energy from the sun to power the skin.

Graphene is a highly flexible form of graphite which, despite being just a single atom thick, is stronger than steel, electrically conductive, and transparent. It is graphene’s optical transparency, which allows around 98% of the light which strikes its surface to pass directly through it, which makes it ideal for gathering energy from the sun to generate power.

A March 23, 2017 University of Glasgow press release, which originated the news item, details more about the research,

Dr Ravinder Dahiya

A new research paper, published today in the journal Advanced Functional Materials, describes how Dr Dahiya and colleagues from his Bendable Electronics and Sensing Technologies (BEST) group have integrated power-generating photovoltaic cells into their electronic skin for the first time.

Dr Dahiya, from the University of Glasgow’s School of Engineering, said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors which carry signals from the skin to the brain.

“My colleagues and I have already made significant steps in creating prosthetic prototypes which integrate synthetic skin and are capable of making very sensitive pressure measurements. Those measurements mean the prosthetic hand is capable of performing challenging tasks like properly gripping soft materials, which other prosthetics can struggle with. We are also using innovative 3D printing strategies to build more affordable sensitive prosthetic limbs, including the formation of a very active student club called ‘Helping Hands’.

“Skin capable of touch sensitivity also opens the possibility of creating robots capable of making better decisions about human safety. A robot working on a construction line, for example, is much less likely to accidentally injure a human if it can feel that a person has unexpectedly entered their area of movement and stop before an injury can occur.”

The new skin requires just 20 nanowatts of power per square centimetre, which is easily met even by the poorest-quality photovoltaic cells currently available on the market. And although currently energy generated by the skin’s photovoltaic cells cannot be stored, the team are already looking into ways to divert unused energy into batteries, allowing the energy to be used as and when it is required.
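To put that 20 nanowatts per square centimetre in perspective, here is a back-of-envelope power budget. Only the 20 nW/cm² figure comes from the press release; the solar irradiance and cell-efficiency numbers below are illustrative assumptions, chosen deliberately pessimistic to show why even a poor photovoltaic cell comfortably covers the skin’s needs.

```python
# Back-of-envelope power budget for the photovoltaic-powered skin.
# Only SKIN_DRAW_W_PER_CM2 comes from the Glasgow press release;
# the irradiance and efficiency values are illustrative assumptions.

SOLAR_IRRADIANCE_W_PER_CM2 = 0.1   # ~100 mW/cm^2, typical full sunlight (assumed)
CELL_EFFICIENCY = 0.02             # 2%, a deliberately poor cell (assumed)
SKIN_DRAW_W_PER_CM2 = 20e-9        # 20 nW/cm^2, from the press release

# Power harvested per square centimetre and the resulting surplus factor
generated = SOLAR_IRRADIANCE_W_PER_CM2 * CELL_EFFICIENCY
margin = generated / SKIN_DRAW_W_PER_CM2

print(f"Harvested: {generated * 1e3:.1f} mW/cm^2")
print(f"Surplus factor: {margin:,.0f}x the skin's requirement")
```

Under these assumptions the cell harvests roughly 100,000 times more power than the skin consumes, which is why the team is looking at diverting the unused energy into batteries.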

Dr Dahiya added: “The other next step for us is to further develop the power-generation technology which underpins this research and use it to power the motors which drive the prosthetic hand itself. This could allow the creation of an entirely energy-autonomous prosthetic limb.

“We’ve already made some encouraging progress in this direction and we’re looking forward to presenting those results soon. We are also exploring the possibility of building on these exciting results to develop wearable systems for affordable healthcare. In this direction, recently we also got small funds from Scottish Funding Council.”

For more information about this advance and others in the field of prosthetics you may want to check out Megan Scudellari’s March 30, 2017 article for the IEEE’s (Institute of Electrical and Electronics Engineers) Spectrum (Note: Links have been removed),

Cochlear implants can restore hearing to individuals with some types of hearing loss. Retinal implants are now on the market to restore sight to the blind. But there are no commercially available prosthetics that restore a sense of touch to those who have lost a limb.

Several products are in development, including this haptic system at Case Western Reserve University, which would enable upper-limb prosthetic users to, say, pluck a grape off a stem or pull a potato chip out of a bag. It sounds simple, but such tasks are virtually impossible without a sense of touch and pressure.

Now, a team at the University of Glasgow that previously developed a flexible ‘electronic skin’ capable of making sensitive pressure measurements, has figured out how to power their skin with sunlight. …

Here’s a link to and a citation for the paper,

Energy-Autonomous, Flexible, and Transparent Tactile Skin by Carlos García Núñez, William Taube Navaraj, Emre O. Polat and Ravinder Dahiya. Advanced Functional Materials, DOI: 10.1002/adfm.201606287. Version of Record online: 22 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

A new class of artificial retina

If I read the news release rightly (keep scrolling), this particular artificial retina does not require a device outside the body (e.g. specially developed eyeglasses) to capture an image to be transmitted to the implant. This new artificial retina captures the image directly.

The announcement of a new artificial retina is made in a March 13, 2017 news item on Nanowerk (Note: A link has been removed),

A team of engineers at the University of California San Diego and La Jolla-based startup Nanovision Biosciences Inc. have developed the nanotechnology and wireless electronics for a new type of retinal prosthesis that brings research a step closer to restoring the ability of neurons in the retina to respond to light. The researchers demonstrated this response to light in a rat retina interfacing with a prototype of the device in vitro.

They detail their work in a recent issue of the Journal of Neural Engineering (“Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry”). The technology could help tens of millions of people worldwide suffering from neurodegenerative diseases that affect eyesight, including macular degeneration, retinitis pigmentosa, and loss of vision due to diabetes.

Caption: These are primary cortical neurons cultured on the surface of an array of optoelectronic nanowires. Here a neuron is pulling the nanowires, indicating that the cell is doing well on this material. Credit: UC San Diego

A March 13, 2017 University of California at San Diego (UCSD) news release (also on EurekAlert) by Ioana Patringenaru, which originated the news item, details the new approach,

Despite tremendous advances in the development of retinal prostheses over the past two decades, the performance of devices currently on the market to help the blind regain functional vision is still severely limited–well under the acuity threshold of 20/200 that defines legal blindness.

“We want to create a new class of devices with drastically improved capabilities to help people with impaired vision,” said Gabriel A. Silva, one of the senior authors of the work and professor in bioengineering and ophthalmology at UC San Diego. Silva also is one of the original founders of Nanovision.

The new prosthesis relies on two groundbreaking technologies. One consists of arrays of silicon nanowires that simultaneously sense light and electrically stimulate the retina accordingly. The nanowires give the prosthesis higher resolution than anything achieved by other devices–closer to the dense spacing of photoreceptors in the human retina. The other breakthrough is a wireless device that can transmit power and data to the nanowires over the same wireless link at record speed and energy efficiency.

One of the main differences between the researchers’ prototype and existing retinal prostheses is that the new system does not require a vision sensor outside of the eye [emphasis mine] to capture a visual scene and then transform it into alternating signals to sequentially stimulate retinal neurons. Instead, the silicon nanowires mimic the retina’s light-sensing cones and rods to directly stimulate retinal cells. Nanowires are bundled into a grid of electrodes, directly activated by light and powered by a single wireless electrical signal. This direct and local translation of incident light into electrical stimulation makes for a much simpler–and scalable–architecture for the prosthesis.

The power provided to the nanowires from the single wireless electrical signal gives the light-activated electrodes their high sensitivity while also controlling the timing of stimulation.

“To restore functional vision, it is critical that the neural interface matches the resolution and sensitivity of the human retina,” said Gert Cauwenberghs, a professor of bioengineering at the Jacobs School of Engineering at UC San Diego and the paper’s senior author.

Wireless telemetry system

Power is delivered wirelessly, from outside the body to the implant, through an inductive powering telemetry system developed by a team led by Cauwenberghs.

The device is highly energy efficient because it minimizes energy losses in wireless power and data transmission and in the stimulation process, recycling electrostatic energy circulating within the inductive resonant tank, and between capacitance on the electrodes and the resonant tank. Up to 90 percent of the energy transmitted is actually delivered and used for stimulation, which means less RF wireless power emitting radiation in the transmission, and less heating of the surrounding tissue from dissipated power.

The telemetry system is capable of transmitting both power and data over a single pair of inductive coils, one emitting from outside the body, and another on the receiving side in the eye. The link can send and receive one bit of data for every two cycles of the 13.56 megahertz RF signal; other two-coil systems need at least 5 cycles for every bit transmitted.
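The data-rate advantage claimed in the press release follows directly from those cycles-per-bit figures. A quick sketch of the arithmetic (the 13.56 MHz carrier frequency and the two cycle counts are from the release; the rest is simple division):

```python
# Data rates implied by the cycles-per-bit figures for the 13.56 MHz
# inductive telemetry link, per the UC San Diego news release.

CARRIER_HZ = 13.56e6  # ISM-band carrier frequency of the inductive link

def bit_rate(cycles_per_bit: float) -> float:
    """Bits per second when each bit occupies a fixed number of carrier cycles."""
    return CARRIER_HZ / cycles_per_bit

ucsd_link = bit_rate(2)     # this prosthesis: 1 bit per 2 carrier cycles
typical_link = bit_rate(5)  # other two-coil systems: at least 5 cycles per bit

print(f"UCSD link:    {ucsd_link / 1e6:.2f} Mbit/s")
print(f"Typical link: {typical_link / 1e6:.3f} Mbit/s")
```

That works out to about 6.78 Mbit/s for the new link versus at most about 2.71 Mbit/s for a conventional two-coil system, a 2.5x improvement over the same pair of coils.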

Proof-of-concept test

For proof-of-concept, the researchers inserted the wirelessly powered nanowire array beneath a transgenic rat retina with rhodopsin P23H knock-in retinal degeneration. The degenerated retina interfaced in vitro with a microelectrode array for recording extracellular neural action potentials (electrical “spikes” from neural activity).

The horizontal and bipolar neurons fired action potentials preferentially when the prosthesis was exposed to a combination of light and electrical potential–and were silent when either light or electrical bias was absent, confirming the light-activated and voltage-controlled responsivity of the nanowire array.

The wireless nanowire array device is the result of a collaboration between a multidisciplinary team led by Cauwenberghs, Silva and William R. Freeman, director of the Jacobs Retina Center at UC San Diego, UC San Diego electrical engineering professor Yu-Hwa Lo and Nanovision Biosciences.

A path to clinical translation

Freeman, Silva and Scott Thorogood have co-founded La Jolla-based Nanovision Biosciences, a partner in this study, to further develop and translate the technology into clinical use, with the goal of restoring functional vision in patients with severe retinal degeneration. Animal tests with the device are in progress, with clinical trials to follow.

“We have made rapid progress with the development of the world’s first nanoengineered retinal prosthesis as a result of the unique partnership we have developed with the team at UC San Diego,” said Thorogood, who is the CEO of Nanovision Biosciences.

Here’s a link to and a citation for the paper,

Towards high-resolution retinal prostheses with direct optical addressing and inductive telemetry by Sohmyung Ha, Massoud L Khraiche, Abraham Akinin, Yi Jing, Samir Damle, Yanjin Kuang, Sue Bauchner, Yu-Hwa Lo, William R Freeman, Gabriel A Silva. Journal of Neural Engineering, Volume 13, Number 5. DOI: https://doi.org/10.1088/1741-2560/13/5/056008

Published 16 August 2016 • © 2016 IOP Publishing Ltd

I’m not sure why they waited so long to make the announcement but, in any event, this paper is behind a paywall.