Tag Archives: touch

Touchy robots and prosthetics

I have briefly speculated about the importance of touch elsewhere (see my July 19, 2019 posting regarding BlocKit and blockchain; scroll down about 50% of the way), but this upcoming news bit and the one following it put a different spin on it.

Exceptional sense of touch

Robots need a sense of touch to perform their tasks and a July 18, 2019 National University of Singapore press release (also on EurekAlert) announces work on an improved sense of touch,

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS).

The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from the Department of Materials Science and Engineering at the NUS Faculty of Engineering, was first reported in prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hope of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Department of Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology (iHealthTech), N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems (HiFES) programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
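For readers who like to see ideas in code, here’s a rough sketch (in Python, with names, signatures and timings I’ve invented) of the kind of asynchronous, event-driven scheme the press release describes: every sensor pushes its own signature pulse onto one shared conductor the moment it is touched, a decoder works out which sensor fired from the signature alone, and losing one sensor doesn’t take down the rest,

```python
# A toy model of an asynchronous, event-driven e-skin in the spirit of ACES.
# All names, signatures and timings here are invented for illustration; the
# real system encodes pulse signatures in hardware on a single conductor.

class TactileSensor:
    """One sensing element that pushes its own signature onto a shared line."""
    def __init__(self, sensor_id, signature):
        self.sensor_id = sensor_id
        self.signature = signature  # unique code standing in for the pulse shape

    def touch(self, pressure, time_ns, shared_line):
        # Each sensor transmits independently and immediately (asynchronously);
        # no polling loop visits the sensors one by one.
        shared_line.append({"t_ns": time_ns, "sig": self.signature, "p": pressure})

def decode(shared_line, signature_map):
    """Recover which sensor fired, and when, from events on the one conductor."""
    events = sorted(shared_line, key=lambda e: e["t_ns"])
    return [(signature_map[e["sig"]], e["t_ns"], e["p"]) for e in events]

# Build a small patch of skin: every sensor hangs off the same shared line.
signatures = {i: f"sig-{i:03d}" for i in range(16)}
sensors = {i: TactileSensor(i, sig) for i, sig in signatures.items()}
signature_map = {sig: i for i, sig in signatures.items()}
shared_line = []

# Two touches a few tens of nanoseconds apart still come out as distinct events.
sensors[3].touch(pressure=0.8, time_ns=0, shared_line=shared_line)
sensors[9].touch(pressure=0.2, time_ns=55, shared_line=shared_line)

# "Damage" one sensor by removing it: the rest keep working because nothing
# is daisy-chained through it.
del sensors[7]
sensors[0].touch(pressure=0.5, time_ns=120, shared_line=shared_line)

for sensor_id, t_ns, p in decode(shared_line, signature_map):
    print(f"sensor {sensor_id} fired at {t_ns} ns with pressure {p}")
```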

Smart electronic skins for robots and prosthetics

ACES’ simple wiring system and remarkable responsiveness even with increasing numbers of sensors are key characteristics that will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

For those who like videos, the researchers have prepared this,

Here’s a link to and a citation for the paper,

A neuro-inspired artificial peripheral nervous system for scalable electronic skins by Wang Wei Lee, Yu Jun Tan, Haicheng Yao, Si Li, Hian Hian See, Matthew Hon, Kian Ann Ng, Betty Xiong, John S. Ho and Benjamin C. K. Tee. Science Robotics Vol. 4, Issue 32, eaax2198, 31 July 2019. DOI: 10.1126/scirobotics.aax2198. Published online first: 17 Jul 2019

This paper is behind a paywall.

Picking up a grape and holding his wife’s hand

This story comes from Canadian Broadcasting Corporation (CBC) Radio, with a six-minute audio segment embedded in the text, from a July 25, 2019 CBC Radio ‘As It Happens’ article by Sheena Goodyear,

The West Valley City, Utah, real estate agent [Keven Walgamott] lost his left hand in an electrical accident 17 years ago. Since then, he’s tried out a few different prosthetic limbs, but always found them too clunky and uncomfortable.

Then he decided to work with the University of Utah in 2016 to test out new prosthetic technology that mimics the sensation of human touch, allowing Walgamott to perform delicate tasks with precision — including shaking his wife’s hand. 

“I extended my left hand, she came and extended hers, and we were able to feel each other with the left hand for the first time in 13 years, and it was just a marvellous and wonderful experience,” Walgamott told As It Happens guest host Megan Williams. 

Walgamott, one of seven participants in the University of Utah study, was able to use an advanced prosthetic hand called the LUKE Arm to pick up an egg without cracking it, pluck a single grape from a bunch, hammer a nail, take a ring on and off his finger, fit a pillowcase over a pillow and more. 

While performing the tasks, Walgamott was able to actually feel the items he was holding and correctly gauge the amount of pressure he needed to exert — mimicking a process the human brain does automatically.

“I was able to feel something in each of my fingers,” he said. “What I feel, I guess the easiest way to explain it, is little electrical shocks.”

Those shocks — which he describes as a kind of a tingling sensation — intensify as he tightens his grip.

“Different variations of the intensity of the electricity as I move my fingers around and as I touch things,” he said. 

To make that [sense of touch] happen, the researchers implanted electrodes into the nerves on Walgamott’s forearm, allowing his brain to communicate with his prosthetic through a computer outside his body. That means he can move the hand just by thinking about it.

But those signals also work in reverse.

The team attached sensors to the hand of a LUKE Arm. Those sensors detect touch and positioning, and send that information to the electrodes so it can be interpreted by the brain.

For Walgamott, performing a series of menial tasks as a team of scientists recorded his progress was “fun to do.”

“I’d forgotten how well two hands work,” he said. “That was pretty cool.”

But it was also a huge relief from the phantom limb pain he has experienced since the accident, which he describes as a “burning sensation” in the place where his hand used to be.

A July 24, 2019 University of Utah news release (also on EurekAlert) provides more detail about the research,

Keven Walgamott had a good “feeling” about picking up the egg without crushing it.

What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by U biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (so named after the robotic hand that Luke Skywalker got in “The Empire Strikes Back”) to mimic the way a human hand feels objects by sending the appropriate signals to the brain. Their findings were published in a new paper co-authored by U biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark and other colleagues in the latest edition of the journal Science Robotics. A copy of the paper may be obtained by emailing robopak@aaas.org.

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the U, was able to pluck grapes without crushing them, pick up an egg without cracking it and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicone “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the U’s team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by U biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array. The array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.
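The news release doesn’t say how the computer actually translates nerve signals into arm commands, but a common textbook approach is a trained linear decoder that maps per-electrode firing rates to intended movements. Here’s a deliberately simplified, hypothetical sketch (the weights, channel count and calibration are all invented),

```python
# A deliberately simplified, hypothetical motor-decoding sketch: per-channel
# firing rates from an electrode array are mapped to finger commands with a
# linear decoder. The release doesn't describe the team's algorithm; the
# weights, channel count and calibration here are invented.

import random

NUM_CHANNELS = 100   # the Utah Slanted Electrode Array bundles about 100 microelectrodes
NUM_FINGERS = 5

random.seed(0)
# Pretend these weights were fitted during a calibration session with the user.
weights = [[random.uniform(-0.05, 0.05) for _ in range(NUM_CHANNELS)]
           for _ in range(NUM_FINGERS)]

def decode_movement(firing_rates_hz):
    """Turn one window of per-channel firing rates into per-finger commands (-1..1)."""
    commands = []
    for finger_weights in weights:
        drive = sum(w * r for w, r in zip(finger_weights, firing_rates_hz))
        commands.append(max(-1.0, min(1.0, drive)))
    return commands

# One made-up window of nerve activity -> five finger commands for the arm.
sample_rates = [random.uniform(0.0, 40.0) for _ in range(NUM_CHANNELS)]
print([round(c, 2) for c in decode_movement(sample_rates)])
```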

But it also works the other way. Performing tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert, because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.
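The press release doesn’t publish the model itself, but the behaviour it describes, a burst of impulses at first contact that tapers off toward a steadier rate, can be sketched with a simple rate equation. This is my own illustrative stand-in (made-up gains and time constant), not the team’s primate-derived model,

```python
# Toy "biomimetic" stimulation pattern: a strong burst of pulses when contact
# begins, decaying toward a lower sustained rate while the grip is held.
# The gains and time constant below are made up for illustration.

import math

def stimulation_rate(pressure, time_since_contact_s,
                     burst_gain=300.0, sustained_gain=60.0, tau_s=0.15):
    """Pulses per second to deliver to the nerve for a given grip pressure (0..1)."""
    if pressure <= 0.0:
        return 0.0
    onset_burst = burst_gain * pressure * math.exp(-time_since_contact_s / tau_s)
    sustained = sustained_gain * pressure
    return onset_burst + sustained

# Rate right at contact versus half a second into a steady grip.
for t in (0.0, 0.05, 0.2, 0.5):
    print(f"t = {t:4.2f} s  rate = {stimulation_rate(0.7, t):6.1f} pulses/s")
```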

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the U’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Here’s a link to and a citation for the paper,

Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand by J. A. George, D. T. Kluger, T. S. Davis, S. M. Wendelken, E. V. Okorokova, Q. He, C. C. Duncan, D. T. Hutchinson, Z. C. Thumser, D. T. Beckler, P. D. Marasco, S. J. Bensmaia and G. A. Clark. Science Robotics Vol. 4, Issue 32, eaax2352, 31 July 2019. DOI: 10.1126/scirobotics.aax2352. Published online first: 24 Jul 2019

This paper is definitely behind a paywall.

The University of Utah researchers have produced a video highlighting their work,

Smartphone as augmented reality system with software from Brown University

You need to see this,

Amazing, eh? The researchers are scheduled to present this work sometime this week at the ACM Symposium on User Interface Software and Technology (UIST) being held in New Orleans, US, from October 20-23, 2019.

Here’s more about ‘Portal-ble’ in an October 16, 2019 news item on ScienceDaily,

A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.

The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.

“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”

An October 16, 2019 Brown University news release (also on EurekAlert), which originated the news item, provides more detail,

Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.

“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”

The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.

Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.

“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”

To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.

“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”
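Neither the news release nor the paper excerpted here spells out the accommodation rules, but Huang’s smoothing analogy suggests something like snapping an imprecise grab to the nearest virtual object within a small tolerance. A hypothetical sketch (object names, coordinates and the tolerance are mine),

```python
# Hypothetical "grab accommodation" in the spirit of Portal-ble: if the pinch
# point is close enough to a virtual object, snap the grab to that object even
# when the fingers overshoot or stop just short. Object names, coordinates
# (metres) and the tolerance are invented for illustration.

from dataclasses import dataclass
import math

@dataclass
class VirtualObject:
    name: str
    x: float
    y: float
    z: float

def distance(point, obj):
    return math.sqrt((point[0] - obj.x) ** 2 +
                     (point[1] - obj.y) ** 2 +
                     (point[2] - obj.z) ** 2)

def resolve_grab(pinch_point, objects, tolerance=0.06):
    """Return the object the user most plausibly meant to grab, or None."""
    dist, nearest = min(((distance(pinch_point, o), o) for o in objects),
                        key=lambda pair: pair[0])
    return nearest if dist <= tolerance else None

scene = [VirtualObject("block", 0.10, 0.02, 0.30),
         VirtualObject("vase", 0.40, 0.00, 0.25)]
pinch = (0.08, 0.03, 0.33)   # fingers stopped a few centimetres off the block
grabbed = resolve_grab(pinch, scene)
print("grabbed:", grabbed.name if grabbed else "nothing")
```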

The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that the phone vibrations helped, since users feel them in the hand holding the phone, not in the hand that’s actually grabbing for the virtual object. Even so, the vibration feedback helped users interact with objects more successfully.

In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.

Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires an infrared sensor and an external compute stick for extra processing power.

Huang hopes people will download the freely available source code and try it for themselves. 
“We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”

Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.

You can find the conference paper here on jeffhuang.com,

Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality by Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin, John F. Hughes, Jeff Huang. Brown University, Providence, RI, USA; Southeast University, Nanjing, China. Presented at the ACM Symposium on User Interface Software and Technology (UIST 2019), New Orleans, US

This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.

Emotional robots

This is some very intriguing work,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

A July 16, 2018 Cornell University news release on EurekAlert offers more insight into the work,

Cornell University researchers have developed a prototype of a robot that can express “emotions” through changes in its outer surface. The robot’s skin covers a grid of texture units whose shapes change based on the robot’s feelings.

Assistant professor of mechanical and aerospace engineering Guy Hoffman, who has given a TEDx talk on “Robots with ‘soul'” said the inspiration for designing a robot that gives off nonverbal cues through its outer skin comes from the animal world, based on the idea that robots shouldn’t be thought of in human terms.

“I’ve always felt that robots shouldn’t just be modeled after humans or be copies of humans,” he said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

Their work is detailed in a paper, “Soft Skin Texture Modulation for Social Robots,” presented at the International Conference on Soft Robotics in Livorno, Italy. Doctoral student Yuhan Hu was lead author; the paper was featured in IEEE Spectrum, a publication of the Institute of Electrical and Electronics Engineers.

Hoffman and Hu’s design features an array of two shapes, goosebumps and spikes, which map to different emotional states. The actuation units for both shapes are integrated into texture modules, with fluidic chambers connecting bumps of the same kind.
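In code, the mapping Hoffman and Hu describe might look something like the toy table below; the emotional states, inflation levels and state-to-shape assignments are placeholders of my own, not values from the paper,

```python
# Toy mapping from a robot's "emotional state" to the two texture shapes in the
# paper (goosebumps and spikes). The states, inflation levels and the
# state-to-shape table are invented placeholders, not the authors' values.

TEXTURE_MAP = {
    "calm":     {"shape": "goosebumps", "inflation": 0.2},
    "happy":    {"shape": "goosebumps", "inflation": 0.7},
    "alert":    {"shape": "spikes",     "inflation": 0.5},
    "agitated": {"shape": "spikes",     "inflation": 0.9},
}

def actuate_skin(state):
    """Return the fluidic-chamber command for a given emotional state."""
    # Unknown states relax the skin rather than raising an error.
    return TEXTURE_MAP.get(state, {"shape": "flat", "inflation": 0.0})

for state in ("calm", "agitated", "sleepy"):
    print(state, "->", actuate_skin(state))
```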

The team tried two different actuation control systems, with minimizing size and noise level a driving factor in both designs. “One of the challenges,” Hoffman said, “is that a lot of shape-changing technologies are quite loud, due to the pumps involved, and these make them also quite bulky.”

Hoffman does not have a specific application for his robot with texture-changing skin mapped to its emotional state. At this point, just proving that this can be done is a sizable first step. “It’s really just giving us another way to think about how robots could be designed,” he said.

Future challenges include scaling the technology to fit into a self-contained robot – whatever shape that robot takes – and making the technology more responsive to the robot’s immediate emotional changes.

“At the moment, most social robots express [their] internal state only by using facial expressions and gestures,” the paper concludes. “We believe that the integration of a texture-changing skin, combining both haptic [feel] and visual modalities, can thus significantly enhance the expressive spectrum of robots for social interaction.”

A video helps to explain the work,

I don’t consider ‘sleepy’ to be an emotional state but, as noted earlier, this is intriguing. You can find out more in a July 9, 2018 article by Tom Fleischman for the Cornell Chronicle (Note: the news release was fashioned from this article, so you will find some redundancy should you read it in its entirety),

In 1872, Charles Darwin published his third major work on evolutionary theory, “The Expression of the Emotions in Man and Animals,” which explores the biological aspects of emotional life.

In it, Darwin writes: “Hardly any expressive movement is so general as the involuntary erection of the hairs, feathers and other dermal appendages … it is common throughout three of the great vertebrate classes.” Nearly 150 years later, the field of robotics is starting to draw inspiration from those words.

“The aspect of touch has not been explored much in human-robot interaction, but I often thought that people and animals do have this change in their skin that expresses their internal state,” said Guy Hoffman, assistant professor and Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering (MAE).

Inspired by this idea, Hoffman and students in his Human-Robot Collaboration and Companionship Lab have developed a prototype of a robot that can express “emotions” through changes in its outer surface. …

Part of our relationship with other species is our understanding of the nonverbal cues animals give off – like the raising of fur on a dog’s back or a cat’s neck, or the ruffling of a bird’s feathers. Those are unmistakable signals that the animal is somehow aroused or angered; the fact that they can be both seen and felt strengthens the message.

“Yuhan put it very nicely: She said that humans are part of the family of species, they are not disconnected,” Hoffman said. “Animals communicate this way, and we do have a sensitivity to this kind of behavior.”

You can find the paper presented at the International Conference on Soft Robotics in Livorno, Italy, ‘Soft Skin Texture Modulation for Social Robotics’ by Yuhan Hu, Zhengnan Zhao, Abheek Vimal, and Guy Hoffman, here.

A solar, self-charging supercapacitor for wearable technology

Ravinder Dahiya, Carlos García Núñez, and their colleagues at the University of Glasgow (Scotland) strike again (see my May 10, 2017 posting for their first ‘solar-powered graphene skin’ research announcement). Last time it was all about robots and prosthetics, this time they’ve focused on wearable technology according to a July 18, 2018 news item on phys.org,

A new form of solar-powered supercapacitor could help make future wearable technologies lighter and more energy-efficient, scientists say.

In a paper published in the journal Nano Energy, researchers from the University of Glasgow’s Bendable Electronics and Sensing Technologies (BEST) group describe how they have developed a promising new type of graphene supercapacitor, which could be used in the next generation of wearable health sensors.

A July 18, 2018 University of Glasgow press release, which originated the news item, explains further,

Currently, wearable systems generally rely on relatively heavy, inflexible batteries, which can be uncomfortable for long-term users. The BEST team, led by Professor Ravinder Dahiya, have built on their previous success in developing flexible sensors by developing a supercapacitor which could power health sensors capable of conforming to wearer’s bodies, offering more comfort and a more consistent contact with skin to better collect health data.

Their new supercapacitor uses layers of flexible, three-dimensional porous foam formed from graphene and silver to produce a device capable of storing and releasing around three times more power than any similar flexible supercapacitor. The team demonstrated the durability of the supercapacitor, showing that it provided power consistently across 25,000 charging and discharging cycles.

They have also found a way to charge the system by integrating it with flexible solar powered skin already developed by the BEST group, effectively creating an entirely self-charging system, as well as a pH sensor which uses wearer’s sweat to monitor their health.

Professor Dahiya said: “We’re very pleased by the progress this new form of solar-powered supercapacitor represents. A flexible, wearable health monitoring system which only requires exposure to sunlight to charge has a lot of obvious commercial appeal, but the underlying technology has a great deal of additional potential.

“This research could take the wearable systems for health monitoring to remote parts of the world where solar power is often the most reliable source of energy, and it could also increase the efficiency of hybrid electric vehicles. We’re already looking at further integrating the technology into flexible synthetic skin which we’re developing for use in advanced prosthetics.” [emphasis mine]

In addition to the work on robots, prosthetics, and graphene ‘skin’ mentioned in the May 10, 2017 posting, the team is working on a synthetic ‘brainy’ skin for which they have just received £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Brainy skin

A July 3, 2018 University of Glasgow press release discusses the proposed work in more detail,

A robotic hand covered in ‘brainy skin’ that mimics the human sense of touch is being developed by scientists.

University of Glasgow’s Professor Ravinder Dahiya has plans to develop ultra-flexible, synthetic Brainy Skin that ‘thinks for itself’.

The super-flexible, hypersensitive skin may one day be used to make more responsive prosthetics for amputees, or to build robots with a sense of touch.

Brainy Skin reacts like human skin, which has its own neurons that respond immediately to touch rather than having to relay the whole message to the brain.

This electronic ‘thinking skin’ is made from silicon based printed neural transistors and graphene – an ultra-thin form of carbon that is only an atom thick, but stronger than steel.

The new version is more powerful, less cumbersome and would work better than earlier prototypes, also developed by Professor Dahiya and his Bendable Electronics and Sensing Technologies (BEST) team at the University’s School of Engineering.

His futuristic research, called neuPRINTSKIN (Neuromorphic Printed Tactile Skin), has just received another £1.5m in funding from the Engineering and Physical Sciences Research Council (EPSRC).

Professor Dahiya said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors that carry signals from the skin to the brain.

“Inspired by real skin, this project will harness the technological advances in electronic engineering to mimic some features of human skin, such as softness, bendability and now, also sense of touch. This skin will not just mimic the morphology of the skin but also its functionality.

“Brainy Skin is critical for the autonomy of robots and for a safe human-robot interaction to meet emerging societal needs such as helping the elderly.”

Synthetic ‘Brainy Skin’ with sense of touch gets £1.5m funding. Photo of Professor Ravinder Dahiya

This latest advance means tactile data is gathered over large areas by the synthetic skin’s computing system rather than sent to the brain for interpretation.

With additional EPSRC funding, which extends Professor Dahiya’s fellowship by another three years, he plans to introduce tactile skin with neuron-like processing. This breakthrough in the tactile sensing research will lead to the first neuromorphic tactile skin, or ‘brainy skin.’

To achieve this, Professor Dahiya will add a new neural layer to the e-skin that he has already developed using printed silicon nanowires.

Professor Dahiya added: “By adding a neural layer underneath the current tactile skin, neuPRINTSKIN will add significant new perspective to the e-skin research, and trigger transformations in several areas such as robotics, prosthetics, artificial intelligence, wearable systems, next-generation computing, and flexible and printed electronics.”

The Engineering and Physical Sciences Research Council (EPSRC) is part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

EPSRC is the main funding body for engineering and physical sciences research in the UK. By investing in research and postgraduate training, the EPSRC is building the knowledge and skills base needed to address the scientific and technological challenges facing the nation.

Its portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research funded by EPSRC has impact across all sectors. It provides a platform for future UK prosperity by contributing to a healthy, connected, resilient, productive nation.

It’s fascinating to note how these pieces of research fit together for wearable technology and health monitoring and creating more responsive robot ‘skin’ and, possibly, prosthetic devices that would allow someone to feel again.

The latest research paper

Getting back to the solar-charging supercapacitors mentioned in the opening, here’s a link to and a citation for the team’s latest research paper,

Flexible self-charging supercapacitor based on graphene-Ag-3D graphene foam electrodes by Libu Manjakkal, Carlos García Núñez, Wenting Dang, Ravinder Dahiya. Nano Energy Volume 51, September 2018, Pages 604-612. DOI: https://doi.org/10.1016/j.nanoen.2018.06.072

This paper is open access.

Prosthetic pain

“Feeling no pain” can be a euphemism for being drunk. However, there are some people for whom it’s not a euphemism; they literally feel no pain, for one reason or another. One such group is amputees, who feel nothing through their prosthetic limbs, and a researcher at Johns Hopkins University (Maryland, US) has found a way for them to feel pain again.

A June 20, 2018 news item on ScienceDaily provides an introduction to the research and to the reason for it,

Amputees often experience the sensation of a “phantom limb” — a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

A June 20, 2018 Johns Hopkins University news release (also on EurekAlert), which originated the news item, explores the research in more depth,

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.

“This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.

The researchers then connected the e-dermis output to the volunteer by using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction to both pain while touching a pointed object and non-pain when touching a round object.

The e-dermis is not sensitive to temperature–for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be used to expand or extend to astronaut gloves and space suits, Osborn says.
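Here’s a rough sketch of that two-channel, receptor-like encoding: one channel graded by pressure (touch), the other firing only for sharp contact (pain). The thresholds and stimulation frequencies are invented for illustration and aren’t the values used in the Science Robotics paper,

```python
# Sketch of a two-receptor encoding in the spirit of the e-dermis: one channel
# behaves like a touch receptor graded by pressure, the other like a pain
# receptor that only fires for sharp contact. All thresholds and stimulation
# frequencies are invented for illustration.

def encode_contact(pressure, sharpness, pain_threshold=0.6):
    """Map a fingertip contact to (touch_hz, pain_hz) stimulation pulse rates."""
    clamped = min(max(pressure, 0.0), 1.0)
    touch_hz = 20.0 + 80.0 * clamped                        # graded with pressure
    pain_hz = 150.0 * max(sharpness - pain_threshold, 0.0)  # only sharp objects "hurt"
    return touch_hz, pain_hz

for label, pressure, sharpness in (("round object", 0.5, 0.1),
                                   ("pointed object", 0.5, 0.9)):
    touch_hz, pain_hz = encode_contact(pressure, sharpness)
    print(f"{label}: touch {touch_hz:.0f} Hz, pain {pain_hz:.0f} Hz")
```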

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering through the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.

Here’s a video about this work,

Sarah Zhang’s June 20, 2018 article for The Atlantic reveals a few more details while covering some of the material in the news release,

Osborn and his team added one more feature to make the prosthetic hand, as he puts it, “more lifelike, more self-aware”: When it grasps something too sharp, it’ll open its fingers and immediately drop it—no human control necessary. The fingers react in just 100 milliseconds, the speed of a human reflex. Existing prosthetic hands have a similar degree of theoretically helpful autonomy: If an object starts slipping, the hand will grasp more tightly. Ideally, users would have a way to override a prosthesis’s reflex, like how you can hold your hand on a stove if you really, really want to. After all, the whole point of having a hand is being able to tell it what to do.
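The reflex Zhang describes, open the fingers within roughly 100 milliseconds of detecting something too sharp unless the user overrides it, amounts to a simple control rule. A hypothetical sketch (only the 100 ms figure comes from the article; the rest is invented),

```python
# Hypothetical reflex rule in the spirit of the behaviour Zhang describes: if
# the grasped object reads as too sharp, the hand releases on its own within a
# reflex window, unless the user has chosen to override it. Only the 100 ms
# figure comes from the article; thresholds and names are invented.

REFLEX_WINDOW_S = 0.100   # human-reflex-scale reaction time cited in the article

def grip_command(sharpness, time_since_detection_s,
                 user_override=False, sharpness_limit=0.8):
    """Return 'release' once a too-sharp contact has persisted past the reflex window."""
    too_sharp = sharpness > sharpness_limit
    if too_sharp and not user_override and time_since_detection_s >= REFLEX_WINDOW_S:
        return "release"
    return "hold"

print(grip_command(sharpness=0.9, time_since_detection_s=0.12))                      # release
print(grip_command(sharpness=0.9, time_since_detection_s=0.12, user_override=True))  # hold
print(grip_command(sharpness=0.3, time_since_detection_s=0.50))                      # hold
```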

Here’s a link to and a citation for the paper,

Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain by Luke E. Osborn, Andrei Dragomir, Joseph L. Betthauser, Christopher L. Hunt, Harrison H. Nguyen, Rahul R. Kaliki, and Nitish V. Thakor. Science Robotics Vol. 3, Issue 19, eaat3818, 20 Jun 2018. DOI: 10.1126/scirobotics.aat3818

This paper is behind a paywall.

Atomic force microscope (AFM) shrunk down to a dime-sized device?

Before getting to the announcement, here’s a little background from Dexter Johnson’s Feb. 21, 2017 posting on his NanoClast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Ever since the 1980s, when Gerd Binnig of IBM first heard that “beautiful noise” made by the tip of the first scanning tunneling microscope (STM) dragging across the surface of an atom, and he later developed the atomic force microscope (AFM), these microscopy tools have been the bedrock of nanotechnology research and development.

AFMs have continued to evolve over the years, and at one time, IBM even looked into using them as the basis of a memory technology in the company’s Millipede project. Despite all this development, AFMs have remained bulky and expensive devices, costing as much as $50,000 [or more].

Now, here’s the announcement in a Feb. 15, 2017 news item on Nanowerk,

Researchers at The University of Texas at Dallas have created an atomic force microscope on a chip, dramatically shrinking the size — and, hopefully, the price tag — of a high-tech device commonly used to characterize material properties.

“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”

A Feb. 15, 2017 University of Texas at Dallas news release, which originated the news item, provides more detail,

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — that’s roughly on the scale of individual molecules.

The basic AFM design consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image.

“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” said Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science. “It can capture features that are very, very small.”

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.

“A classic example of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Dr. Anthony Fowler, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board, about half the size of a credit card, which contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device.

Conventional AFMs operate in various modes. Some map out a sample’s features by maintaining a constant force as the probe tip drags across the surface, while others do so by maintaining a constant distance between the two.

“The problem with using a constant height approach is that the tip is applying varying forces on a sample all the time, which can damage a sample that is very soft,” Fowler said. “Or, if you are scanning a very hard surface, you could wear down the tip.”

The MEMS-based AFM operates in “tapping mode,” which means the cantilever and tip oscillate up and down perpendicular to the sample, and the tip alternately contacts then lifts off from the surface. As the probe moves back and forth across a sample material, a feedback loop maintains the height of that oscillation, ultimately creating an image.

“In tapping mode, as the oscillating cantilever moves across the surface topography, the amplitude of the oscillation wants to change as it interacts with sample,” said Dr. Mohammad Maroufi, a research associate in mechanical engineering and co-author of the paper. “This device creates an image by maintaining the amplitude of oscillation.”
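The tapping-mode idea, adjust the tip height to hold the oscillation amplitude at a setpoint and record those height adjustments as the image, can be sketched as a small proportional-integral feedback loop. This toy version uses a made-up surface, amplitude model and gains; it illustrates the control principle, not the UT Dallas implementation,

```python
# Toy tapping-mode AFM scan: a proportional-integral (PI) controller adjusts
# the z-position so the measured oscillation amplitude stays at its setpoint
# while the tip moves in x; the z corrections themselves form the height image.
# The surface, amplitude model and gains are all invented for illustration.

import math

def surface_height(x_nm):
    """Pretend sample topography, in nanometres."""
    return 2.0 * math.sin(0.05 * x_nm) + (1.5 if 80 <= x_nm <= 120 else 0.0)

def measured_amplitude(free_amplitude_nm, tip_sample_gap_nm):
    """Crude model: the oscillation is damped as the tip-sample gap closes."""
    return min(free_amplitude_nm, max(tip_sample_gap_nm, 0.0))

def scan_line(setpoint_nm=8.0, free_amplitude_nm=10.0, kp=0.4, ki=0.1):
    z = 15.0          # initial tip height above the reference plane
    integral = 0.0
    image = []
    for x in range(200):
        amp = measured_amplitude(free_amplitude_nm, z - surface_height(x))
        error = amp - setpoint_nm          # positive error: tip is too far away
        integral += error
        z -= kp * error + ki * integral    # descend when too far, retract when too close
        image.append(z)                    # z tracks the topography plus a constant offset
    return image

line = scan_line()
steady = line[50:]   # skip the initial settling transient
print("tracked height range:", round(min(steady), 2), "to", round(max(steady), 2), "nm")
```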

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive.

“An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.

“One of the attractive aspects about MEMS is that you can mass produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”

A reduced size and price tag also could expand the AFMs’ utility beyond current scientific applications.

“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”

The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.

“This is one of those technologies where, as they say, ‘If you build it, they will come.’ We anticipate finding many applications as the technology matures,” Moheimani said.

In addition to the UT Dallas researchers, Michael Ruppert, a visiting graduate student from the University of Newcastle in Australia, was a co-author of the journal article. Moheimani was Ruppert’s doctoral advisor.

So, an AFM that could cost as much as $500,000 for a laboratory has been shrunk to this size and become far less expensive,

A MEMS-based atomic force microscope developed by engineers at UT Dallas is about 1 square centimeter in size (top center). Here it is attached to a small printed circuit board that contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. Courtesy: University of Texas at Dallas

Of course, there’s still more work to be done as you’ll note when reading Dexter’s Feb. 21, 2017 posting where he features answers to questions he directed to the researchers.

Here’s a link to and a citation for the paper,

On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach by Michael G. Ruppert, Anthony G. Fowler, Mohammad Maroufi, S. O. Reza Moheimani. IEEE Journal of Microelectromechanical Systems Volume 26, Issue 1, Feb. 2017. DOI: 10.1109/JMEMS.2016.2628890. Date of Publication: 06 December 2016

This paper is behind a paywall.

Feeling with a bionic finger

From what I understand, one of the most difficult aspects of an amputation is the loss of touch, so bravo to the engineers. From a March 8, 2016 news item on ScienceDaily,

An amputee was able to feel smoothness and roughness in real-time with an artificial fingertip that was surgically connected to nerves in his upper arm. Moreover, the nerves of non-amputees can also be stimulated to feel roughness, without the need of surgery, meaning that prosthetic touch for amputees can now be developed and safely tested on intact individuals.

The technology to deliver this sophisticated tactile information was developed by Silvestro Micera and his team at EPFL (Ecole polytechnique fédérale de Lausanne) and SSSA (Scuola Superiore Sant’Anna) together with Calogero Oddo and his team at SSSA. The results, published today in eLife, provide new and accelerated avenues for developing bionic prostheses, enhanced with sensory feedback.

A March 8, 2016 EPFL press release (also on EurekAlert), which originated the news item, provides more information about Sørensen’s experience and about the other tests the research team performed,

“The stimulation felt almost like what I would feel with my hand,” says amputee Dennis Aabo Sørensen about the artificial fingertip connected to his stump. He continues, “I still feel my missing hand, it is always clenched in a fist. I felt the texture sensations at the tip of the index finger of my phantom hand.”

Sørensen is the first person in the world to recognize texture using a bionic fingertip connected to electrodes that were surgically implanted above his stump.

Nerves in Sørensen’s arm were wired to an artificial fingertip equipped with sensors. A machine controlled the movement of the fingertip over different pieces of plastic engraved with different patterns, smooth or rough. As the fingertip moved across the textured plastic, the sensors generated an electrical signal. This signal was translated into a series of electrical spikes, imitating the language of the nervous system, then delivered to the nerves.

Sørensen could distinguish between rough and smooth surfaces 96% of the time.
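The chain described above, a sensor signal translated into a series of electrical spikes that imitate the language of the nervous system, can be illustrated with a generic threshold-crossing (delta) encoder: a spike is emitted each time the fingertip signal changes by a fixed step, so rough textures produce denser spiking than smooth ones. This is a simplification of my own, not the encoder used in the eLife study,

```python
# Generic "sensor signal to spike train" sketch: a delta encoder emits a spike
# each time the fingertip signal rises or falls by a fixed step, so a rough
# texture (fast, large changes) yields denser spiking than a smooth one.
# This is a simplification of my own, not the encoder used in the eLife study.

import math

def delta_encode(signal, step=0.05):
    """Return the sample indices at which the signal has moved by at least `step`."""
    spikes = []
    reference = signal[0]
    for i, value in enumerate(signal[1:], start=1):
        if abs(value - reference) >= step:
            spikes.append(i)
            reference = value
    return spikes

# Fake fingertip readings while sliding over a smooth and a rough surface.
smooth = [0.5 + 0.08 * math.sin(0.05 * i) for i in range(200)]
rough = [0.5 + 0.25 * math.sin(0.60 * i) for i in range(200)]

print("spikes on smooth surface:", len(delta_encode(smooth)))
print("spikes on rough surface:", len(delta_encode(rough)))
```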

In a previous study, Sørensen’s implants were connected to a sensory-enhanced prosthetic hand that allowed him to recognize shape and softness. In this new publication about texture in the journal eLife, the bionic fingertip attains a superior level of touch resolution.

Simulating touch in non-amputees

This same experiment testing coarseness was performed on non-amputees, without the need for surgery. The tactile information was delivered through fine needles that were temporarily attached to the arm’s median nerve through the skin. The non-amputees were able to distinguish roughness in textures 77% of the time.

But does this information about touch from the bionic fingertip really resemble the feeling of touch from a real finger? The scientists tested this by comparing brain-wave activity of the non-amputees, once with the artificial fingertip and then with their own finger. The brain scans collected by an EEG cap on the subject’s head revealed that activated regions in the brain were analogous.

The research demonstrates that the needles relay the information about texture in much the same way as the implanted electrodes, giving scientists new protocols to accelerate for improving touch resolution in prosthetics.

“This study merges fundamental sciences and applied engineering: it provides additional evidence that research in neuroprosthetics can contribute to the neuroscience debate, specifically about the neuronal mechanisms of the human sense of touch,” says Calogero Oddo of the BioRobotics Institute of SSSA. “It will also be translated to other applications such as artificial touch in robotics for surgery, rescue, and manufacturing.”

Here’s a link to and a citation for the paper,

Intraneural stimulation elicits discrimination of textural features by artificial fingertip in intact and amputee humans by Calogero Maria Oddo, Stanisa Raspopovic, Fiorenzo Artoni, Alberto Mazzoni, Giacomo Spigler, Francesco Petrini, Federica Giambattistelli, Fabrizio Vecchio, Francesca Miraglia, Loredana Zollo, Giovanni Di Pino, Domenico Camboni, Maria Chiara Carrozza, Eugenio Guglielmelli, Paolo Maria Rossini, Ugo Faraguna, Silvestro Micera. eLife 2016;5. DOI: 10.7554/eLife.09148. Published March 8, 2016

This paper appears to be open access.

The sense of touch via artificial skin

Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand into activities that rely on it, e.g., how much pressure do you use to grasp a cup? How much strength do you apply when moving an object from one place to another?

For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.

This posting is a 2015 update of sorts featuring the latest e-skin research from Stanford University and Xerox PARC. (Dexter Johnson in an Oct. 15, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] site) provides a good research summary.) For anyone with an appetite for more, there’s this from an Oct. 15, 2015 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs. To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity. Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenetic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.

And, there’s an Oct. 15, 2015 Stanford University news release on EurekAlert describing this work from another perspective,

The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.
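
The passage above describes the skin reporting static pressure as short digital pulses whose rate rises with applied force, Morse code style. As a way of picturing that readout, here is a minimal Python sketch of my own; it is not the researchers’ code, the linear mapping is an assumption, and every function name and constant is purely illustrative.

```python
# A minimal, hypothetical sketch (not the Stanford team's code): the skin
# described above reports static pressure as short digital pulses whose rate
# rises with applied force. The linear mapping and all constants below are
# illustrative assumptions only.

def pulse_frequency_hz(pressure_kpa: float,
                       full_scale_kpa: float = 100.0,
                       min_freq_hz: float = 1.0,
                       max_freq_hz: float = 200.0) -> float:
    """Map applied pressure to an output pulse rate (pulses per second)."""
    pressure_kpa = max(0.0, min(pressure_kpa, full_scale_kpa))
    if pressure_kpa == 0.0:
        return 0.0  # remove all pressure and the pulses cease entirely
    fraction = pressure_kpa / full_scale_kpa
    return min_freq_hz + fraction * (max_freq_hz - min_freq_hz)

# Illustrative readings, from no contact to a firm squeeze
for pressure in (0.0, 2.0, 20.0, 80.0):
    print(f"{pressure:5.1f} kPa -> {pulse_frequency_hz(pressure):6.1f} pulses/s")
```

In the real device the response curve is set by the pyramidal nanotube microstructure rather than a straight line, so the mapping would be calibrated empirically.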

Importing the signal

Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”

Here’s a link to and a citation for the paper,

A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science, 16 October 2015, Vol. 350, no. 6258, pp. 313-316. DOI: 10.1126/science.aaa9306

This paper is behind a paywall.

A wearable book (The Girl Who Was Plugged In) makes you feel the protagonist’s pain

A team of students taking an MIT (Massachusetts Institute of Technology) course called ‘Science Fiction to Science Fabrication’ has created a new category of book: sensory fiction. John Brownlee, in his Feb. 10, 2014 article for Fast Company, describes it this way,

Have you ever felt your pulse quicken when you read a book, or your skin go clammy during a horror story? A new student project out of MIT wants to deepen those sensations. They have created a wearable book that uses inexpensive technology and neuroscientific hacking to create a sort of cyberpunk Neverending Story that blurs the line between the bodies of a reader and protagonist.

Called Sensory Fiction, the project was created by a team of four MIT students–Felix Heibeck, Alexis Hope, Julie Legault, and Sophia Brueckner …

Here’s the MIT video demonstrating the book in use (from the course’s sensory fiction page),

Here’s how the students have described their sensory book, from the project page,

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.

Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.

Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree, showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark, damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)
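
To make the passage-triggered feedback just described a little more concrete, here is a hypothetical Python sketch of how cues for the cover LEDs, heartbeat vibration, Peltier heater, and compression airbags might be keyed to pages; it is not the MIT team’s code, and every name and value in it is an illustrative assumption.

```python
# A hypothetical sketch (not the MIT team's code) of how passage-triggered
# feedback in the wearable might be represented: each cue pairs a page with
# settings for the cover LEDs, heartbeat vibration, Peltier heater, and
# compression airbags. Every name and value here is an illustrative assumption.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FeedbackCue:
    page: int                      # page that triggers the cue
    led_rgb: Tuple[int, int, int]  # ambient light on the 150-LED cover
    heartbeat_bpm: int             # pacing of the heartbeat vibration
    skin_temp_c: float             # Peltier setpoint at the collarbone
    airbag_kpa: float              # compression-vest pressure

CUES = [
    FeedbackCue(page=12, led_rgb=(255, 200, 80), heartbeat_bpm=70,
                skin_temp_c=34.0, airbag_kpa=2.0),   # e.g., Barcelona sunshine
    FeedbackCue(page=87, led_rgb=(20, 20, 60), heartbeat_bpm=110,
                skin_temp_c=30.0, airbag_kpa=8.0),   # e.g., the dark, damp cellar
]

def cue_for_page(page: int) -> Optional[FeedbackCue]:
    """Return the most recently triggered cue at or before the current page."""
    active = [cue for cue in CUES if cue.page <= page]
    return max(active, key=lambda cue: cue.page) if active else None

print(cue_for_page(90))  # the reader is on page 90, so the cellar cue applies
```

In the actual prototype the book senses which page the reader is on; the lookup above simply stands in for that sensing step.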

One of the earliest stories about this project was a Jan. 28, 2014 piece written by Alison Flood for the Guardian where she explains how vibration, etc. are used to convey/stimulate the reader’s sensations and emotions,

MIT scientists have created a ‘wearable’ book using temperature and lighting to mimic the experiences of a book’s protagonist

The book, explain the researchers, senses the page a reader is on, and changes ambient lighting and vibrations to “match the mood”. A series of straps form a vest which contains a “heartbeat and shiver simulator”, a body compression system, temperature controls and sound.

“Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,” say the academics.

Flood goes on to illuminate how science fiction has explored the notion of ‘sensory books’ (Note: Links have been removed) and how at least one science fiction novelist is responding to this new type of book,

The Arthur C Clarke award-winning science fiction novelist Chris Beckett wrote about a similar invention in his novel Marcher, although his “sensory” experience comes in the form of a video game:

Adam Roberts, another prize-winning science fiction writer, found the idea of “sensory” fiction “amazing”, but also “infantilising, like reverting to those sorts of books we buy for toddlers that have buttons in them to generate relevant sound-effects”.

Elise Hu in her Feb. 6, 2014 posting on the US National Public Radio (NPR) blog, All Tech Considered, takes a different approach to the topic,

The prototype does work, but it won’t be manufactured anytime soon. The creation was only “meant to provoke discussion,” Hope says. It was put together as part of a class in which designers read science fiction and make functional prototypes to explore the ideas in the books.

If it ever does become more widely available, sensory fiction could have an unintended consequence. When I shared this idea with NPR editor Ellen McDonnell, she quipped, “If these device things are helping ‘put you there,’ it just means the writing won’t have to be as good.”

I hope the students are successful at provoking discussion; so far, they seem to have primarily provoked interest.

As for my two cents, in a world where making personal connections seems increasingly difficult (i.e., people are becoming more isolated), sensory fiction that stimulates people into feeling something as they read a book seems a logical progression. It’s also interesting to me that all of the focus is on the reader, with no mention of what writers might produce (other than McDonnell’s cheeky comment) if they knew their books were going to be given the ‘sensory treatment’. One more musing: I wonder if there might be a difference in how males and females, writers and readers, respond to sensory fiction.

Now for a bit of wordplay. Feeling can be emotional but, in English, it can also refer to touch, and researchers at MIT have also been investigating new touch-oriented media. You can read more about that project in my Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT) posting dated Nov. 13, 2013. One final thought: I am intrigued by how interested scientists at MIT seem to be in feelings of all kinds.

Do you hear what I hear?

It’s coming up to Christmas time and, as my thoughts turn to the music, Stanford University (California, US) researchers are focused on hearing and touch (the two are related), according to a Dec. 4, 2013 news item on Nanowerk,

Much of what is known about sensory touch and hearing cells is based on indirect observation. Scientists know that these exceptionally tiny cells are sensitive to changes in force and pressure. But to truly understand how they function, scientists must be able to manipulate them directly. Now, Stanford scientists are developing a set of tools that are small enough to stimulate an individual nerve or group of nerves, but also fast and flexible enough to mimic a realistic range of forces.

The Dec. 3, 2013 Stanford Report article by Cynthia McKelvey, which originated the news item, provides more detail about hearing and the problem the researchers are attempting to solve,

Our ability to interpret sound is largely dependent on bundles of thousands of tiny hair cells that get their name from the hair-like projections on their top surfaces. As sound waves vibrate the bundles, they force proteins in the cells’ surfaces to open and allow electrically charged molecules, called ions, to flow into the cells. The ions stimulate each hair cell, allowing it to transfer information from the sound wave to the brain. Hair bundles are more sensitive to particular frequencies of sound, which allows us to tell the difference between a siren and a subwoofer.

People with damaged or congenital defects in these delicate hair cells suffer from severe, irreversible hearing loss. Scientists remain unsure how to treat this form of hearing loss because they do not know how to repair or replace a damaged hair cell. Physical manipulation of the cells is key to exploring the fine details of how they function. This new probe is the first tool nimble enough to do it.
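
For readers who like a number to hang on to, hair-cell transduction of the kind described above is often summarised with a two-state Boltzmann curve, in which the fraction of open ion channels climbs steeply with bundle deflection and drives the current entering the cell. The toy Python model below is my own illustration, not part of the Stanford work, and all of its constants are assumptions.

```python
# A toy model of my own (not from the Stanford work): hair-cell
# mechanotransduction is often summarised with a two-state Boltzmann curve in
# which the fraction of open ion channels rises steeply with bundle deflection,
# and the resulting current stimulates the cell. All constants here are
# illustrative assumptions.

import math

def open_fraction(deflection_nm: float,
                  half_point_nm: float = 50.0,
                  slope_nm: float = 15.0) -> float:
    """Fraction of transduction channels open at a given bundle deflection."""
    return 1.0 / (1.0 + math.exp(-(deflection_nm - half_point_nm) / slope_nm))

def receptor_current_pa(deflection_nm: float,
                        max_current_pa: float = 300.0) -> float:
    """Ion current into the hair cell, proportional to the open fraction."""
    return max_current_pa * open_fraction(deflection_nm)

for deflection in (0, 25, 50, 100):
    print(f"{deflection:4d} nm deflection -> {receptor_current_pa(deflection):6.1f} pA")
```

The point of the Stanford probe is precisely to apply and measure such tiny, controlled deflections so that curves like this can be pinned down experimentally.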

The article also goes on to describe the ‘nano’ probe,

The new force probe represents several advantages over traditional glass force probes. At 300 nanometers thick, Pruitt’s [Beth Pruitt, an associate professor of mechanical engineering] probe is just three-thousandths the width of a human hair. Made of flexible silicon, the probe can mimic a much wider range of sound wave frequencies than rigid glass probes, making it more practical for studying hearing. The probe also measures the force it exerts on hair cells as it pushes, a new achievement for high-speed force probes at such small sizes.

Manipulating the probe requires a gentle touch, said Pruitt’s collaborator, Anthony Ricci, a professor of otolaryngology at the Stanford School of Medicine. The tissue samples – in this case, hair cells from a rat’s ear – sit under a microscope on a stage floating on a cushion of air that keeps it isolated from vibrations.

The probe is controlled using three dials that function similarly to an Etch-a-Sketch. The first step of the experiment involves connecting a tiny, delicate glass electrode to the body of a single hair cell.

Using a similar manipulator, Ricci and his team then press the force probe on a single hair cell, and the glass electrode records the changes in the cell’s electrical output. Pruitt and Ricci say that understanding how physical changes prompt electrical responses in hair cells can lead to a better understanding of how people lose their hearing following damage to the hair cells.

The force probe has the potential to catalyze future research on sensory science, Ricci said.

Up to now, limits in technology have held scientists back from understanding important functions such as hearing, touch, and balance. Like hair cells in the ear, cells involved in touch and balance react to the flexing and stretching of their cell membranes. The force probe can be used to study those cells in the same manner that Pruitt and Ricci are using it to study hair cells.

Understanding the mechanics of how cells register these sensory inputs could lead to innovative new treatments and prosthetics. For example, Pruitt and Ricci think their research could help bioengineers build a better hair cell for people with impaired hearing from damage to their natural hair cells.

Stanford has produced a video about this work,

I find it fascinating that hearing and touch are related although I haven’t yet seen anything that describes or explains the relationship. As for anyone hoping for a Christmas carol, I think I’m going to hold off until later in the season.