Tag Archives: University of Utah

Touchy robots and prosthetics

I have briefly speculated about the importance of touch elsewhere (see my July 19, 2019 posting regarding BlocKit and blockchain; scroll down about 50% of the way) but this upcoming news bit and the one following it put a different spin on the importance of touch.

Exceptional sense of touch

Robots need a sense of touch to perform their tasks and a July 18, 2019 National University of Singapore press release (also on EurekAlert) announces work on an improved sense of touch,

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, that of human skin, thanks to the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS).

The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from the Department of Materials Science and Engineering at the NUS Faculty of Engineering, was first reported in prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hope of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. Like the human sensory nervous system, the ACES electronic nervous system detects signals, but unlike the nerve bundles in human skin, it is made up of a network of sensors connected via a single electrical conductor. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Department of Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology (iHealthTech), N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems (HiFES) programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the interconnection systems used in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor, with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between a sensor and the conductor, making them less vulnerable to damage.
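The press release does not spell out the coding scheme, but the core idea, many sensors sharing one conductor while each event remains individually identifiable, can be sketched with a toy matched-filter model. Everything below (the bipolar signatures, sizes and threshold) is invented for illustration and is not the actual ACES protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, SIG_LEN = 8, 128

# Each sensor gets a unique bipolar pulse signature. Because all
# sensors share one conductor, simultaneous events simply sum on the
# line; the receiver recovers which sensors fired by matched
# filtering against each known signature.
signatures = rng.choice([-1.0, 1.0], size=(N_SENSORS, SIG_LEN))

def transmit(active_sensors):
    """Superimpose the signatures of all sensors firing at the same time."""
    line = np.zeros(SIG_LEN)
    for s in active_sensors:
        line += signatures[s]
    return line

def decode(line, threshold=0.5):
    """Matched filter: correlate the shared line with every signature."""
    scores = signatures @ line / SIG_LEN
    return {s for s in range(N_SENSORS) if scores[s] > threshold}
```

In this sketch, random bipolar signatures are nearly orthogonal, so two sensors firing at once are still separable from the summed line signal, and cutting any one sensor's wire does not affect the others, which echoes the robustness claim above.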

Smart electronic skins for robots and prosthetics

ACES’ simple wiring system and remarkable responsiveness even with increasing numbers of sensors are key characteristics that will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

For those who like videos, the researchers have prepared this,

Here’s a link to and a citation for the paper,

A neuro-inspired artificial peripheral nervous system for scalable electronic skins by Wang Wei Lee, Yu Jun Tan, Haicheng Yao, Si Li, Hian Hian See, Matthew Hon, Kian Ann Ng, Betty Xiong, John S. Ho and Benjamin C. K. Tee. Science Robotics Vol. 4, Issue 32, eaax2198, 31 July 2019. DOI: 10.1126/scirobotics.aax2198. First published online: 17 July 2019.

This paper is behind a paywall.

Picking up a grape and holding his wife’s hand

This story comes from Canadian Broadcasting Corporation (CBC) Radio; a six-minute audio story is embedded in the text of a July 25, 2019 CBC Radio ‘As It Happens’ article by Sheena Goodyear,

The West Valley City, Utah, real estate agent [Keven Walgamott] lost his left hand in an electrical accident 17 years ago. Since then, he’s tried out a few different prosthetic limbs, but always found them too clunky and uncomfortable.

Then he decided to work with the University of Utah in 2016 to test out new prosthetic technology that mimics the sensation of human touch, allowing Walgamott to perform delicate tasks with precision — including shaking his wife’s hand. 

“I extended my left hand, she came and extended hers, and we were able to feel each other with the left hand for the first time in 13 years, and it was just a marvellous and wonderful experience,” Walgamott told As It Happens guest host Megan Williams. 

Walgamott, one of seven participants in the University of Utah study, was able to use an advanced prosthetic hand called the LUKE Arm to pick up an egg without cracking it, pluck a single grape from a bunch, hammer a nail, take a ring on and off his finger, fit a pillowcase over a pillow and more. 

While performing the tasks, Walgamott was able to actually feel the items he was holding and correctly gauge the amount of pressure he needed to exert — mimicking a process the human brain does automatically.

“I was able to feel something in each of my fingers,” he said. “What I feel, I guess the easiest way to explain it, is little electrical shocks.”

Those shocks — which he describes as a kind of a tingling sensation — intensify as he tightens his grip.

“Different variations of the intensity of the electricity as I move my fingers around and as I touch things,” he said. 

To make that [sense of touch] happen, the researchers implanted electrodes into the nerves on Walgamott’s forearm, allowing his brain to communicate with his prosthetic through a computer outside his body. That means he can move the hand just by thinking about it.

But those signals also work in reverse.

The team attached sensors to the hand of a LUKE Arm. Those sensors detect touch and positioning, and send that information to the electrodes so it can be interpreted by the brain.

For Walgamott, performing a series of menial tasks as a team of scientists recorded his progress was “fun to do.”

“I’d forgotten how well two hands work,” he said. “That was pretty cool.”

But it was also a huge relief from the phantom limb pain he has experienced since the accident, which he describes as a “burning sensation” in the place where his hand used to be.

A July 24, 2019 University of Utah news release (also on EurekAlert) provides more detail about the research,

Keven Walgamott had a good “feeling” about picking up the egg without crushing it.

What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by U biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (so named after the robotic hand that Luke Skywalker got in “The Empire Strikes Back”) to mimic the way a human hand feels objects by sending the appropriate signals to the brain. Their findings were published in a new paper co-authored by U biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark and other colleagues in the latest edition of the journal Science Robotics. A copy of the paper may be obtained by emailing robopak@aaas.org.

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the U, was able to pluck grapes without crushing them, pick up an egg without cracking it and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made mostly of metal motors and parts, with a clear silicone “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the U’s team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by U biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array. The array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. Performing tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert, because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.
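As a rough illustration of that burst-then-taper behaviour (a hypothetical model, not the Utah team's actual encoding), a pressure trace can be mapped to a stimulation rate that combines a sustained term proportional to pressure with a decaying transient driven by the positive rate of change. All constants below are invented.

```python
import numpy as np

def biomimetic_rate(pressure, dt=0.001, k_sustained=50.0,
                    k_onset=400.0, tau=0.05):
    """Map a pressure trace (N) to a stimulation rate (pulses/s).

    The rate is a sustained term plus an onset transient that jumps
    when pressure rises and then decays with time constant tau,
    mimicking the biological burst of impulses that tapers off.
    """
    rate = np.zeros_like(pressure)
    transient = 0.0
    prev = pressure[0]
    for i, p in enumerate(pressure):
        dp = max(p - prev, 0.0) / dt            # positive rate of change only
        transient += dt * (-transient / tau)    # exponential decay (Euler step)
        transient += k_onset * dp * dt          # kick on contact onset
        rate[i] = k_sustained * p + transient
        prev = p
    return rate
```

For a step in pressure, the encoded rate spikes at the moment of contact and then relaxes toward the sustained level, qualitatively matching the burst-then-taper described above.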

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the U’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Here’s a link to and a citation for the paper,

Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand by J. A. George, D. T. Kluger, T. S. Davis, S. M. Wendelken, E. V. Okorokova, Q. He, C. C. Duncan, D. T. Hutchinson, Z. C. Thumser, D. T. Beckler, P. D. Marasco, S. J. Bensmaia and G. A. Clark. Science Robotics Vol. 4, Issue 32, eaax2352, 31 July 2019. DOI: 10.1126/scirobotics.aax2352. First published online: 24 July 2019.

This paper is definitely behind a paywall.

The University of Utah researchers have produced a video highlighting their work,

Quadriplegic man reanimates a limb with implanted brain-recording and muscle-stimulating systems

It took me a few minutes to figure out why this item about a quadriplegic (also known as tetraplegic) man is news. After all, I have a May 17, 2012 posting which features a video and information about a quadri(tetra)plegic woman who was drinking her first cup of coffee, independently, in many years. The difference is that she was using an external robotic arm and this man is using *his own arm*,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies.

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Holding a makeshift handle pierced through a dry sponge, Kochevar scratched the side of his nose with the sponge. He scooped forkfuls of mashed potatoes from a bowl—perhaps his top goal—and savored each mouthful.

“For somebody who’s been injured eight years and couldn’t move, being able to move just that little bit is awesome to me,” said Kochevar, 56, of Cleveland. “It’s better than I thought it would be.”

Kochevar is the focal point of research led by Case Western Reserve University, the Cleveland Functional Electrical Stimulation (FES) Center at the Louis Stokes Cleveland VA Medical Center and University Hospitals Cleveland Medical Center (UH). A study of the work was published in The Lancet on March 28 [2017] at 6:30 p.m. U.S. Eastern time.

“He’s really breaking ground for the spinal cord injury community,” said Bob Kirsch, chair of Case Western Reserve’s Department of Biomedical Engineering, executive director of the FES Center and principal investigator (PI) and senior author of the research. “This is a major step toward restoring some independence.”

When asked, people with quadriplegia say their first priority is to scratch an itch, feed themselves or perform other simple functions with their arm and hand, instead of relying on caregivers.

“By taking the brain signals generated when Bill attempts to move, and using them to control the stimulation of his arm and hand, he was able to perform personal functions that were important to him,” said Bolu Ajiboye, assistant professor of biomedical engineering and lead study author.

Technology and training

The research with Kochevar is part of the ongoing BrainGate2* pilot clinical trial being conducted by a consortium of academic and VA institutions assessing the safety and feasibility of the implanted brain-computer interface (BCI) system in people with paralysis. Other investigational BrainGate research has shown that people with paralysis can control a cursor on a computer screen or a robotic arm (braingate.org).

“Every day, most of us take for granted that when we will to move, we can move any part of our body with precision and control in multiple directions and those with traumatic spinal cord injury or any other form of paralysis cannot,” said Benjamin Walter, associate professor of neurology at Case Western Reserve School of Medicine, clinical PI of the Cleveland BrainGate2 trial and medical director of the Deep Brain Stimulation Program at UH Cleveland Medical Center.

“The ultimate hope of any of these individuals is to restore this function,” Walter said. “By restoring the communication of the will to move from the brain directly to the body this work will hopefully begin to restore the hope of millions of paralyzed individuals that someday they will be able to move freely again.”

Jonathan Miller, assistant professor of neurosurgery at Case Western Reserve School of Medicine and director of the Functional and Restorative Neurosurgery Center at UH, led a team of surgeons who implanted two 96-channel electrode arrays—each about the size of a baby aspirin—in Kochevar’s motor cortex, on the surface of the brain.

The arrays record brain signals created when Kochevar imagines movement of his own arm and hand. The brain-computer interface extracts information from the brain signals about what movements he intends to make, then passes the information to command the electrical stimulation system.

To prepare him to use his arm again, Kochevar first learned how to use his brain signals to move a virtual-reality arm on a computer screen.

“He was able to do it within a few minutes,” Kirsch said. “The code was still in his brain.”

As Kochevar’s ability to move the virtual arm improved through four months of training, the researchers believed he would be capable of controlling his own arm and hand.

Miller then led a team that implanted the FES systems’ 36 electrodes that animate muscles in the upper and lower arm.

The BCI decodes the recorded brain signals into the intended movement command, which is then converted by the FES system into patterns of electrical pulses.

The pulses sent through the FES electrodes trigger the muscles controlling Kochevar’s hand, wrist, arm, elbow and shoulder. To overcome gravity that would otherwise prevent him from raising his arm and reaching, Kochevar uses a mobile arm support, which is also under his brain’s control.
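The loop described above (record brain signals, decode them into an intended movement, convert that into stimulation pulse patterns) can be caricatured in a few lines. This is a hypothetical sketch, not the study's decoder: the weights are random placeholders that would in practice be trained on recorded brain signals, and the channel counts simply mirror the 96-electrode recording arrays and 36 stimulating electrodes mentioned in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS, N_DOF, N_STIM_ELECTRODES = 96, 2, 36

# Placeholder weights; a real decoder would be fit to recorded data.
W = rng.normal(size=(N_DOF, N_CHANNELS)) * 0.1        # firing rates -> command
M = np.abs(rng.normal(size=(N_STIM_ELECTRODES, N_DOF)))  # command -> stimulation

def decode_intent(firing_rates):
    """Linear decode: per-channel firing rates (Hz) -> movement command."""
    return W @ firing_rates

def stimulation_pattern(command, max_pulse_us=200.0):
    """Convert a movement command into clipped per-electrode pulse widths."""
    pw = M @ np.clip(command, 0.0, None)   # only positive drive stimulates
    return np.clip(pw, 0.0, max_pulse_us)  # respect a safety ceiling
```

Even this toy version makes the division of labour clear: the decoder lives on the brain side of the injury, the stimulation map on the muscle side, and the spinal cord is bypassed entirely.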

New Capabilities

Eight years of muscle atrophy required rehabilitation. The researchers exercised Kochevar’s arm and hand with cyclical electrical stimulation patterns. Over 45 weeks, his strength, range of motion and endurance improved. As he practiced movements, the researchers adjusted stimulation patterns to further his abilities.

Kochevar can make each joint in his right arm move individually. Or, just by thinking about a task such as feeding himself or getting a drink, the muscles are activated in a coordinated fashion.

When asked to describe how he commanded the arm movements, Kochevar told investigators, “I’m making it move without having to really concentrate hard at it…I just think ‘out’…and it goes.”

Kochevar is fitted with temporarily implanted FES technology that has a track record of reliable use in people. Together, the BCI and FES system represent an early feasibility demonstration that gives the research team insights into the potential future benefit of the combined system.

Advances needed to make the combined technology usable outside of a lab are not far from reality, the researchers say. Work is underway to make the brain implant wireless, and the investigators are improving decoding and stimulation patterns needed to make movements more precise. Fully implantable FES systems have already been developed and are also being tested in separate clinical research.

Kochevar welcomes new technology—even if it requires more surgery—that will enable him to move better. “This won’t replace caregivers,” he said. “But, in the long term, people will be able, in a limited way, to do more for themselves.”

There is more about the research in a March 29, 2017 article by Sarah Boseley for The Guardian,

Bill Kochevar, 53, has had electrical implants in the motor cortex of his brain and sensors inserted in his forearm, which allow the muscles of his arm and hand to be stimulated in response to signals from his brain, decoded by computer. After eight years, he is able to drink and feed himself without assistance.

“I think about what I want to do and the system does it for me,” Kochevar told the Guardian. “It’s not a lot of thinking about it. When I want to do something, my brain does what it does.”

The experimental technology, pioneered by the Case Western Reserve University in Cleveland, Ohio, is the first in the world to restore brain-controlled reaching and grasping in a person with complete paralysis.

For now, the process is relatively slow, but the scientists behind the breakthrough say this is proof of concept and that they hope to streamline the technology until it becomes a routine treatment for people with paralysis. In the future, they say, it will also be wireless and the electrical arrays and sensors will all be implanted under the skin and invisible.

A March 28, 2017 Lancet news release on EurekAlert provides a little more technical insight into the research and Kochevar’s efforts,

Although only tested with one participant, the study is a major advance and the first to restore brain-controlled reaching and grasping in a person with complete paralysis. The technology, which is only for experimental use in the USA, circumvents rather than repairs spinal injuries, meaning the participant relies on the device being implanted and switched on to move.

“Our research is at an early stage, but we believe that this neuro-prosthesis could offer individuals with paralysis the possibility of regaining arm and hand functions to perform day-to-day activities, offering them greater independence,” said lead author Dr Bolu Ajiboye, Case Western Reserve University, USA. “So far it has helped a man with tetraplegia to reach and grasp, meaning he could feed himself and drink. With further development, we believe the technology could give more accurate control, allowing a wider range of actions, which could begin to transform the lives of people living with paralysis.” [1]

Previous research has used similar elements of the neuro-prosthesis. For example, a brain-computer interface linked to electrodes on the skin has helped a person with less severe paralysis open and close his hand, while other studies have allowed participants to control a robotic arm using their brain signals. However, this is the first to restore reaching and grasping via the system in a person with a chronic spinal cord injury.

In this study, a 53-year-old man who had been paralysed below the shoulders for eight years underwent surgery to have the neuro-prosthesis fitted.

This involved brain surgery to place sensors in the motor cortex area of his brain responsible for hand movement – creating a brain-computer interface that learnt which movements his brain signals were instructing for. This initial stage took four months and included training using a virtual reality arm.

He then underwent another procedure placing 36 muscle stimulating electrodes into his upper and lower arm, including four that helped restore finger and thumb, wrist, elbow and shoulder movements. These were switched on 17 days after the procedure, and began stimulating the muscles for eight hours a week over 18 weeks to improve strength, movement and reduce muscle fatigue.

The researchers then wired the brain-computer interface to the electrical stimulators in his arm, using a decoder (mathematical algorithm) to translate his brain signals into commands for the electrodes in his arm. The electrodes stimulated the muscles to produce contractions, helping the participant intuitively complete the movements he was thinking of. The system also involved an arm support to stop gravity simply pulling his arm down.

During his training, the participant described how he controlled the neuro-prosthesis: “It’s probably a good thing that I’m making it move without having to really concentrate hard at it. I just think ‘out’ and it just goes.”

After 12 months of having the neuro-prosthesis fitted, the participant was asked to complete day-to-day tasks, including drinking a cup of coffee and feeding himself. First of all, he observed while his arm completed the action under computer control. During this, he thought about making the same movement so that the system could recognise the corresponding brain signals. The two systems were then linked and he was able to use it to drink a coffee and feed himself.

He successfully drank in 11 out of 12 attempts, and it took him roughly 20-40 seconds to complete the task. When feeding himself, he did so multiple times – scooping forkfuls of food and navigating his hand to his mouth to take several bites.

“Although similar systems have been used before, none of them have been as easy to adopt for day-to-day use and they have not been able to restore both reaching and grasping actions,” said Dr Ajiboye. “Our system builds on muscle stimulating electrode technology that is already available and will continue to improve with the development of new fully implanted and wireless brain-computer interface systems. This could lead to enhanced performance of the neuro-prosthesis with better speed, precision and control.” [1]

At the time of the study, the participant had had the neuro-prosthesis implanted for almost two years (717 days) and in this time experienced four minor, non-serious adverse events which were treated and resolved.

Despite its achievements, the neuro-prosthesis still had some limitations, including that movements made using it were slower and less accurate than those made using the virtual reality arm the participant used for training. When using the technology, the participant also needed to watch his arm as he lost his sense of proprioception – the ability to intuitively sense the position and movement of limbs – as a result of the paralysis.

Writing in a linked Comment, Dr Steve Perlmutter, University of Washington, USA, said: “The goal is futuristic: a paralysed individual thinks about moving her arm as if her brain and muscles were not disconnected, and implanted technology seamlessly executes the desired movement… This study is groundbreaking as the first report of a person executing functional, multi-joint movements of a paralysed limb with a motor neuro-prosthesis. However, this treatment is not nearly ready for use outside the lab. The movements were rough and slow and required continuous visual feedback, as is the case for most available brain-machine interfaces, and had restricted range due to the use of a motorised device to assist shoulder movements… Thus, the study is a proof-of-principle demonstration of what is possible, rather than a fundamental advance in neuro-prosthetic concepts or technology. But it is an exciting demonstration nonetheless, and the future of motor neuro-prosthetics to overcome paralysis is brighter.”

[1] Quote comes directly from the author and cannot be found in the text of the Article.

Here’s a link to and a citation for the paper,

Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration by A Bolu Ajiboye, Francis R Willett, Daniel R Young, William D Memberg, Brian A Murphy, Jonathan P Miller, Benjamin L Walter, Jennifer A Sweet, Harry A Hoyen, Michael W Keith, Prof P Hunter Peckham, John D Simeral, Prof John P Donoghue, Prof Leigh R Hochberg and Prof Robert F Kirsch. The Lancet. DOI: 10.1016/S0140-6736(17)30601-3. Published: 28 March 2017 [online?]

This paper is behind a paywall.

For anyone who’s interested, you can find the BrainGate website here.

*I initially misidentified the nature of the achievement and stated that Kochevar used a “robotic arm, which is attached to his body” when it was his own reanimated arm. Corrected on April 25, 2017.

Sensing fuel leaks and fuel-based explosives with a nanofibril composite

A March 28, 2016 news item on Nanowerk highlights some research from the University of Utah (US),

Alkane fuel is a key ingredient in combustible material such as gasoline, airplane fuel, oil — even a homemade bomb. Yet it’s difficult to detect and there are no portable scanners available that can sniff out the odorless and colorless vapor.

But University of Utah engineers have developed a new type of fiber material for a handheld scanner that can detect small traces of alkane fuel vapor, a valuable advancement that could be an early-warning signal for leaks in an oil pipeline, an airliner, or for locating a terrorist’s explosive.

A March 25, 2016 University of Utah news release, which originated the news item, provides a little more detail,

Currently, there are no small, portable chemical sensors to detect alkane fuel vapor because it is not chemically reactive. The conventional way to detect it is with a large oven-sized instrument in a lab.

“It’s not mobile and very heavy,” Zang [Ling Zang, University of Utah materials science and engineering professor] says of the larger instrument. “There’s no way it can be used in the field. Imagine trying to detect the leak from a gas valve or on the pipelines. You ought to have something portable.”

So Zang’s team developed a type of fiber composite that involves two nanofibers transferring electrons from one to the other.

That kind of interaction would then signal the detector that the alkane vapor is present. Vaporsens, a University of Utah spinoff company, has designed a prototype of the handheld detector with an array of 16 sensor materials that will be able to identify a broad range of chemicals including explosives. This new composite material will be incorporated into the sensor array to include the detection of alkanes. Vaporsens plans to introduce the device to the market in about a year and a half, says Zang, who is the company’s chief science officer.
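For readers curious how a 16-material sensor array can identify many different chemicals, the usual approach is to compare the array’s response pattern against a library of known “fingerprints.” Here’s a minimal sketch of that idea; to be clear, this is not Vaporsens’ actual method or code, and every name and number in it is hypothetical.

```python
# Illustrative sketch (not Vaporsens code): identify a vapor by matching a
# sensor-array response against known "fingerprints". All values hypothetical.

def identify_vapor(response, fingerprints):
    """Return the label whose fingerprint is closest (Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(fingerprints, key=lambda label: distance(response, fingerprints[label]))

# Hypothetical library of normalized responses for a 4-sensor subset
fingerprints = {
    "alkane":  [0.9, 0.1, 0.3, 0.0],
    "TNT":     [0.1, 0.8, 0.7, 0.2],
    "ambient": [0.0, 0.0, 0.1, 0.0],
}

reading = [0.85, 0.15, 0.25, 0.05]  # simulated reading near the alkane pattern
print(identify_vapor(reading, fingerprints))  # alkane
```

With 16 sensor materials instead of 4, each fingerprint simply becomes a 16-element vector; the matching logic stays the same.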

Such a small sensor device that can detect alkane vapor will benefit three main categories:

  • Oil pipelines. If leaks from pipelines are not detected early enough, the resulting leaked oil could contaminate the local environment and water sources. Typically, only large leaks in pipelines can be detected if there is a drop in pressure. Zang’s portable sensor — when placed along the pipeline — could detect much smaller leaks before they become bigger.
  • Airplane fuel tanks. Fuel for aircraft is stored in removable “bladders” made of flexible fabric. The only way a leak can be detected is by seeing the dyed fuel seeping from the plane and then removing the bladder to inspect it. Zang’s sensors could be placed around the bladder to warn a pilot if a leak is occurring in real time and where it is located.
  • Security. The scanner will be designed to locate the presence of explosives such as bombs at airports or in other buildings. Many explosives, such as the bomb used in the Oklahoma City bombing in 1995, use fuel oils like diesel as one of their major components. These fuel oils are forms of alkane.

The research was funded by the Department of Homeland Security, National Science Foundation and NASA. The lead author of the paper is University of Utah materials science and engineering doctoral student Chen Wang, and [Benjamin] Bunes is the co-author.

Here’s a link to and a citation for the paper,

Interfacial Donor–Acceptor Nanofibril Composites for Selective Alkane Vapor Detection by Chen Wang, Benjamin R. Bunes, Miao Xu, Na Wu, Xiaomei Yang, Dustin E. Gross, and Ling Zang. ACS Sens DOI: 10.1021/acssensors.6b00018 Publication Date (Web): March 09, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Bomb-sniffing and other sniffing possibilities from Utah (US state)

A Nov. 4, 2014 news item on Phys.org features some research in Utah on the use of carbon nanotubes for sensing devices,

University of Utah engineers have developed a new type of carbon nanotube material for handheld sensors that will be quicker and better at sniffing out explosives, deadly gases and illegal drugs.

A carbon nanotube is a cylindrical structure consisting of a hexagonal, or six-sided, array of carbon atoms rolled up into a tube. Carbon nanotubes are known for their strength and high electrical conductivity and are used in products from baseball bats and other sports equipment to lithium-ion batteries and touchscreen computer displays.

Vaporsens, a university spin-off company, plans to build a prototype handheld sensor by year’s end and produce the first commercial scanners early next year, says co-founder Ling Zang, a professor of materials science and engineering and senior author of a study of the technology published online Nov. 4 [2014] in the journal Advanced Materials.

The new kind of nanotubes also could lead to flexible solar panels that can be rolled up and stored or even “painted” on clothing such as a jacket, he adds.

Here’s Ling Zang holding a prototype of the device,

Ling Zang, a University of Utah professor of materials science and engineering, holds a prototype detector that uses a new type of carbon nanotube material for use in handheld scanners to detect explosives, toxic chemicals and illegal drugs. Zang and colleagues developed the new material, which will make such scanners quicker and more sensitive than today’s standard detection devices. Ling’s spinoff company, Vaporsens, plans to produce commercial versions of the new kind of scanner early next year. Courtesy: University of Utah

A Nov. 4, 2014 University of Utah news release (also on EurekAlert), which originated the news item, provides more detail about the research,

Zang and his team found a way to break up bundles of the carbon nanotubes with a polymer and then deposit a microscopic amount on electrodes in a prototype handheld scanner that can detect toxic gases such as sarin or chlorine, or explosives such as TNT.

When the sensor detects molecules from an explosive, deadly gas or drugs such as methamphetamine, they alter the electrical current through the nanotube materials, signaling the presence of any of those substances, Zang says.

“You can apply voltage between the electrodes and monitor the current through the nanotube,” says Zang, a professor with USTAR, the Utah Science Technology and Research economic development initiative. “If you have explosives or toxic chemicals caught by the nanotube, you will see an increase or decrease in the current.”

By modifying the surface of the nanotubes with a polymer, the material can be tuned to detect any of more than a dozen explosives, including homemade bombs, and about two dozen different toxic gases, says Zang. The technology also can be applied to existing detectors or airport scanners used to sense explosives or chemical threats.
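The detection principle Zang describes, apply a voltage, monitor the current through the nanotube film, and flag any significant rise or drop from the clean-air baseline, can be sketched in a few lines. This is only an illustration of the logic; the currents and the 5% threshold below are hypothetical, not values from the study.

```python
# Minimal sketch of the detection principle quoted above: a capture event
# shows up as a deviation of the measured current from a clean-air baseline.
# All numbers are hypothetical.

def detect_binding(current_a, baseline_a, rel_threshold=0.05):
    """Flag a capture event when the current deviates >5% from baseline."""
    deviation = abs(current_a - baseline_a) / baseline_a
    return deviation > rel_threshold

baseline = 1.0e-6  # 1 microamp in clean air (illustrative)
print(detect_binding(1.02e-6, baseline))  # False: within normal drift
print(detect_binding(0.80e-6, baseline))  # True: current dropped 20%
```

Note that the check uses the absolute deviation, since Zang says the current can either increase or decrease depending on what the nanotube catches.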

Zang says scanners with the new technology “could be used by the military, police, first responders and private industry focused on public safety.”

Unlike today’s detectors, which analyze the spectra of ionized molecules of explosives and chemicals, the Utah carbon-nanotube technology has four advantages:

• It is more sensitive because all the carbon atoms in the nanotube are exposed to air, “so every part is susceptible to whatever it is detecting,” says study co-author Ben Bunes, a doctoral student in materials science and engineering.

• It is more accurate and generates fewer false positives, according to lab tests.

• It has a faster response time. While current detectors might find an explosive or gas in minutes, this type of device could do it in seconds, the tests showed.

• It is cost-effective because the total amount of the material used is microscopic.

This study was funded by the Department of Homeland Security, Department of Defense, National Science Foundation and NASA. …

Here’s a link to and a citation for the research paper,

Photodoping and Enhanced Visible Light Absorption in Single-Walled Carbon Nanotubes Functionalized with a Wide Band Gap Oligomer by Benjamin R. Bunes, Miao Xu, Yaqiong Zhang, Dustin E. Gross, Avishek Saha, Daniel L. Jacobs, Xiaomei Yang, Jeffrey S. Moore, and Ling Zang. Advanced Materials DOI: 10.1002/adma.201404112 Article first published online: 4 NOV 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

For anyone curious about Vaporsens, you can find more here.

Quantum dots and graphene; a mini roundup

I’ve done very little writing about quantum dots (so much nano, so little time) but there’s been a fair amount of activity lately which has piqued my interest. In the last few days researchers at Kansas State University have been getting noticed for being able to control the size and shape of the graphene quantum dots they produce.  This one has gotten extensive coverage online including this May 17, 2012 news item on physorg.com,

Vikas Berry, William H. Honstead professor of chemical engineering, has developed a novel process that uses a diamond knife to cleave graphite into graphite nanoblocks, which are precursors for graphene quantum dots. These nanoblocks are then exfoliated to produce ultrasmall sheets of carbon atoms of controlled shape and size.

By controlling the size and shape, the researchers can control graphene’s properties over a wide range for varied applications, such as solar cells, electronics, optical dyes, biomarkers, composites and particulate systems. Their work has been published in Nature Communications and supports the university’s vision to become a top 50 public research university by 2025. The article is available online.

Here’s an image of graphene being cut by a diamond knife from the May 16, 2012 posting by jtorline on the K-State News blog,

Molecular dynamics snapshot of stretched graphene being nanotomed via a diamond knife.

Here’s why standardizing the size is so important,

While other researchers have been able to make quantum dots, Berry’s research team can make quantum dots with a controlled structure in large quantities, which may allow these optically active quantum dots to be used in solar cell and other optoelectronic applications. [emphasis mine]

While all this is happening in Kansas, the Economist magazine published a May 12, 2012 article about some important quantum dot optoelectronic developments in Spain (it gives an excellent description for relative beginners and, if this area interests you, I’d suggest reading it in full),

Actually converting the wonders of graphene into products has been tough. But Frank Koppens and his colleagues at the Institute of Photonic Sciences in Barcelona think they have found a way to do so. As they describe in Nature Nanotechnology, they believe graphene can be used to make ultra-sensitive, low-cost photodetectors.

A typical photodetector is made of a silicon chip a few millimetres across onto which light is focused by a small lens. Light striking the chip knocks electrons free from some of the silicon atoms, producing a signal that the chip’s electronics convert into a picture or other useful information. …

Silicon photodetectors suffer, though, from a handicap: they are inflexible. Nor are they particularly cheap. And they are not that sensitive. They absorb only 10-20% of the light that falls on to them. For years, therefore, engineers have been on the lookout for a cheap, bendable, sensitive photodetector. …

By itself, graphene is worse than silicon at absorbing light. According to Dr Koppens only 2.7% of the photons falling on it are captured. But he and his colleague Gerasimos Konstantatos have managed to increase this to more than 50% by spraying tiny crystals of lead sulphide onto the surface of the material.
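A quick back-of-envelope check puts the quoted figures in perspective: going from 2.7% to more than 50% photon capture is roughly a nineteen-fold improvement.

```python
# Back-of-envelope check of the absorption figures quoted above.
bare_graphene = 0.027  # 2.7% of photons captured by graphene alone
with_pbs_dots = 0.50   # >50% after spraying on lead sulphide crystals

improvement = with_pbs_dots / bare_graphene
print(f"~{improvement:.0f}x more photons captured")  # ~19x
```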

So combining the ability to size quantum dots uniformly with this discovery on how to make graphene more sensitive (and more useful in potential products) suggests some very exciting possibilities, including this one mentioned by Dexter Johnson (who’s living in Spain these days) in his May 16, 2012 posting on Nanoclast (on the Institute of Electrical and Electronics Engineers [IEEE] website),

The researchers offer a range of applications for the graphene-and-quantum-dot combination, including digital cameras and sensors.  [emphasis mine] But it seems the researchers seem particularly excited about one application in particular. They expect the material will be used for night-vision technologies in automobiles—an application I have never heard trotted out before in relation to nanotech.

You can get more insights and more precise descriptions from the Economist article, and Dexter’s posting links to more information about the research.

In my final roundup piece, I received a news release (dated April 24, 2012) about a quantum dot commercialization project at the University of Utah,

One of the biggest challenges for advancing quantum dots is the manufacturing process. Conventional processes are expensive, require high temperatures and produce low yields. However, researchers at the University of Utah believe they have a solution. They recently formed a startup company called Navillum Nanotechnologies, and their efforts are gaining national attention with help from a team of M.B.A. students from the David Eccles School of Business.

The students recently won first place and $100,000 at the regional CU Cleantech New Venture Challenge. The student competition concluded at the University of Colorado in Boulder on Friday, April 20. The student team advances to the national championship, which will be held in June in Washington, D.C. Student teams from six regions will compete for additional prizes and recognition at the prestigious event. Other regional competitions were held at MIT, Cal Tech, the University of Maryland, Clean Energy Trust (Chicago) and Rice University. All the competitions are financed by the U.S. Department of Energy.

The students will be competing in the national Clean Energy Business Plan Competition taking place June 12-13, 2012 in Washington, D.C.  Here are a few more details from the national competition webpage,

Winners of the six regional competitions will represent their home universities and regions as they vie for the honor of presenting the best clean energy business plan before a distinguished panel of expert judges and invited guests from federal agencies, industry, national labs and the venture capital community.

Confirmed Attendees include:

The Honorable Steven Chu
Energy Secretary [US federal government]

Dr. David Danielson
Assistant Secretary, EERE  [US Dept. of Energy, energy efficiency and renewable energy technologies)

Dr. Karina Edmonds
Technology Transfer Coordinator [US Dept. of Energy]

Mr. Todd Park
Chief Technology Officer, White House

Good luck to the students!

US soldiers get batteries woven into their clothes

Last time I wrote about soldiers, equipment, and energy-efficiency (April 5, 2012 posting) the soldiers in question were British. Today’s posting focuses on US soldiers. From the May 7, 2012 news item on Nanowerk,

U.S. soldiers are increasingly weighed down by batteries to power weapons, detection devices and communications equipment. So the Army Research Laboratory has awarded a University of Utah-led consortium almost $15 million to use computer simulations to help design materials for lighter-weight, energy efficient devices and batteries.

“We want to help the Army make advances in fundamental research that will lead to better materials to help our soldiers in the field,” says computing Professor Martin Berzins, principal investigator among five University of Utah faculty members who will work on the project. “One of Utah’s main contributions will be the batteries.”

Of the five-year Army grant of $14,898,000, the University of Utah will retain $4.2 million for research plus additional administrative costs. The remainder will go to members of the consortium led by the University of Utah, including Boston University, Rensselaer Polytechnic Institute, Pennsylvania State University, Harvard University, Brown University, the University of California, Davis, and the Polytechnic University of Turin, Italy.

The new research effort is based on the idea that by using powerful computers to simulate the behavior of materials on multiple scales – from the atomic and molecular nanoscale to the large or “bulk” scale – new, lighter, more energy efficient power supplies and materials can be designed and developed. Improving existing materials also is a goal.

“We want to model everything from the nanoscale to the soldier scale,” Berzins says. “It’s virtual design, in some sense.”

“Today’s soldier enters the battle space with an amazing array of advanced electronic materials devices and systems,” the University of Utah said in its grant proposal. “The soldier of the future will rely even more heavily on electronic weaponry, detection devices, advanced communications systems and protection systems. Currently, a typical infantry soldier might carry up to 35 pounds of batteries in order to power these systems, and it is clear that the energy and power requirements for future soldiers will be much greater.” [emphasis mine]

“These requirements have a dramatic adverse effect on the survivability and lethality of the soldier by reducing mobility as well as the amount of weaponry, sensors, communication equipment and armor that the soldier can carry. Hence, the Army’s desire for greater lethality and survivability of its men and women in the field is fundamentally tied to the development of devices and systems with increased energy efficiency as well as dramatic improvement in the energy and power density of [battery] storage and delivery systems.”

Up to 35 lbs. of batteries? I’m trying to imagine what the rest of the equipment would weigh. In any event, they seem to be more interested in adding to the weaponry than reducing weight. At least, that’s how I understand “greater *lethality.” Nice of them to mention greater survivability too.

The British project is more modest, they are weaving e-textiles that harvest energy allowing British soldiers to carry fewer batteries. I believe field trials were scheduled for May 2012.

* Correction: leathility changed to lethality on July 31, 2013.

Nanotechnology and HIV prevention; flying frogs; nanotech regulation conference

It seems to me that whenever researchers announce a nanotechnology application they always estimate that it will take five years before reaching the commercial market. Well, the researchers at the University of Utah are estimating five to seven years before their gel-based anti-HIV condom for women comes to market. From the media release on Azonano,

University of Utah bioengineer Patrick Kiser analyzes polymers used to develop a new kind of AIDS-preventing vaginal gel for eventual use by women in Africa and other impoverished areas. The newly invented gel would be inserted a few hours before sex. During intercourse, polymers — long, chain-like molecules — within the gel become “crosslinked,” forming a microscopic mesh that, in lab experiments, physically trapped HIV (human immunodeficiency virus) particles.

The crosslinked polymers form a mesh that is smaller than microscopic, and instead is nanoscopic – on the scale of atoms and molecules – with a mesh size of a mere 30 to 50 nanometers – or 30 to 50 billionths of a meter. (A meter is about 39 inches.)

By comparison, an HIV particle is about 100 nanometers wide, sperm measure about 5 to 10 microns (5,000 to 10,000 nanometers) in cross section, and the width of a human hair is roughly 100 microns (100,000 nanometers).
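The size comparison above is easier to appreciate in ratios: the largest mesh opening is half the width of an HIV particle, and a human hair is a thousand times wider than the virus.

```python
# The size comparison quoted above, in numbers (all lengths in nanometers).
mesh_max = 50        # upper end of the gel's mesh size
hiv      = 100       # approximate HIV particle width
sperm    = 5_000     # lower end of sperm cross-section (5 microns)
hair     = 100_000   # human hair width (100 microns)

# An HIV particle is at least twice as wide as the largest mesh opening,
# which is why the gel can physically trap it.
print(hiv / mesh_max)  # 2.0
print(hair / hiv)      # 1000.0
```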

I’m not sure why there is such an emphasis on women in Africa; if this product is successful, it could presumably be used in many environments and by many women, regardless of their geography.

From 1998 to 2008, researchers found a flying frog in the Eastern Himalayas along with 350 other new species, according to the World Wildlife Fund. From the media release on Physorg.com,

A decade of research carried out by scientists in remote mountain areas endangered by rising global temperatures brought exciting discoveries such as a bright green frog that uses its red and long webbed feet to glide in the air.

A frog that flies -- new species found in Eastern Himalayas

More details can be found in the media release.

In September, there will be two meetings, one held in London and another in Washington, DC, to discuss a collaborative research project, Regulating Nanotechnologies in the EU and US.  I mentioned the meetings and registration information in an earlier posting here and there’s more information on Nanowerk News here.

I mentioned an event that Raincoaster was organizing, a 3-day novel workshop on the upcoming Labour Day weekend. Unfortunately, it’s been canceled due to one of the downsides of being a freelancer (when you get sick there’s nobody to fill in for you) and arrangements for the lodge/resort couldn’t be finalized in time.