Tag Archives: University of Texas at Austin

Bioinspired ‘smart’ materials a step towards soft robotics and electronics

An October 13, 2022 news item on Nanowerk describes some new work from the University of Texas at Austin,

Inspired by living things from trees to shellfish, researchers at The University of Texas at Austin set out to create a plastic that, like many life forms, is hard and rigid in some places and soft and stretchy in others.

Their success — a first, using only light and a catalyst to change properties such as hardness and elasticity in molecules of the same type — has brought about a new material that is 10 times as tough as natural rubber and could lead to more flexible electronics and robotics.

An October 13, 2022 University of Texas at Austin news release (also on EurekAlert), which originated the news item, delves further into the work,

“This is the first material of its type,” said Zachariah Page, assistant professor of chemistry and corresponding author on the paper. “The ability to control crystallization, and therefore the physical properties of the material, with the application of light is potentially transformative for wearable electronics or actuators in soft robotics.”

Scientists have long sought to mimic the properties of living structures, like skin and muscle, with synthetic materials. In living organisms, structures often combine attributes such as strength and flexibility with ease. When using a mix of different synthetic materials to mimic these attributes, materials often fail, coming apart and ripping at the junctures between different materials.

“Oftentimes, when bringing materials together, particularly if they have very different mechanical properties, they want to come apart,” Page said. Page and his team were able to control and change the structure of a plastic-like material, using light to alter how firm or stretchy the material would be.

Chemists started with a monomer, a small molecule that binds with others like it to form the building blocks of larger structures called polymers; theirs was similar to the polymer found in the most commonly used plastic. After testing a dozen catalysts, they found one that, when added to their monomer and exposed to visible light, resulted in a semicrystalline polymer similar to those found in existing synthetic rubber. A harder and more rigid material formed in the areas the light touched, while the unlit areas retained their soft, stretchy properties.

Because the substance is made of one material with different properties, it was stronger and could be stretched farther than most mixed materials.

The reaction takes place at room temperature, the monomer and catalyst are commercially available, and researchers used inexpensive blue LEDs as the light source in the experiment. The reaction also takes less than an hour and minimizes use of any hazardous waste, which makes the process rapid, inexpensive, energy efficient and environmentally benign.

The researchers will next seek to develop more objects with the material to continue to test its usability.

“We are looking forward to exploring methods of applying this chemistry towards making 3D objects containing both hard and soft components,” said first author Adrian Rylski, a doctoral student at UT Austin.

The team envisions the material could be used as a flexible foundation to anchor electronic components in medical devices or wearable tech. In robotics, strong and flexible materials are desirable to improve movement and durability.

Here’s a link to and a citation for the paper,

Polymeric multimaterials by photochemical patterning of crystallinity by Adrian K. Rylski, Henry L. Cater, Keldy S. Mason, Marshall J. Allen, Anthony J. Arrowood, Benny D. Freeman, Gabriel E. Sanoja, and Zachariah A. Page. Science 13 Oct 2022 Vol 378, Issue 6616 pp. 211-215 DOI: 10.1126/science.add6975

This paper is behind a paywall.

Synaptic transistors for brainlike computers based on (more environmentally friendly) graphene

An August 9, 2022 news item on ScienceDaily describes research investigating materials other than silicon for neuromorphic (brainlike) computing purposes,

Computers that think more like human brains are inching closer to mainstream adoption. But many unanswered questions remain. Among the most pressing: what types of materials can serve as the best building blocks to unlock the potential of this new style of computing?

For most traditional computing devices, silicon remains the gold standard. However, there is a movement to use more flexible, efficient and environmentally friendly materials for these brain-like devices.

In a new paper, researchers from The University of Texas at Austin developed synaptic transistors for brain-like computers using the thin, flexible material graphene. These transistors are similar to synapses in the brain, which connect neurons to each other.

An August 8, 2022 University of Texas at Austin news release (also on EurekAlert but published August 9, 2022), which originated the news item, provides more detail about the research,

“Computers that think like brains can do so much more than today’s devices,” said Jean Anne Incorvia, an assistant professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the lead author on the paper published today in Nature Communications. “And by mimicking synapses, we can teach these devices to learn on the fly, without requiring huge training methods that take up so much power.”

The Research: A combination of graphene and Nafion, a polymer membrane material, makes up the backbone of the synaptic transistor. Together, these materials demonstrate key synaptic-like behaviors — most importantly, the ability for the pathways to strengthen over time as they are used more often, a type of neural muscle memory. In computing, this means that devices will be able to get better at tasks like recognizing and interpreting images over time and do it faster.
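That pathway-strengthening behaviour, conductance that grows with repeated use and eventually saturates, can be sketched with a toy potentiation model. This is my own illustrative sketch; the update rule, learning rate, and ceiling below are assumptions, not the device physics from the paper:

```python
# Toy model of synaptic potentiation: each "use" of a pathway nudges its
# weight (think conductance) toward a saturation ceiling, so frequently used
# pathways end up stronger. Illustrative only, not the paper's device model.

def potentiate(weight, rate=0.2, ceiling=1.0):
    """One use of the pathway: move the weight a fraction of the way to the ceiling."""
    return weight + rate * (ceiling - weight)

w = 0.1  # initial weak connection (arbitrary starting value)
history = []
for use in range(10):
    w = potentiate(w)
    history.append(round(w, 3))

print(history)  # weight climbs toward the ceiling, fastest at first
```

The saturating update captures the qualitative point in the release: the more a pathway is exercised, the stronger it gets, with diminishing returns as it approaches its maximum.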

Another important finding is that these transistors are biocompatible, which means they can interact with living cells and tissue. That is key for potential applications in medical devices that come into contact with the human body. Most materials used for these early brain-like devices are toxic, so they would not be able to contact living cells in any way.

Why It Matters: With new high-tech concepts like self-driving cars, drones and robots, we are reaching the limits of what silicon chips can efficiently do in terms of data processing and storage. For these next-generation technologies, a new computing paradigm is needed. Neuromorphic devices mimic processing capabilities of the brain, a powerful computer for immersive tasks.

“Biocompatibility, flexibility, and softness of our artificial synapses is essential,” said Dmitry Kireev, a post-doctoral researcher who co-led the project. “In the future, we envision their direct integration with the human brain, paving the way for futuristic brain prosthesis.”

Will It Really Happen: Neuromorphic platforms are starting to become more common. Leading chipmakers such as Intel and Samsung have either produced neuromorphic chips already or are in the process of developing them. However, current chip materials place limitations on what neuromorphic devices can do, so academic researchers are working hard to find the perfect materials for soft brain-like computers.

“It’s still a big open space when it comes to materials; it hasn’t been narrowed down to the next big solution to try,” Incorvia said. “And it might not be narrowed down to just one solution, with different materials making more sense for different applications.”

The Team: The research was led by Incorvia and Deji Akinwande, professor in the Department of Electrical and Computer Engineering. The two have collaborated many times in the past, and Akinwande is a leading expert in graphene, using it in multiple research breakthroughs, most recently as part of a wearable electronic tattoo for blood pressure monitoring.

The idea for the project was conceived by Samuel Liu, a Ph.D. student and first author on the paper, in a class taught by Akinwande. Kireev then suggested the specific project. Harrison Jin, an undergraduate electrical and computer engineering student, measured the devices and analyzed data.

The team collaborated with T. Patrick Xiao and Christopher Bennett of Sandia National Laboratories, who ran neural network simulations and analyzed the resulting data.

Here’s a link to and a citation for the ‘graphene transistor’ paper,

Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing by Dmitry Kireev, Samuel Liu, Harrison Jin, T. Patrick Xiao, Christopher H. Bennett, Deji Akinwande & Jean Anne C. Incorvia. Nature Communications volume 13, Article number: 4386 (2022) DOI: https://doi.org/10.1038/s41467-022-32078-6 Published: 28 July 2022

This paper is open access.

Pulling water from the air

Adele Peters’ May 27, 2022 article for Fast Company describes some research into harvesting water from the air (Note: Links have been removed),

In Ethiopia, where an ongoing drought is the worst in 40 years, getting drinking water for the day can involve walking for eight hours. Some wells are drying up. As climate change progresses, water scarcity keeps getting worse. But new technology in development at the University of Texas at Austin could help: Using simple, low-cost materials, it harvests water from the air, even in the driest climates.

“The advantage of taking water moisture from the air is that it’s not limited geographically,” says Youhong “Nancy” Guo, lead author of a new study in Nature Communications that describes the technology.

It’s a little surprising that Peters doesn’t mention the megadrought in the US Southwest, which has made quite a splash in the news. From a February 15, 2022 article by Denise Chow for NBC [{US} National Broadcasting Corporation] news online (Note: Links have been removed),

The megadrought that has gripped the southwestern United States for the past 22 years is the worst since at least 800 A.D., according to a new study that examined shifts in water availability and soil moisture over the past 12 centuries.

The research, which suggests that the past two decades in the American Southwest have been the driest period in 1,200 years, pointed to human-caused climate change as a major reason for the current drought’s severity. The findings were published Monday in the journal Nature Climate Change.

Jason Smerdon, one of the study’s authors and a climate scientist at Columbia University’s Lamont-Doherty Earth Observatory, said global warming has made the megadrought more extreme because it creates a “thirstier” atmosphere that is better able to pull moisture out of forests, vegetation and soil.

Over the past two decades, temperatures in the Southwest were around 1.64 degrees Fahrenheit higher than the average from 1950 to 1999, according to the researchers. Globally, the world has warmed by about 2 degrees Fahrenheit since the late 1800s.
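For metric-minded readers, those Fahrenheit figures are temperature differences, so they convert with the 5/9 scale factor alone, with no 32-degree offset; a quick check:

```python
def f_diff_to_c(delta_f):
    """Convert a temperature *difference* from Fahrenheit to Celsius (no 32° offset)."""
    return delta_f * 5.0 / 9.0

print(f"Southwest anomaly: {f_diff_to_c(1.64):.2f} °C")               # ~0.91 °C
print(f"Global warming since late 1800s: {f_diff_to_c(2.0):.1f} °C")  # ~1.1 °C
```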

It’s getting drier even here in the Pacific Northwest. Maybe it’s time to start looking at drought and water shortages as a global issue rather than as a regional issue.

Caption: An example of a different shape the water-capturing film can take. Credit: The University of Texas at Austin / Cockrell School of Engineering

Getting back to the topic, a May 23, 2022 University of Texas at Austin news release (also on EurekAlert), which originated Peters’ article, announces the work,

More than a third of the world’s population lives in drylands, areas that experience significant water shortages. Scientists and engineers at The University of Texas at Austin have developed a solution that could help people in these areas access clean drinking water.

The team developed a low-cost gel film made of abundant materials that can pull water from the air in even the driest climates. The materials that facilitate this reaction cost a mere $2 per kilogram, and a single kilogram can produce more than 6 liters of water per day in areas with less than 15% relative humidity and 13 liters in areas with up to 30% relative humidity.
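Those release figures invite a quick back-of-envelope calculation. In this sketch the 50-litres-per-day household demand is my own assumption, not a number from the release:

```python
# Back-of-envelope arithmetic from the release: $2 per kilogram of material,
# >6 L/day per kg below 15% relative humidity, 13 L/day per kg at up to 30%.
# The 50 L/day household demand is an illustrative assumption.

COST_PER_KG = 2.0    # US dollars per kilogram of gel film
YIELD_ARID = 6.0     # litres/day per kg, <15% relative humidity
YIELD_HUMID = 13.0   # litres/day per kg, up to 30% relative humidity

def kg_needed(daily_litres, yield_per_kg):
    """Kilograms of film needed to meet a daily water demand."""
    return daily_litres / yield_per_kg

demand = 50.0  # litres/day for a small household (assumption)
for label, y in [("arid (<15% RH)", YIELD_ARID), ("up to 30% RH", YIELD_HUMID)]:
    kg = kg_needed(demand, y)
    print(f"{label}: {kg:.1f} kg of film, about ${kg * COST_PER_KG:.2f} in materials")
```

Even under the driest assumption, the materials cost for a household-scale supply comes out under twenty dollars, which is the point the release is making about affordability.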

The research builds on previous breakthroughs from the team, including the ability to pull water out of the atmosphere and the application of that technology to create self-watering soil. However, these technologies were designed for relatively high-humidity environments.

“This new work is about practical solutions that people can use to get water in the hottest, driest places on Earth,” said Guihua Yu, professor of materials science and mechanical engineering in the Cockrell School of Engineering’s Walker Department of Mechanical Engineering. “This could allow millions of people without consistent access to drinking water to have simple, water generating devices at home that they can easily operate.”

The researchers used renewable cellulose and a common kitchen ingredient, konjac gum, as a main hydrophilic (attracted to water) skeleton. The open-pore structure of the gum speeds up the moisture-capturing process. Another designed component, thermo-responsive cellulose with hydrophobic (resistant to water) interaction when heated, helps release the collected water immediately so that the overall energy input needed to produce water is minimized.

Other attempts at pulling water from desert air are typically energy-intensive and do not produce much. And although 6 liters does not sound like much, the researchers say that creating thicker films or absorbent beds or arrays with optimization could drastically increase the amount of water they yield.

The reaction itself is a simple one, the researchers said, which reduces the challenges of scaling it up and achieving mass usage.

“This is not something you need an advanced degree to use,” said Youhong “Nancy” Guo, the lead author on the paper and a former doctoral student in Yu’s lab, now a postdoctoral researcher at the Massachusetts Institute of Technology. “It’s straightforward enough that anyone can make it at home if they have the materials.”

The film is flexible and can be molded into a variety of shapes and sizes, depending on the need of the user. Making the film requires only the gel precursor, which includes all the relevant ingredients poured into a mold.

“The gel takes 2 minutes to set simply. Then, it just needs to be freeze-dried, and it can be peeled off the mold and used immediately after that,” said Weixin Guan, a doctoral student on Yu’s team and a lead researcher of the work.

The research was funded by the U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA), and drinking water for soldiers in arid climates is a big part of the project. However, the researchers also envision this as something that people could someday buy at a hardware store and use in their homes because of the simplicity.

Yu directed the project. Guo and Guan co-led experimental efforts on synthesis, characterization of the samples and device demonstration. Other team members are Chuxin Lei, Hengyi Lu and Wen Shi.

Here’s a link to and a citation for the paper,

Scalable super hygroscopic polymer films for sustainable moisture harvesting in arid environments by Youhong Guo, Weixin Guan, Chuxin Lei, Hengyi Lu, Wen Shi & Guihua Yu. Nature Communications volume 13, Article number: 2761 (2022) DOI: https://doi.org/10.1038/s41467-022-30505-2 Published: 19 May 2022

This paper is open access.

Harvest fresh water from dry air with hydrogels

Turning Air Into Drinking Water from University of Texas at Austin on Vimeo. Video by Thomas Swafford. Written by Sara Robberson Lentz.

Seems almost magical but it takes years to do this research. That video was posted in September 2019 and the latest research is being announced in a February 28, 2022 news item on phys.org,

Hydrogels have an astonishing ability to swell and take on water. In daily life, they are used in dressings, nappies, and more to lock moisture away. A team of researchers has now found another use: quickly extracting large amounts of freshwater from air using a specially developed hydrogel containing a hygroscopic salt. The study, published in the journal Angewandte Chemie, shows that the salt enhances the moisture uptake of the gel, making it suitable for water harvesting in dry regions.

A February 28, 2022 Wiley Publishing news release on EurekAlert delves further into hydrogels and the research into how they might be used to harvest water from the air,

Hydrogels can absorb and store many times their weight in water. In so doing, the underlying polymer swells considerably by incorporating water. However, to date, use of this property to produce freshwater from atmospheric water has not been feasible, since collecting moisture from the air is still too slow and inefficient.

On the other hand, moisture absorption could be enhanced by adding hygroscopic salts that can rapidly remove large amounts of moisture from the air. However, hygroscopic salts and hydrogels are usually not compatible, as a large amount of salt influences the swelling capability of the hydrogel and thus degrades its properties. In addition, the salt ions are not tightly coordinated within the gel and are easily washed away.

The materials scientist Guihua Yu and his team at the University of Texas at Austin, USA, have now overcome these issues by developing a particularly “salt-friendly” hydrogel. As their study shows, this gel gains the ability to absorb and retain water when combined with a hygroscopic salt. Using their hydrogel, the team were able to extract almost six liters of pure water per kilo of material in 24 hours, from air with 30% relative humidity.

The basis for the new hydrogel was a polymer constructed from zwitterionic molecules. Polyzwitterions carry both positively and negatively charged functional groups, which helped the polymer become more responsive to the salt in this case. Initially, the molecular strands in the polymer were tightly intermingled, but when the researchers added the lithium chloride salt, the strands relaxed and a porous, spongy hydrogel was formed. This hydrogel loaded with the hygroscopic salt was able to incorporate water molecules quickly and easily.

In fact, water incorporation was so quick and easy that the team were able to set up a cyclical system for continuous water separation. They left the hydrogel for an hour each time to absorb atmospheric moisture, then dried the gel in a condenser to collect the condensed water. They repeated this procedure multiple times without it resulting in any substantial loss of the amount of water absorbed, condensed, or collected.
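Putting the release’s numbers together, roughly six litres per kilogram per day and one absorb-and-condense cycle per hour, gives an implied per-cycle yield (assuming, simplistically, that every cycle contributes equally):

```python
# Rough per-cycle yield implied by the release: ~6 L per kg of hydrogel over
# 24 hours, with one hour-long absorb-then-condense cycle per hour.
# Assumes (simplistically) that every cycle contributes equally.

DAILY_YIELD_L_PER_KG = 6.0
CYCLES_PER_DAY = 24  # one-hour absorption cycles

per_cycle = DAILY_YIELD_L_PER_KG / CYCLES_PER_DAY
print(f"~{per_cycle * 1000:.0f} mL of water per kg of gel per hourly cycle")
```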

Yu and the team say that the as-prepared hydrogel “should be optimal for efficient moisture harvesting for the potential daily water yield”. They add that polyzwitterionic hydrogels could play a fundamental role in the future for recovering atmospheric water in arid, drought-stricken regions.

Here’s a link to and a citation for the paper,

Polyzwitterionic Hydrogels for Efficient Atmospheric Water Harvesting by Chuxin Lei, Youhong Guo, Weixin Guan, Hengyi Lu, Wen Shi, Guihua Yu. Angewandte Chemie International Edition Volume 61, Issue 13, March 21, 2022, e202200271 DOI: https://doi.org/10.1002/anie.202200271 First published: 28 January 2022

This paper is behind a paywall.

Loop quantum cosmology connects the tiniest with the biggest in a cosmic tango

Caption: Tiny quantum fluctuations in the early universe explain two major mysteries about the large-scale structure of the universe, in a cosmic tango of the very small and the very large. A new study by researchers at Penn State used the theory of loop quantum gravity to account for these mysteries, which Einstein’s theory of general relativity considers anomalous. Credit: Dani Zemba, Penn State

A July 29, 2020 news item on ScienceDaily announces a study showing that loop quantum cosmology can account for some large-scale mysteries,

While [1] Einstein’s theory of general relativity can explain a large array of fascinating astrophysical and cosmological phenomena, some aspects of the properties of the universe at the largest scales remain a mystery. A new study using loop quantum cosmology — a theory that uses quantum mechanics to extend gravitational physics beyond Einstein’s theory of general relativity — accounts for two major mysteries. While the differences in the theories occur at the tiniest of scales — much smaller than even a proton — they have consequences at the largest of accessible scales in the universe. The study, which appears online July 29 [2020] in the journal Physical Review Letters, also provides new predictions about the universe that future satellite missions could test.

A July 29, 2020 Pennsylvania State University (Penn State) news release (also on EurekAlert) by Gail McCormick, which originated the news item, describes how this work helped us avoid a crisis in cosmology,

While [2] a zoomed-out picture of the universe looks fairly uniform, it does have a large-scale structure, for example because galaxies and dark matter are not uniformly distributed throughout the universe. The origin of this structure has been traced back to the tiny inhomogeneities observed in the Cosmic Microwave Background (CMB)–radiation that was emitted when the universe was 380 thousand years young that we can still see today. But the CMB itself has three puzzling features that are considered anomalies because they are difficult to explain using known physics.

“While [3] seeing one of these anomalies may not be that statistically remarkable, seeing two or more together suggests we live in an exceptional universe,” said Donghui Jeong, associate professor of astronomy and astrophysics at Penn State and an author of the paper. “A recent study in the journal Nature Astronomy proposed an explanation for one of these anomalies that raised so many additional concerns, they flagged a ‘possible crisis in cosmology’ [emphasis mine]. Using quantum loop cosmology, however, we have resolved two of these anomalies naturally, avoiding that potential crisis.”

Research over the last three decades has greatly improved our understanding of the early universe, including how the inhomogeneities in the CMB were produced in the first place. These inhomogeneities are a result of inevitable quantum fluctuations in the early universe. During a highly accelerated phase of expansion at very early times–known as inflation–these primordial, minuscule fluctuations were stretched under gravity’s influence and seeded the observed inhomogeneities in the CMB.

“To understand how primordial seeds arose, we need a closer look at the early universe, where Einstein’s theory of general relativity breaks down,” said Abhay Ashtekar, Evan Pugh Professor of Physics, holder of the Eberly Family Chair in Physics, and director of the Penn State Institute for Gravitation and the Cosmos. “The standard inflationary paradigm based on general relativity treats space time as a smooth continuum. Consider a shirt that appears like a two-dimensional surface, but on closer inspection you can see that it is woven by densely packed one-dimensional threads. In this way, the fabric of space time is really woven by quantum threads. In accounting for these threads, loop quantum cosmology allows us to go beyond the continuum described by general relativity where Einstein’s physics breaks down–for example beyond the Big Bang.”

The researchers’ previous investigation into the early universe replaced the idea of a Big Bang singularity, where the universe emerged from nothing, with the Big Bounce, where the current expanding universe emerged from a super-compressed mass that was created when the universe contracted in its preceding phase. They found that all of the large-scale structures of the universe accounted for by general relativity are equally explained by inflation after this Big Bounce using equations of loop quantum cosmology.

In the new study, the researchers determined that inflation under loop quantum cosmology also resolves two of the major anomalies that appear under general relativity.

“The primordial fluctuations we are talking about occur at the incredibly small Planck scale,” said Brajesh Gupt, a postdoctoral researcher at Penn State at the time of the research and currently at the Texas Advanced Computing Center of the University of Texas at Austin. “A Planck length is about 20 orders of magnitude smaller than the radius of a proton. But corrections to inflation at this unimaginably small scale simultaneously explain two of the anomalies at the largest scales in the universe, in a cosmic tango of the very small and the very large.”
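Gupt’s “20 orders of magnitude” figure is easy to check against standard reference values for the Planck length and the proton charge radius (the two-significant-figure constants below are standard reference values, not from the news release):

```python
import math

# Standard reference values, two significant figures:
PLANCK_LENGTH_M = 1.6e-35   # metres
PROTON_RADIUS_M = 8.4e-16   # metres (proton charge radius, ~0.84 fm)

orders = math.log10(PROTON_RADIUS_M / PLANCK_LENGTH_M)
print(f"Proton radius / Planck length ~ 10^{orders:.1f}")  # close to 20 orders of magnitude
```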

The researchers also produced new predictions about a fundamental cosmological parameter and primordial gravitational waves that could be tested during future satellite missions, including LiteBIRD and the Cosmic Origins Explorer, which will continue to improve our understanding of the early universe.

That’s a lot of ‘while’. I’ve done this sort of thing, too, and whenever I come across it later, it’s painful.

Here’s a link to and a citation for the paper,

Alleviating the Tension in the Cosmic Microwave Background Using Planck-Scale Physics by Abhay Ashtekar, Brajesh Gupt, Donghui Jeong, and V. Sreenath. Phys. Rev. Lett. 125, 051302 DOI: https://doi.org/10.1103/PhysRevLett.125.051302 Published 29 July 2020 © 2020 American Physical Society

This paper is behind a paywall.

My love is a black, black rose that purifies water

Cockrell School of Engineering, The University of Texas at Austin

The device you see above was apparently inspired by a rose. Personally, I’ll need to take the scientists’ word for it; this image brings to my mind lava lamps, like the one you see below.

A blue lava lamp Credit: Risa1029 – Own work [downloaded from https://en.wikipedia.org/wiki/Lava_lamp#/media/File:Blue_Lava_lamp.JPG]

In any event, the ‘black rose’ collects and purifies water according to a May 29, 2019 University of Texas at Austin news release (also on EurekAlert),

The rose may be one of the most iconic symbols of the fragility of love in popular culture, but now the flower could hold more than just symbolic value. A new device for collecting and purifying water, developed at The University of Texas at Austin, was inspired by a rose and, while more engineered than enchanted, is a dramatic improvement on current methods. Each flower-like structure costs less than 2 cents and can produce more than half a gallon of water per hour per square meter.
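The release’s imperial units convert readily. In this sketch the eight sun-hours per day is my own illustrative assumption, since a solar-steaming device’s output would track available sunlight:

```python
# Converting the release's "more than half a gallon of water per hour per
# square meter" to metric. The 8 sun-hours/day figure is an illustrative
# assumption; real output depends on available sunlight.

LITRES_PER_US_GALLON = 3.785

rate_gal_per_h_m2 = 0.5
rate_l_per_h_m2 = rate_gal_per_h_m2 * LITRES_PER_US_GALLON

SUN_HOURS = 8  # assumed hours of useful sunlight per day
daily = rate_l_per_h_m2 * SUN_HOURS
print(f"~{rate_l_per_h_m2:.2f} L/h per m², or about {daily:.0f} L per m² over {SUN_HOURS} sun-hours")
```

So each square metre of two-cent flowers would put out nearly two litres an hour while the sun shines, which squares with the release’s framing of this as a cheap, scalable approach.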

A team led by associate professor Donglei (Emma) Fan in the Cockrell School of Engineering’s Walker Department of Mechanical Engineering developed a new approach to solar steaming for water production – a technique that uses energy from sunlight to separate salt and other impurities from water through evaporation.

In a paper published in the most recent issue of the journal Advanced Materials, the authors outline how an origami rose provided the inspiration for developing a new kind of solar-steaming system made from layered, black paper sheets shaped into petals. Attached to a stem-like tube that collects untreated water from any water source, the 3D rose shape makes it easier for the structure to collect and retain more liquid.

Current solar-steaming technologies are usually expensive, bulky and produce limited results. The team’s method uses inexpensive materials that are portable and lightweight. Oh, and it also looks just like a black-petaled rose in a glass jar.

Those in the know would more accurately describe it as a portable low-pressure controlled solar-steaming-collection “unisystem.” But its resemblance to a flower is no coincidence.

“We were searching for more efficient ways to apply the solar-steaming technique for water production by using black filtered paper coated with a special type of polymer, known as polypyrrole,” Fan said.

Polypyrrole is a material known for its photothermal properties, meaning it’s particularly good at converting solar light into heat.

Fan and her team experimented with a number of different ways to shape the paper to see what was best for achieving optimal water retention levels. They began by placing single, round layers of the coated paper flat on the ground under direct sunlight. The single sheets showed promise as water collectors but not in sufficient amounts. After toying with a few other shapes, Fan was inspired by a book she read in high school. Although not about roses per se, “The Black Tulip” by Alexandre Dumas gave her the idea to try using a flower-like shape, and she discovered the rose to be ideal. Its structure allowed more direct sunlight to hit the photothermal material – with more internal reflections – than other floral shapes and also provided enlarged surface area for water vapor to dissipate from the material.

The device collects water through its stem-like tube – feeding it to the flower-shaped structure on top. It can also collect rain drops coming from above. Water finds its way to the petals where the polypyrrole material coating the flower turns the water into steam. Impurities naturally separate from water when condensed in this way.

“We designed the purification-collection unisystem to include a connection point for a low-pressure pump to help condense the water more effectively,” said Weigu Li, a Ph.D. candidate in Fan’s lab and lead author on the paper. “Once it is condensed, the glass jar is designed to be compact, sturdy and secure for storing clean water.”

The device removes any contamination from heavy metals and bacteria, and it removes salt from seawater, producing clean water that meets drinking standard requirements set by the World Health Organization.

“Our rational design and low-cost fabrication of 3D origami photothermal materials represents a first-of-its-kind portable low-pressure solar-steaming-collection system,” Li said. “This could inspire new paradigms of solar-steaming technologies in clean water production for individuals and homes.”

Here’s a citation and another link to the paper,

Portable Low‐Pressure Solar Steaming‐Collection Unisystem with Polypyrrole Origamis by Weigu Li, Zheng Li, Karina Bertelsmann, Donglei Emma Fan. Advanced Materials DOI: https://doi.org/10.1002/adma.201900720 First published: 28 May 2019

This paper is behind a paywall.

Harvesting the heart’s kinetic energy to power implants

This work comes from Dartmouth College, an educational institution based on the US east coast in the state of New Hampshire. I hardly ever stumble across research from Dartmouth and I assume that’s because they usually focus their interests in areas that are not of direct interest to me,

Rendering of the two designs of the cardiac energy harvesting device. (Cover art by Patricio Sarzosa) Courtesy: Dartmouth College

For a change, we have a point of connection (harvesting biokinetic energy) according to a February 4, 2019 news item on ScienceDaily,

The heart’s motion is so powerful that it can recharge devices that save our lives, according to new research from Dartmouth College.

Using a dime-sized invention developed by engineers at the Thayer School of Engineering at Dartmouth, the kinetic energy of the heart can be converted into electricity to power a wide range of implantable devices, according to the study funded by the National Institutes of Health.

A February 4, 2019 Dartmouth College news release, which originated the news item, describes the problem and the proposed solution,

Millions of people rely on pacemakers, defibrillators and other life-saving implantable devices powered by batteries that need to be replaced every five to 10 years. Those replacements require surgery which can be costly and create the possibility of complications and infections.

“We’re trying to solve the ultimate problem for any implantable biomedical device,” says Dartmouth engineering professor John X.J. Zhang, a lead researcher on the study his team completed alongside clinicians at the University of Texas in San Antonio. “How do you create an effective energy source so the device will do its job during the entire life span of the patient, without the need for surgery to replace the battery?”

“Of equal importance is that the device not interfere with the body’s function,” adds Dartmouth research associate Lin Dong, first author on the paper. “We knew it had to be biocompatible, lightweight, flexible, and low profile, so it not only fits into the current pacemaker structure but is also scalable for future multi-functionality.”

The team’s work proposes modifying pacemakers to harness the kinetic energy of the lead wire that’s attached to the heart, converting it into electricity to continually charge the batteries. The added material is a type of thin piezoelectric polymer film called “PVDF” (polyvinylidene fluoride) and, when designed with porous structures — either an array of small buckle beams or a flexible cantilever — it can convert even small mechanical motion to electricity. An added benefit: the same modules could potentially be used as sensors to enable data collection for real-time monitoring of patients.

The results of the three-year study, completed by Dartmouth’s engineering researchers along with clinicians at UT Health San Antonio, were just published in the cover story for Advanced Materials Technologies.

The two remaining years of NIH funding plus time to finish the pre-clinical process and obtain regulatory approval puts a self-charging pacemaker approximately five years out from commercialization, according to Zhang.

“We’ve completed the first round of animal studies with great results which will be published soon,” says Zhang. “There is already a lot of expressed interest from the major medical technology companies, and Andrew Closson, one of the study’s authors working with Lin Dong and an engineering PhD Innovation Program student at Dartmouth, is learning the business and technology transfer skills to be a cohort in moving forward with the entrepreneurial phase of this effort.”

Other key collaborators on the study include Dartmouth engineering professor Zi Chen, an expert on thin structure mechanics, and Dr. Marc Feldman, professor and clinical cardiologist at UT [University of Texas] Health San Antonio.

Here’s a citation and another link for the paper,

Energy Harvesting: Flexible Porous Piezoelectric Cantilever on a Pacemaker Lead for Compact Energy Harvesting by Lin Dong, Xiaomin Han, Zhe Xu, Andrew B. Closson, Yin Liu, Chunsheng Wen, Xi Liu, Gladys Patricia Escobar, Meagan Oglesby, Marc Feldman, Zi Chen, John X. J. Zhang. Adv. Mater. Technol. 1/2019 https://doi.org/10.1002/admt.201970002 First published: 08 January 2019

This paper is open access.

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems, along with the latest research on employing AI to diagnose eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, flaws in the automated system, and how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.
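Stripped of the medical machinery, the closed loop Price describes is a predict-then-dose cycle: read recent glucose values, forecast the next one, and translate the forecast into an insulin decision. Here is a deliberately toy Python sketch of that cycle; the trend-extrapolation “model,” the target of 120 mg/dL, and the dosing rule are all invented for illustration and bear no relation to any real device’s algorithm:

```python
# Toy sketch of a closed-loop predict-then-dose cycle. Illustrative only:
# real systems use trained models, per-patient calibration, and safety layers.

def predict_next(readings):
    """Extrapolate one step ahead via a least-squares linear trend."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    return mean_y + slope * (n - mean_x)  # value at the next time step

def dose_decision(predicted_mg_dl, target=120, sensitivity=30):
    """Crude correction rule: one unit per 30 mg/dL above target, never negative."""
    excess = predicted_mg_dl - target
    return max(0.0, excess / sensitivity)

readings = [110, 118, 127, 135, 144]  # mg/dL, rising trend
nxt = predict_next(readings)
print(round(nxt), round(dose_decision(nxt), 2))  # 152 1.08
```

A real system would replace the least-squares trend with a model trained on the patient’s own data (the personalization Price mentions) and wrap every decision in hard safety limits.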

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years’ experience, a PhD in health economics, and author of countless magazine articles and several books, offers a more person-centered approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue [IBM’s Deep Blue] ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox: the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic them. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how much computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion with the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is, half the time we don’t even know what a gene is, never mind what it actually does; geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80s, 90s and 100s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks-Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond and adapt to life in its entirety, from diet and nutrition to toxic or traumatic insults. Immunity alters, endocrinology changes – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why as we age we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information (NCBI) in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just with genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost 20 years after the human genome project was completed and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes, with developers/entrepreneurs wanting to maintain their secrets for competitive advantage, and in addition to our own poor understanding of the human body (how many genes are there anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and skin shade (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
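The training-data problem Smith and Adamson describe can be boiled down to a toy example: a decision rule fit only to one population’s feature distribution can fail quietly on another’s. The single “contrast” feature and every number below are made up purely to illustrate the mechanism, not drawn from any real dermatology dataset:

```python
# Toy illustration of dataset bias: a threshold fit to one population's
# feature distribution can fail on another population whose distribution
# differs. All values are invented for illustration.

def fit_threshold(benign, malignant):
    """Midpoint threshold on a single hypothetical 'lesion contrast' feature."""
    return (max(benign) + min(malignant)) / 2

def accuracy(threshold, benign, malignant):
    correct = sum(1 for v in benign if v < threshold)
    correct += sum(1 for v in malignant if v >= threshold)
    return correct / (len(benign) + len(malignant))

# "Training data" drawn only from light-skin images (hypothetical values).
light_benign, light_malignant = [0.2, 0.3, 0.35], [0.6, 0.7, 0.8]
t = fit_threshold(light_benign, light_malignant)  # 0.475

# On dark-skin images the same lesions register with lower contrast.
dark_benign, dark_malignant = [0.1, 0.15, 0.2], [0.3, 0.35, 0.4]

print(accuracy(t, light_benign, light_malignant))  # 1.0
print(accuracy(t, dark_benign, dark_malignant))    # 0.5
```

The threshold scores perfectly on the population it was fit to and misses every malignant case in the other group, exactly the “fall through the cracks” risk Adamson describes, only at the scale of a CNN rather than a single threshold.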

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine—the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made, but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease, and therefore provides, at best, an imperfect solution.

Also, we tend to underestimate just how complex humans are not only in terms of disease and health but also with regard to our skills, understanding, and, perhaps not often enough, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.

Quantum computing and more at SXSW (South by Southwest) 2018

It’s that time of year again. The annual entertainment and technology conference South by Southwest (SXSW) is being held March 9-18, 2018. The science portion of the conference can be found in the Intelligent Future sessions; from the description,

AI and new technologies embody the realm of possibilities where intelligence empowers and enables technology while sparking legitimate concerns about its uses. Highlighted Intelligent Future sessions include New Mobility and the Future of Our Cities, Mental Work: Moving Beyond Our Carbon Based Minds, Can We Create Consciousness in a Machine?, and more.

Intelligent Future Track sessions are held March 9-15 at the Fairmont.

Last year I focused on the conference sessions on robots, Hiroshi Ishiguro’s work, and artificial intelligence in a March 27, 2017 posting. This year I’m featuring one of the conference’s quantum computing sessions, from a March 9, 2018 University of Texas at Austin news release (also on EurekAlert),

Imagine a new kind of computer that can quickly solve problems that would stump even the world’s most powerful supercomputers. Quantum computers are fundamentally different. They can store information not just as ones and zeros, but in all the shades of gray in between. Several companies and government agencies are investing billions of dollars in the field of quantum information. But what will quantum computers be used for?

South by Southwest 2018 hosts a panel on March 10th [2018] called Quantum Computing: Science Fiction to Science Fact. Experts on quantum computing make up the panel, including Jerry Chow of IBM; Bo Ewald of D-Wave Systems; Andrew Fursman of 1QBit; and Antia Lamas-Linares of the Texas Advanced Computing Center at UT Austin.

Antia Lamas-Linares is a Research Associate in the High Performance Computing group at TACC. Her background is as an experimentalist with quantum computing systems, including work done with them at the Centre for Quantum Technologies in Singapore. She joins podcast host Jorge Salazar to talk about her South by Southwest panel and about some of her latest research on quantum information.

Lamas-Linares co-authored a study (doi: 10.1117/12.2290561) in the Proceedings of the SPIE, The International Society for Optical Engineering, that published in February of 2018. The study, “Secure Quantum Clock Synchronization,” proposed a protocol to verify and secure time synchronization of distant atomic clocks, such as those used for GPS signals in cell phone towers and other places. “It’s important work,” explained Lamas-Linares, “because people are worried about malicious parties messing with the channels of GPS. What James Troupe (Applied Research Laboratories, UT Austin) and I looked at was whether we can use techniques from quantum cryptography and quantum information to make something that is inherently unspoofable.”

Antia Lamas-Linares: The most important thing is that quantum technologies is a really exciting field. And it’s exciting in a fundamental sense. We don’t quite know what we’re going to get out of it. We know a few things, and that’s good enough to drive research. But the things we don’t know are much broader than the things we know, and it’s going to be really interesting. Keep your eyes open for this.

Quantum Computing: Science Fiction to Science Fact, March 10, 2018 | 11:00AM – 12:00PM, Fairmont Manchester EFG, SXSW 2018, Austin, TX.
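For readers wondering what the press release’s “shades of gray” amounts to: a qubit’s state is a pair of complex amplitudes rather than a definite 0 or 1, and measuring it yields 0 or 1 with probabilities given by the squared magnitudes. A minimal Python sketch of just that arithmetic (not tied to any real quantum hardware or SDK):

```python
# A qubit state is a normalized pair of complex amplitudes (a, b);
# measurement yields outcome 0 with probability |a|^2 and 1 with |b|^2.
import math

def measure_probabilities(a, b):
    p0, p1 = abs(a) ** 2, abs(b) ** 2
    assert math.isclose(p0 + p1, 1.0), "amplitudes must be normalized"
    return p0, p1

# An equal superposition: the Hadamard state (|0> + |1>) / sqrt(2).
a = b = 1 / math.sqrt(2)
print(measure_probabilities(a, b))  # ~ (0.5, 0.5)
```

With two equal amplitudes, as in the state shown, each outcome is equally likely; tilting the amplitudes tilts the odds, which is the continuum the release is gesturing at.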

If you look up the session, you will find,

Quantum Computing: Science Fiction to Science Fact

Bo Ewald

D-Wave Systems

Antia Lamas-Linares

Texas Advanced Computing Center at University of Texas

Startups and established players have sold 2000 Qubit systems, made freely available cloud access to quantum computer processors, and created large scale open source initiatives, all taking quantum computing from science fiction to science fact. Government labs and others like IBM, Microsoft, Google are developing software for quantum computers. What problems will be solved with this quantum leap in computing power that cannot be solved today with the world’s most powerful supercomputers?

[Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.]

Primary Entry: Platinum Badge, Interactive Badge

Secondary Entry: Music Badge, Film Badge

Format: Panel

Event Type: Session

Track: Intelligent Future

Level: Intermediate

I wonder what ‘level’ means? I was not able to find an answer (quickly).

It was a bit surprising to find someone from D-Wave Systems (a Vancouver-based quantum computing enterprise) at an entertainment conference. Still, it shouldn’t have been. Two other examples immediately come to mind: the TED (technology, entertainment, and design) conferences have been melding technology, if not science, with creative activities of all kinds for many years (TED 2018: The Age of Amazement, April 10-14, 2018 in Vancouver [Canada]), and Beakerhead (2018 dates: Sept. 19-23) has been melding art, science, and engineering in a festival held in Calgary (Canada) since 2013. One comment about TED: it was held for several years in California (1984, 1990-2013) before moving to Vancouver in 2014.

For anyone wanting to browse the 2018 SXSW Intelligent Future sessions online, go here. For anyone wanting to hear Antia Lamas-Linares talk about quantum computing, there’s the interview with Jorge Salazar (mentioned in the news release),