
Could buckyballs and carbon nanotubes come from the dust and gas of dying stars?

In this picture of the Spirograph Nebula, a dying star about 2,000 light-years from Earth, NASA’s Hubble Space Telescope revealed some remarkable textures weaving through the star’s envelope of dust and gas. UArizona researchers have now found evidence that complex carbon nanotubes could be forged in such environments. Credit: NASA and The Hubble Heritage Team (STScI/AURA)

It’s always interesting to come across different news releases announcing the same research. In this case I have two news releases, one from the US National Science Foundation (NSF) and one from the University of Arizona. Let’s start with the July 19, 2022 news item on phys.org (originated by the US NSF),

Astronomers at the University of Arizona have developed a theory to explain the presence of the largest molecules known to exist in interstellar gas.

The team simulated the environment of dying stars and observed the formation of buckyballs (carbon atoms linked to three other carbon atoms by covalent bonds) and carbon nanotubes (rolled up sheets of single-layer carbon atoms). The findings indicate that buckyballs and carbon nanotubes can form when silicon carbide dust — known to be proximate to dying stars — releases carbon in reaction to intense heat, shockwaves and high energy particles.

Here’s the rest of the July 18, 2022 NSF news release (Note: A link has been removed),

“We know from infrared observations that buckyballs populate the interstellar medium,” said Jacob Bernal, who led the research. “The big problem has been explaining how these massive, complex carbon molecules could possibly form in an environment saturated with hydrogen, which is what you typically have around a dying star.”

Rearranging the structure of graphene (a sheet of single-layer carbon atoms) could create buckyballs and nanotubes. Building on that, the team heated silicon carbide samples to temperatures that would mimic the aura of a dying star and observed the formation of nanotubes.

“We were surprised we could make these extraordinary structures,” Bernal said. “Chemically, our nanotubes are very simple, but they are extremely beautiful.”

Buckyballs are the largest molecules currently known to occur in interstellar space. It is now known that buckyballs containing 60 to 70 carbon atoms are common.

“We know the raw material is there, and we know the conditions are very close to what you’d see near the envelope of a dying star,” study co-author Lucy Ziurys said. “Shock waves pass through the envelope, and the temperature and pressure conditions have been shown to exist in space. We also see buckyballs in planetary nebulae — in other words, we see the beginning and the end products you would expect in our experiments.”

A June 16, 2022 University of Arizona news release by Daniel Stolte (also on EurekAlert) takes a context-rich approach to writing up the proposed theory for how buckyballs and carbon nanotubes (CNTs) form (Note: Links have been removed),

In the mid-1980s, the discovery of complex carbon molecules drifting through the interstellar medium garnered significant attention, with possibly the most famous examples being Buckminsterfullerene, or “buckyballs” – spheres consisting of 60 or 70 carbon atoms. However, scientists have struggled to understand how these molecules can form in space.

In a paper accepted for publication in the Journal of Physical Chemistry A, researchers from the University of Arizona suggest a surprisingly simple explanation. After exposing silicon carbide – a common ingredient of dust grains in planetary nebulae – to conditions similar to those found around dying stars, the researchers observed the spontaneous formation of carbon nanotubes, which are highly structured rod-like molecules consisting of multiple layers of carbon sheets. The findings were presented on June 16 [2022] at the 240th Meeting of the American Astronomical Society in Pasadena, California.

Led by UArizona researcher Jacob Bernal, the work builds on research published in 2019, when the group showed that they could create buckyballs using the same experimental setup. The work suggests that buckyballs and carbon nanotubes could form when the silicon carbide dust made by dying stars is hit by high temperatures, shock waves and high-energy particles, leaching silicon from the surface and leaving carbon behind.

The findings support the idea that dying stars may seed the interstellar medium with nanotubes and possibly other complex carbon molecules. The results have implications for astrobiology, as they provide a mechanism for concentrating carbon that could then be transported to planetary systems.

“We know from infrared observations that buckyballs populate the interstellar medium,” said Bernal, a postdoctoral research associate in the UArizona Lunar and Planetary Laboratory. “The big problem has been explaining how these massive, complex carbon molecules could possibly form in an environment saturated with hydrogen, which is what you typically have around a dying star.”

The formation of carbon-rich molecules, let alone species containing purely carbon, in the presence of hydrogen is virtually impossible due to thermodynamic laws. The new study findings offer an alternative scenario: Instead of assembling individual carbon atoms, buckyballs and nanotubes could result from simply rearranging the structure of graphene – single-layered carbon sheets that are known to form on the surface of heated silicon carbide grains.

This is exactly what Bernal and his co-authors observed when they heated commercially available silicon carbide samples to temperatures occurring in dying or dead stars and imaged them. As the temperature approached 1,050 degrees Celsius, small hemispherical structures approximately 1 nanometer in size were observed at the grain surface. Within minutes of continued heating, the spherical buds began to grow into rod-like structures, containing several graphene layers with curvature and dimensions indicating a tubular form. The resulting nanotubules ranged from about 3 to 4 nanometers in length and width, larger than buckyballs. The largest imaged specimens comprised more than four layers of graphitic carbon. During the heating experiment, the tubes were observed to wiggle before budding off the surface and getting sucked into the vacuum surrounding the sample.

“We were surprised we could make these extraordinary structures,” Bernal said. “Chemically, our nanotubes are very simple, but they are extremely beautiful.”

Named after their resemblance to architectural works by Richard Buckminster Fuller, fullerenes are the largest molecules currently known to occur in interstellar space, which for decades was believed to be devoid of any molecules containing more than a few atoms, 10 at most. It is now well established that the fullerenes C60 and C70, which contain 60 or 70 carbon atoms, respectively, are common ingredients of the interstellar medium.

One of the first of its kind in the world, the transmission electron microscope housed at the Kuiper Materials Imaging and Characterization Facility at UArizona is uniquely suited to simulate the planetary nebula environment. Its 200,000-volt electron beam can probe matter down to 78 picometers – the distance between two hydrogen atoms in a water molecule – making it possible to see individual atoms. The instrument operates in a vacuum closely resembling the pressure – or lack thereof – thought to exist in circumstellar environments.

While a spherical C60 molecule measures 0.7 nanometers in diameter, the nanotube structures formed in this experiment measured several times the size of C60, easily exceeding 1,000 carbon atoms. The study authors are confident their experiments accurately replicated the temperature and density conditions that would be expected in a planetary nebula, said co-author Lucy Ziurys, a UArizona Regents Professor of Astronomy, Chemistry and Biochemistry.

“We know the raw material is there, and we know the conditions are very close to what you’d see near the envelope of a dying star,” she said. “There are shock waves that pass through the envelope, so the temperature and pressure conditions have been shown to exist in space. We also see buckyballs in these planetary nebulae – in other words, we see the beginning and the end products you would expect in our experiments.”

These experimental simulations suggest that carbon nanotubes, along with the smaller fullerenes, are subsequently injected into the interstellar medium. Carbon nanotubes are known to have high stability against radiation, and fullerenes are able to survive for millions of years when adequately shielded from high-energy cosmic radiation. Carbon-rich meteorites, such as carbonaceous chondrites, could contain these structures as well, the researchers propose.

According to study co-author Tom Zega, a professor in the UArizona Lunar and Planetary Lab, the challenge is finding nanotubes in these meteorites, because of the very small grain sizes and because the meteorites are a complex mix of organic and inorganic materials, some with sizes similar to those of nanotubes.

“Nonetheless, our experiments suggest that such materials could have formed in interstellar space,” Zega said. “If they survived the journey to our local part of the galaxy where our solar system formed some 4.5 billion years ago, then they could be preserved inside of the material that was left over.”

Zega said a prime example of such leftover material is Bennu, a carbonaceous near-Earth asteroid from which NASA’s UArizona-led OSIRIS-REx mission scooped up a sample in October 2020. Scientists are eagerly awaiting the arrival of that sample, scheduled for 2023.  

“Asteroid Bennu could have preserved these materials, so it is possible we may find nanotubes in them,” Zega said.

Here’s a link to and a citation for the paper,

Destructive Processing of Silicon Carbide Grains: Experimental Insights into the Formation of Interstellar Fullerenes and Carbon Nanotubes by Jacob J. Bernal, Thomas J. Zega, and Lucy M. Ziurys. J. Phys. Chem. A 2022, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.jpca.2c01441 Publication Date: June 27, 2022 © 2022 American Chemical Society

This paper is behind a paywall.

Internet of living things (IoLT)?

It’s not here yet but there are scientists working on an internet of living things (IoLT). There are some details (see the fourth paragraph from the bottom of the news release excerpt) about how an IoLT would be achieved but it seems these are early days. From a September 9, 2021 University of Illinois news release (also on EurekAlert; Note: Links have been removed),

The National Science Foundation (NSF) announced today an investment of $25 million to launch the Center for Research on Programmable Plant Systems (CROPPS). The center, a partnership among the University of Illinois at Urbana-Champaign, Cornell University, the Boyce Thompson Institute, and the University of Arizona, aims to develop tools to listen and talk to plants and their associated organisms.

“CROPPS will create systems where plants communicate their hidden biology to sensors, optimizing plant growth to the local environment. This Internet of Living Things (IoLT) will enable breakthrough discoveries, offer new educational opportunities, and open transformative opportunities for productive, sustainable, and profitable management of crops,” says Steve Moose (BSD/CABBI/GEGC), the grant’s principal investigator at Illinois. Moose is a genomics professor in the Department of Crop Sciences, part of the College of Agricultural, Consumer and Environmental Sciences (ACES). 

As an example of what’s possible, CROPPS scientists could deploy armies of autonomous rovers to monitor and modify crop growth in real time. The researchers created leaf sensors to report on belowground processes in roots. This combination of machine and living sensors will enable completely new ways of decoding the language of plants, allowing researchers to teach plants how to better handle environmental challenges. 

“Right now, we’re working to program a circuit that responds to low-nitrogen stress, where the plant growth rate is ‘slowed down’ to give farmers more time to apply fertilizer during the window that is the most efficient at increasing yield,” Moose explains.

With 150+ years of global leadership in crop sciences and agricultural engineering, along with newer transdisciplinary research units such as the National Center for Supercomputing Applications (NCSA) and the Center for Digital Agriculture (CDA), Illinois is uniquely positioned to take on the technical challenges associated with CROPPS.

But U of I scientists aren’t working alone. For years, they’ve collaborated with partner institutions to conceptualize the future of digital agriculture and bring it into reality. For example, researchers at Illinois’ CDA and Cornell’s Initiative for Digital Agriculture jointly proposed the first IoLT for agriculture, laying the foundation for CROPPS.

“CROPPS represents a significant win from having worked closely with our partners at Cornell and other institutions. We’re thrilled to move forward with our colleagues to shift paradigms in agriculture,” says Vikram Adve, Donald B. Gillies Professor in computer science at Illinois and co-director of the CDA.

CROPPS research may sound futuristic, and that’s the point.

The researchers say new tools are needed to make crops productive, flexible, and sustainable enough to feed our growing global population under a changing climate. Many of the tools under development – biotransducers small enough to fit between soil particles, dexterous and highly autonomous field robots, field-applied gene editing nanoparticles, IoLT clouds, and more – have been studied in the proof-of-concept phase, and are ready to be scaled up.

“One of the most exciting goals of CROPPS is to apply recent advances in sensing and data analytics to understand the rules of life, where plants have much to teach us. What we learn will bring a stronger biological dimension to the next phase of digital agriculture,” Moose says. 

CROPPS will also foster innovations in STEM [science, technology, engineering, and mathematics] education through programs that involve students at all levels, and each partner institution will share courses in digital agriculture topics. CROPPS also aims to engage professionals in digital agriculture at any career stage, and learn how the public views innovations in this emerging technology area.

“Along with cutting-edge research, CROPPS coordinated educational programs will address the future of work in plant sciences and agriculture,” says Germán Bollero, associate dean for research in the College of ACES.

I look forward to hearing more about IoLT.

Nanopore-tal enables cells to talk to computers?

An August 25, 2021 news item on ScienceDaily announced research that will allow more direct communication between cells and computers,

Genetically encoded reporter proteins have been a mainstay of biotechnology research, allowing scientists to track gene expression, understand intracellular processes and debug engineered genetic circuits.

But conventional reporting schemes that rely on fluorescence and other optical approaches come with practical limitations that could cast a shadow over the field’s future progress. Now, researchers at the University of Washington and Microsoft have created a “nanopore-tal” into what is happening inside these complex biological systems, allowing scientists to see reporter proteins in a whole new light.

The team introduced a new class of reporter proteins that can be directly read by a commercially available nanopore sensing device. The new system ― dubbed “Nanopore-addressable protein Tags Engineered as Reporters” or “NanoporeTERs” ― can detect multiple protein expression levels from bacterial and human cell cultures far beyond the capacity of existing techniques.

An August 12, 2021 University of Washington news release (also on EurekAlert but published August 24, 2021), which originated the news item, provides more detail (Note: Links have been removed),

“NanoporeTERs offer a new and richer lexicon for engineered cells to express themselves and shed new light on the factors they are designed to track. They can tell us a lot more about what is happening in their environment all at once,” said co-lead author Nicolas Cardozo, a doctoral student with the UW Molecular Engineering and Sciences Institute. “We’re essentially making it possible for these cells to ‘talk’ to computers about what’s happening in their surroundings at a new level of detail, scale and efficiency that will enable deeper analysis than what we could do before.”

For conventional labeling methods, researchers can track only a few optical reporter proteins, such as green fluorescent protein, simultaneously because of their overlapping spectral properties. For example, it’s difficult to distinguish between more than three different colors of fluorescent proteins at once. In contrast, NanoporeTERs were designed to carry distinct protein “barcodes” composed of strings of amino acids that, when used in combination, allow at least ten times more multiplexing possibilities. 

These synthetic proteins are secreted outside of a cell into the surrounding environment, where researchers can collect and analyze them using a commercially available nanopore array. Here, the team used the Oxford Nanopore Technologies MinION device. 

The researchers engineered the NanoporeTER proteins with charged “tails” so that they can be pulled into the nanopore sensors by an electric field. Then the team uses machine learning to classify the electrical signals for each NanoporeTER barcode in order to determine each protein’s output levels.
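
The release doesn’t spell out how that classification works, but the general shape of the workflow – summarize each raw current trace into a few features and train a supervised classifier on labelled examples – can be sketched in a few lines. Everything below (the simulated traces, the summary-statistic features, the scikit-learn random forest) is an illustrative assumption, not the authors’ actual pipeline:

```python
# Illustrative sketch only: a toy version of classifying nanopore current
# traces by barcode. The real NanoporeTER pipeline is not described in the
# release; the simulated signals, features and model below are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_trace(barcode_id, n_samples=500):
    """Fake ionic-current trace: each barcode gets its own mean level and noise."""
    mean_current = 80 - 5 * barcode_id          # arbitrary picoampere-like levels
    noise = 2 + 0.5 * barcode_id
    return mean_current + noise * rng.standard_normal(n_samples)

def features(trace):
    """Simple summary statistics used as classifier input."""
    return [trace.mean(), trace.std(), trace.min(), trace.max()]

# Build a labelled data set for, say, 10 hypothetical barcodes.
X, y = [], []
for barcode in range(10):
    for _ in range(100):
        X.append(features(simulate_trace(barcode)))
        y.append(barcode)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In the real system the signals come from the MinION array rather than a simulator, and the features and model would be chosen to separate the actual barcode signatures.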

“This is a fundamentally new interface between cells and computers,” said senior author Jeff Nivala, a UW research assistant professor in the Paul G. Allen School of Computer Science & Engineering. “One analogy I like to make is that fluorescent protein reporters are like lighthouses, and NanoporeTERs are like messages in a bottle. 

“Lighthouses are really useful for communicating a physical location, as you can literally see where the signal is coming from, but it’s hard to pack more information into that kind of signal. A message in a bottle, on the other hand, can pack a lot of information into a very small vessel, and you can send many of them off to another location to be read. You might lose sight of the precise physical location where the messages were sent, but for many applications that’s not going to be an issue.”

As a proof of concept, the team developed a library of more than 20 distinct NanoporeTERs tags. But the potential is significantly greater, according to co-lead author Karen Zhang, now a doctoral student in the UC Berkeley-UCSF bioengineering graduate program.

“We are currently working to scale up the number of NanoporeTERs to hundreds, thousands, maybe even millions more,” said Zhang, who graduated this year from the UW with bachelor’s degrees in both biochemistry and microbiology. “The more we have, the more things we can track.

“We’re particularly excited about the potential in single-cell proteomics, but this could also be a game-changer in terms of our ability to do multiplexed biosensing to diagnose disease and even target therapeutics to specific areas inside the body. And debugging complicated genetic circuit designs would become a whole lot easier and much less time-consuming if we could measure the performance of all the components in parallel instead of by trial and error.”

These researchers have made novel use of the MinION device before, when they developed a molecular tagging system to replace conventional inventory control methods. That system relied on barcodes comprising synthetic strands of DNA that could be decoded on demand using the portable reader. 

This time, the team went a step farther.

“This is the first paper to show how a commercial nanopore sensor device can be repurposed for applications other than the DNA and RNA sequencing for which they were originally designed,” said co-author Kathryn Doroschak, a computational biologist at Adaptive Biotechnologies who completed this work as a doctoral student at the Allen School. “This is exciting as a precursor for nanopore technology becoming more accessible and ubiquitous in the future. You can already plug a nanopore device into your cell phone. I could envision someday having a choice of ‘molecular apps’ that will be relatively inexpensive and widely available outside of traditional genomics.”

Additional co-authors of the paper are Aerilynn Nguyen at Northeastern University and Zoheb Siddiqui at Amazon, both former UW undergraduate students; Nicholas Bogard at Patch Biosciences, a former UW postdoctoral research associate; Luis Ceze, an Allen School professor; and Karin Strauss, an Allen School affiliate professor and a senior principal research manager at Microsoft. This research was funded by the National Science Foundation, the National Institutes of Health and a sponsored research agreement from Oxford Nanopore Technologies. 

Here’s a link to and a citation for the paper,

Multiplexed direct detection of barcoded protein reporters on a nanopore array by Nicolas Cardozo, Karen Zhang, Kathryn Doroschak, Aerilynn Nguyen, Zoheb Siddiqui, Nicholas Bogard, Karin Strauss, Luis Ceze & Jeff Nivala. Nature Biotechnology (2021) DOI: https://doi.org/10.1038/s41587-021-01002-6 Published: 12 August 2021

This paper is behind a paywall.

Some amusements in the time of COVID-19

Gold stars for everyone who recognized the loose paraphrasing of the title of Gabriel García Márquez’s 1985 novel, Love in the Time of Cholera.

I wrote my headline and first paragraph yesterday and found this in my email box this morning, from a March 25, 2020 University of British Columbia news release, which compares times, diseases, and scares of the past with today’s COVID-19 (Perhaps politicians and others could read this piece and stop using the word ‘unprecedented’ when discussing COVID-19?),

How globalization stoked fear of disease during the Romantic era

In the late 18th and early 19th centuries, the word “communication” had several meanings. People used it to talk about both media and the spread of disease, as we do today, but also to describe transport—via carriages, canals and shipping.

Miranda Burgess, an associate professor in UBC’s English department, is working on a book called Romantic Transport that covers these forms of communication in the Romantic era and invites some interesting comparisons to what the world is going through today.

We spoke with her about the project.

What is your book about?

It’s about global infrastructure at the dawn of globalization—in particular the extension of ocean navigation through man-made inland waterways like canals and ship’s canals. These canals of the late 18th and early 19th century were like today’s airline routes, in that they brought together places that were formerly understood as far apart, and shrunk time because they made it faster to get from one place to another.

This book is about that history, about the fears that ordinary people felt in response to these modernizations, and about the way early 19th-century poets and novelists expressed and responded to those fears.

What connections did those writers make between transportation and disease?

In the 1810s, they don’t have germ theory yet, so there’s all kinds of speculation about how disease happens. Works of tropical medicine, which is rising as a discipline, liken the human body to the surface of the earth. They talk about nerves as canals that convey information from the surface to the depths, and the idea that somehow disease spreads along those pathways.

When the canals were being built, some writers opposed them on the grounds that they could bring “strangers” through the heart of the city, and that standing water would become a breeding ground for disease. Now we worry about people bringing disease on airplanes. It’s very similar to that.

What was the COVID-19 of that time?

Probably epidemic cholera [emphasis mine], from about the 1820s onward. The Quarterly Review, a journal that novelist Walter Scott was involved in editing, ran long articles that sought to trace the map of cholera along rivers from South Asia, to Southeast Asia, across Europe and finally to Britain. And in the way that its spread is described, many of the same fears that people are evincing now about COVID-19 were visible then, like the fear of clothes. Is it in your clothes? Do we have to burn our clothes? People were concerned.

What other comparisons can be drawn between those times and what is going on now?

Now we worry about the internet and “fake news.” In the 19th century, they worried about what William Wordsworth called “the rapid communication of intelligence,” which was the daily newspaper. Not everybody had access to newspapers, but each newspaper was read by multiple families and newspapers were available in taverns and coffee shops. So if you were male and literate, you had access to a newspaper, and quite a lot of women did, too.

Paper was made out of rags—discarded underwear. Because of the French Revolution and Napoleonic Wars that followed, France blockaded Britain’s coast and there was a desperate shortage of rags to make paper, which had formerly come from Europe. And so Britain started to import rags from the Caribbean that had been worn by enslaved people.

Papers of the time are full of descriptions of the high cost of rags, how they’re getting their rags from prisons, from prisoners’ underwear, and fear about the kinds of sweat and germs that would have been harboured in those rags—and also discussions of scarcity, as people stole and hoarded those rags. It rings very well with what the internet is telling us now about a bunch of things around COVID-19.

Plus ça change, n’est-ce pas?

And now for something completely different

Kudos to all who recognized the Monty Python reference. Now, onto the frogfish,

Thank you to the Monterey Bay Aquarium (in California, US).

A March 22, 2020 University of Washington (state) news release features an interview with the author of a new book on frogfishes,

Any old fish can swim. But what fish can walk, scoot, clamber over rocks, change color or pattern and even fight? That would be the frogfish.

The latest book by Ted Pietsch, UW professor emeritus of aquatic and fishery sciences, explores the lives and habits of these unusual marine shorefishes. “Frogfishes: Biodiversity, Zoogeography, and Behavioral Ecology” was published in March [2020] by Johns Hopkins University Press.

Pietsch, who is also curator emeritus of fishes at the Burke Museum of Natural History and Culture, has published over 200 articles and a dozen books on the biology and behavior of marine fishes. He wrote this book with Rachel J. Arnold, a faculty member at Northwest Indian College in Bellingham and its Salish Sea Research Center.

These walking fishes have stepped into the spotlight lately, with interest growing in recent decades. And though these predatory fishes “will almost certainly devour anything else that moves in a home aquarium,” Pietsch writes, “a cadre of frogfish aficionados around the world has grown within the dive community and among aquarists.” In fact, Pietsch said, there are three frogfish public groups on Facebook, with more than 6,000 members.

First, what is a frogfish?

Ted Pietsch: A member of a family of bony fishes, containing 52 species, all of which are highly camouflaged and whose feeding strategy consists of mimicking the immobile, inert, and benign appearance of a sponge or an algae-encrusted rock, while wiggling a highly conspicuous lure to attract prey.

This is a fish that “walks” and “hops” across the sea bottom, and clambers about over rocks and coral like a four-legged terrestrial animal but, at the same time, can jet-propel itself through open water. Some lay their eggs encapsulated in a complex, floating, mucus mass, called an “egg raft,” while some employ elaborate forms of parental care, carrying their eggs around until they hatch.

They are among the most colorful of nature’s productions, existing in nearly every imaginable color and color pattern, with an ability to completely alter their color and pattern in a matter of days or seconds. All these attributes combined make them one of the most intriguing groups of aquatic vertebrates for the aquarist, diver, and underwater photographer as well as the professional zoologist.

I couldn’t resist the ‘frog’ reference and I’m glad since this is a good read with a number of fascinating photographs and illustrations.

An illustration of the frogfish Antennarius pictus, published by George Shaw in 1794. From a new book by Ted Pietsch, UW professor emeritus of aquatic and fishery sciences. Courtesy: University of Washington (state)

h/t phys.org March 24, 2020 news item

Building with bacteria

A block of sand particles held together by living cells. Credit: The University of Colorado Boulder College of Engineering and Applied Science

A March 24, 2020 news item on phys.org features the future of building construction as perceived by synthetic biologists,

Buildings are not unlike a human body. They have bones and skin; they breathe. Electrified, they consume energy, regulate temperature and generate waste. Buildings are organisms—albeit inanimate ones.

But what if buildings—walls, roofs, floors, windows—were actually alive—grown, maintained and healed by living materials? Imagine architects using genetic tools that encode the architecture of a building right into the DNA of organisms, which then grow buildings that self-repair, interact with their inhabitants and adapt to the environment.

A March 23, 2020 essay by Wil Srubar (Professor of Architectural Engineering and Materials Science, University of Colorado Boulder), which originated the news item, provides more insight,

Living architecture is moving from the realm of science fiction into the laboratory as interdisciplinary teams of researchers turn living cells into microscopic factories. At the University of Colorado Boulder, I lead the Living Materials Laboratory. Together with collaborators in biochemistry, microbiology, materials science and structural engineering, we use synthetic biology toolkits to engineer bacteria to create useful minerals and polymers and form them into living building blocks that could, one day, bring buildings to life.

In one study published in Scientific Reports, my colleagues and I genetically programmed E. coli to create limestone particles with different shapes, sizes, stiffnesses and toughness. In another study, we showed that E. coli can be genetically programmed to produce styrene – the chemical used to make polystyrene foam, commonly known as Styrofoam.

Green cells for green building

In our most recent work, published in Matter, we used photosynthetic cyanobacteria to help us grow a structural building material – and we kept it alive. Similar to algae, cyanobacteria are green microorganisms found throughout the environment but best known for growing on the walls in your fish tank. Instead of emitting CO2, cyanobacteria use CO2 and sunlight to grow and, in the right conditions, create a biocement, which we used to help us bind sand particles together to make a living brick.

By keeping the cyanobacteria alive, we were able to manufacture building materials exponentially. We took one living brick, split it in half and grew two full bricks from the halves. The two full bricks grew into four, and four grew into eight. Instead of creating one brick at a time, we harnessed the exponential growth of bacteria to grow many bricks at once – demonstrating a brand new method of manufacturing materials.
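
That doubling argument is easy to put into numbers. A minimal sketch (the generation counts are arbitrary; nothing here comes from the study):

```python
# Minimal sketch of the split-and-regrow scaling described above:
# each generation, every brick is split and regrown into two full bricks.
def bricks_after(generations, starting_bricks=1):
    return starting_bricks * 2 ** generations

for g in range(0, 11, 2):
    print(f"after {g:2d} generations: {bricks_after(g):5d} bricks")
```

After ten generations, a single starter brick has in principle become about a thousand.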

Researchers have only scratched the surface of the potential of engineered living materials. Other organisms could impart other living functions to material building blocks. For example, different bacteria could produce materials that heal themselves, sense and respond to external stimuli like pressure and temperature, or even light up. If nature can do it, living materials can be engineered to do it, too.

It also takes less energy to produce living buildings than standard ones. Making and transporting today’s building materials uses a lot of energy and emits a lot of CO2. For example, limestone is burned to make cement for concrete. Metals and sand are mined and melted to make steel and glass. The manufacture, transport and assembly of building materials account for 11% of global CO2 emissions. Cement production alone accounts for 8%. In contrast, some living materials, like our cyanobacteria bricks, could actually sequester CO2.

The field of engineered living materials is in its infancy, and further research and development is needed to bridge the gap between laboratory research and commercial availability. Challenges include cost, testing, certification and scaling up production. Consumer acceptance is another issue. For example, the construction industry has a negative perception of living organisms. Think mold, mildew, spiders, ants and termites. We’re hoping to shift that perception. Researchers working on living materials also need to address concerns about safety and biocontamination.

The [US] National Science Foundation recently named engineered living materials one of the country’s key research priorities. Synthetic biology and engineered living materials will play a critical role in tackling the challenges humans will face in the 2020s and beyond: climate change, disaster resilience, aging and overburdened infrastructure, and space exploration.

If you have time and interest, this is fascinating. Srubar is a little exuberant and, at this point, I welcome it.

Fitness

The Lithuanians are here for us. Scientists from the Kaunas University of Technology have just published a paper on better exercises for lower back pain in our increasingly sedentary times, from a March 23, 2020 Kaunas University of Technology press release (also on EurekAlert; Note: there are a few minor grammatical issues),

With a significant part of the global population forced to work from home, the occurrence of lower back pain may increase. Lithuanian scientists have devised a spinal stabilisation exercise programme for managing lower back pain in people who perform sedentary work. After testing the programme with 70 volunteers, the researchers have found that the exercises are not only efficient in diminishing non-specific lower back pain, but their effect lasts 3 times longer than that of a usual muscle strengthening exercise programme.

According to the World Health Organisation, lower back pain is among the top 10 diseases and injuries that are decreasing the quality of life across the global population. It is estimated that non-specific low back pain is experienced by 60% to 70% of people in industrialised societies. Moreover, it is the leading cause of activity limitation and work absence throughout much of the world. For example, in the United Kingdom, low back pain causes more than 100 million workdays lost per year, in the United States – an estimated 149 million.

Chronic lower back pain, which starts from long-term irritation or nerve injury, affects the emotions of the afflicted. Anxiety, bad mood and even depression, as well as malfunctions of other bodily systems – nausea, tachycardia, elevated arterial blood pressure – are among the conditions that may be caused by lower back pain.

During the coronavirus disease (COVID-19) outbreak, with a significant part of the global population working from home and not always having a properly designed office space, the occurrence of lower back pain may increase.

“Lower back pain is reaching epidemic proportions. Although it is usually clear what is causing the pain and its chronic nature, people tend to ignore these circumstances and are not willing to change their lifestyle. Lower back pain usually comes away itself, however, the chances of the recurring pain are very high”, says Dr Irina Klizienė, a researcher at Kaunas University of Technology (KTU) Faculty of Social Sciences, Humanities and Arts.

Dr Klizienė, together with colleagues from KTU and from Lithuanian Sports University, has designed a set of stabilisation exercises aimed at strengthening the muscles which support the spine at the lower back, i.e. the lumbar area. The exercise programme is based on Pilates methodology.

According to Dr Klizienė, the stability of lumbar segments is an essential element of body biomechanics. Previous research evidence shows that in order to avoid lower back pain it is crucial to strengthen the deep muscles, which stabilise the lumbar area of the spine. One of these muscles is the multifidus muscle.

“Human central nervous system is using several strategies, such as preparing for keeping the posture, preliminary adjustment to the posture, correcting the mistakes of the posture, which need to be rectified by specific stabilising exercises. Our aim was to design a set of exercises for this purpose”, explains Dr Klizienė.

The programme, designed by Dr Klizienė and her colleagues, comprises static and dynamic exercises, which train muscle strength and endurance. The static positions are to be held from 6 to 20 seconds; each exercise to be repeated 8 to 16 times.

Caption: The static positions are to be held from 6 to 20 seconds; each exercise to be repeated 8 to 16 times. Credit: KTU

The previous set is a little puzzling but perhaps you’ll find these ones below easier to follow,

Caption: The exercises are aimed at strengthening the muscles which support the spine at the lower back. Credit: KTU

I think more pictures of intervening moves would have been useful. Now, getting back to the press release,

In order to check the efficiency of the programme, 70 female volunteers were randomly assigned either to the lumbar stabilisation exercise programme or to a usual muscle strengthening exercise programme. Both groups exercised twice a week for 45 minutes for 20 weeks. During the experiment, ultrasound scanning of the muscles was carried out.

As early as 4 weeks into the lumbar stabilisation programme, the cross-sectional area of the multifidus muscle had increased in the stabilisation group; after completing the programme, this increase was statistically significant (p < 0.05). This change was not observed in the strengthening group.

Moreover, although both sets of exercises were efficient in eliminating lower back pain and strengthening the muscles of the lower back area, the effect of stabilisation exercises lasted 3 times longer – 12 weeks after the completion of the stabilisation programme against 4 weeks after the completion of the muscle strengthening programme.

“There are only a handful of studies, which have directly compared the efficiency of stabilisation exercises against other exercises in eliminating lower back pain”, says Dr Klizienė, “however, there are studies proving that after a year, lower back pain returned only to 30% of people who have completed a stabilisation exercise programme, and to 84% of people who haven’t taken these exercises. After three years these proportions are 35% and 75%.”

According to her, research shows that the spine stabilisation exercises are more efficient than medical intervention or usual physical activities in curing the lower back pain and avoiding the recurrence of the symptoms in the future.

Here’s a link to and a citation for the paper,

Effect of different exercise programs on non-specific chronic low back pain and disability in people who perform sedentary work by Saule Sipaviciene and Irina Kliziene. Clinical Biomechanics March 2020 Volume 73, Pages 17–27 DOI: https://doi.org/10.1016/j.clinbiomech.2019.12.028

This paper is behind a paywall.

Touchy robots and prosthetics

I have briefly speculated about the importance of touch elsewhere (see my July 19, 2019 posting regarding BlocKit and blockchain; scroll down about 50% of the way) but this upcoming news bit and the one following it put a different spin on the importance of touch.

Exceptional sense of touch

Robots need a sense of touch to perform their tasks and a July 18, 2019 National University of Singapore press release (also on EurekAlert) announces work on an improved sense of touch,

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS).

The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from the Department of Materials Science and Engineering at the NUS Faculty of Engineering, was first reported in prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hope of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Department of Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology (iHealthTech), N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems (HiFES) programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
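
The release describes the architecture only at a high level – many independent sensors sharing one conductor – so the toy simulation below is just a guess at the flavour of such a scheme. The per-sensor pulse widths, the timing and the decoding step are all invented for illustration; they are not the actual ACES encoding:

```python
# Toy sketch of an asynchronous, single-conductor sensor scheme.
# Each sensor fires independently and stamps its event with its own
# "signature" (here, a unique pulse width); a reader identifies which
# sensor fired from that signature. This illustrates the general idea,
# not the actual ACES encoding.
import heapq, random

random.seed(0)

NUM_SENSORS = 8
# Hypothetical per-sensor pulse widths in nanoseconds (unique per sensor).
pulse_width = {s: 10 + 5 * s for s in range(NUM_SENSORS)}

# Generate independent touch events: (time_ns, sensor_id).
events = [(random.uniform(0, 1000), s) for s in range(NUM_SENSORS) for _ in range(3)]
heapq.heapify(events)  # the shared conductor sees them in time order

def identify(width_ns):
    """Reader side: map an observed pulse width back to a sensor id."""
    return min(pulse_width, key=lambda s: abs(pulse_width[s] - width_ns))

while events:
    t, sensor = heapq.heappop(events)
    observed = pulse_width[sensor] + random.gauss(0, 0.5)   # small analog noise
    print(f"t={t:7.1f} ns  pulse={observed:5.1f} ns  -> sensor {identify(observed)}")
```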

Smart electronic skins for robots and prosthetics

ACES’ simple wiring system and remarkable responsiveness even with increasing numbers of sensors are key characteristics that will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

For those who like videos, the researchers have prepared this,

Here’s a link to and a citation for the paper,

A neuro-inspired artificial peripheral nervous system for scalable electronic skins by Wang Wei Lee, Yu Jun Tan, Haicheng Yao, Si Li, Hian Hian See, Matthew Hon, Kian Ann Ng, Betty Xiong, John S. Ho and Benjamin C. K. Tee. Science Robotics Vol 4, Issue 32 31 July 2019 eaax2198 DOI: 10.1126/scirobotics.aax2198 Published online first: 17 Jul 2019

This paper is behind a paywall.

Picking up a grape and holding his wife’s hand

This story comes from the Canadian Broadcasting Corporation (CBC) Radio with a six minute story embedded in the text, from a July 25, 2019 CBC Radio ‘As It Happens’ article by Sheena Goodyear,

The West Valley City, Utah, real estate agent [Keven Walgamott] lost his left hand in an electrical accident 17 years ago. Since then, he’s tried out a few different prosthetic limbs, but always found them too clunky and uncomfortable.

Then he decided to work with the University of Utah in 2016 to test out new prosthetic technology that mimics the sensation of human touch, allowing Walgamott to perform delicate tasks with precision — including shaking his wife’s hand. 

“I extended my left hand, she came and extended hers, and we were able to feel each other with the left hand for the first time in 13 years, and it was just a marvellous and wonderful experience,” Walgamott told As It Happens guest host Megan Williams. 

Walgamott, one of seven participants in the University of Utah study, was able to use an advanced prosthetic hand called the LUKE Arm to pick up an egg without cracking it, pluck a single grape from a bunch, hammer a nail, take a ring on and off his finger, fit a pillowcase over a pillow and more. 

While performing the tasks, Walgamott was able to actually feel the items he was holding and correctly gauge the amount of pressure he needed to exert — mimicking a process the human brain does automatically.

“I was able to feel something in each of my fingers,” he said. “What I feel, I guess the easiest way to explain it, is little electrical shocks.”

Those shocks — which he describes as a kind of a tingling sensation — intensify as he tightens his grip.

“Different variations of the intensity of the electricity as I move my fingers around and as I touch things,” he said. 

To make that [sense of touch] happen, the researchers implanted electrodes into the nerves on Walgamott’s forearm, allowing his brain to communicate with his prosthetic through a computer outside his body. That means he can move the hand just by thinking about it.

But those signals also work in reverse.

The team attached sensors to the hand of a LUKE Arm. Those sensors detect touch and positioning, and send that information to the electrodes so it can be interpreted by the brain.

For Walgamott, performing a series of menial tasks as a team of scientists recorded his progress was “fun to do.”

“I’d forgotten how well two hands work,” he said. “That was pretty cool.”

But it was also a huge relief from the phantom limb pain he has experienced since the accident, which he describes as a “burning sensation” in the place where his hand used to be.

A July 24, 2019 University of Utah news release (also on EurekAlert) provides more detail about the research,

Keven Walgamott had a good “feeling” about picking up the egg without crushing it.

What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by U biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (so named after the robotic hand that Luke Skywalker got in “The Empire Strikes Back”) to mimic the way a human hand feels objects by sending the appropriate signals to the brain. Their findings were published in a new paper co-authored by U biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark and other colleagues in the latest edition of the journal Science Robotics. A copy of the paper may be obtained by emailing robopak@aaas.org.

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the U, was able to pluck grapes without crushing them, pick up an egg without cracking it and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made mostly of metal motors and parts with a clear silicone “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the U’s team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by U biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array. The array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. To perform tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.
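
As a rough caricature of the behaviour described above – a burst of impulses on first contact that tapers toward a steady rate – one might write something like the following. The exponential form, time constant and rates are arbitrary placeholders, not the published model built from the primate recordings:

```python
# Rough caricature of "burst on contact, then taper off" stimulation, as
# described above. Time constants, rates and the exponential form are
# arbitrary assumptions for illustration, not the published model.
import math

def stimulation_rate(t_since_contact, pressure,
                     burst_rate=300.0, steady_rate=60.0, tau=0.15):
    """Impulses per second at time t (seconds) after first contact,
    scaled by normalized grip pressure in [0, 1]."""
    transient = (burst_rate - steady_rate) * math.exp(-t_since_contact / tau)
    return pressure * (steady_rate + transient)

for t in [0.0, 0.05, 0.1, 0.2, 0.5, 1.0]:
    print(f"t = {t:4.2f} s  ->  {stimulation_rate(t, pressure=0.8):6.1f} impulses/s")
```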

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the U’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Here’s a link to and a citation for the paper,

Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand by J. A. George, D. T. Kluger, T. S. Davis, S. M. Wendelken, E. V. Okorokova, Q. He, C. C. Duncan, D. T. Hutchinson, Z. C. Thumser, D. T. Beckler, P. D. Marasco, S. J. Bensmaia and G. A. Clark. Science Robotics Vol. 4, Issue 32, eaax2352 31 July 2019 DOI: 10.1126/scirobotics.aax2352 Published online first: 24 Jul 2019

This paper is definitely behind a paywall.

The University of Utah researchers have produced a video highlighting their work,

Smartphone as augmented reality system with software from Brown University

You need to see this,

Amazing, eh? The researchers are scheduled to present this work sometime this week at the ACM Symposium on User Interface Software and Technology (UIST) being held in New Orleans, US, from October 20-23, 2019.

Here’s more about ‘Portal-ble’ in an October 16, 2019 news item on ScienceDaily,

A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.

The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.

“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”

An October 16, 2019 Brown University news release (also on EurekAlert), which originated the news item, provides more detail,

Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.

“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”

The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.
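The actual Portal-ble source code is on the researchers' website; just to illustrate the kind of check the sensor data feeds into, here is a minimal sketch (hypothetical function and parameter names, not the Portal-ble API) of deciding whether a tracked hand has grabbed a nearby virtual object.

```python
import numpy as np

GRAB_RADIUS = 0.08  # metres: how close the hand must be to an object's centre to grab it

def try_grab(hand_pos, pinch_strength, objects, pinch_threshold=0.7):
    """Return the index of the virtual object the hand grabs, or None.
    hand_pos: (x, y, z) of the hand reported by the tracking sensor, in the phone's frame.
    objects: N x 3 array of virtual object centres in the same frame."""
    if pinch_strength < pinch_threshold:            # fingers not closed enough to grab
        return None
    distances = np.linalg.norm(objects - hand_pos, axis=1)
    nearest = int(np.argmin(distances))
    return nearest if distances[nearest] < GRAB_RADIUS else None

# Three virtual blocks; the hand is pinching right next to the second one
blocks = np.array([[0.0, 0.0, 0.3], [0.1, 0.05, 0.25], [-0.2, 0.1, 0.4]])
print(try_grab(np.array([0.09, 0.05, 0.26]), pinch_strength=0.9, objects=blocks))  # -> 1
```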

Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.

“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”

To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.

“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”

The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that phone vibrations helped users to interact. Users feel the vibrations in the hand they’re using to hold the phone, not in the hand that’s actually grabbing for the virtual object. Still, Huang said, the vibration feedback helped users to interact with objects more successfully.
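How forgiving those accommodations should be is exactly what the user studies tuned. As a continuation of the sketch above (again my own illustration with hypothetical names and radii, not Portal-ble's code), the same distance test can drive layered feedback: highlight an object as soon as the hand gets close, and fire a short vibration only when a grab actually lands.

```python
import numpy as np

def interaction_feedback(hand_pos, pinch_strength, objects,
                         highlight_radius=0.15, grab_radius=0.08, pinch_threshold=0.7):
    """Sketch of layered feedback: a visual highlight when the hand is merely near an
    object, and a vibration pulse only when a grab succeeds. The radii are deliberately
    generous so imprecise grabs still land; tuning them is what the user studies did."""
    distances = np.linalg.norm(objects - hand_pos, axis=1)
    nearest = int(np.argmin(distances))
    near = distances[nearest] < highlight_radius
    grabbed = near and distances[nearest] < grab_radius and pinch_strength > pinch_threshold
    return {"highlight": nearest if near else None,   # which object to glow, if any
            "grabbed": nearest if grabbed else None,  # which object to attach to the hand
            "vibrate": grabbed}                       # fire a short haptic pulse

blocks = np.array([[0.0, 0.0, 0.3], [0.1, 0.05, 0.25], [-0.2, 0.1, 0.4]])
print(interaction_feedback(np.array([0.14, 0.05, 0.26]), 0.3, blocks))  # near block 1, no grab
print(interaction_feedback(np.array([0.10, 0.05, 0.26]), 0.9, blocks))  # grabs block 1, vibrates
```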

In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.

Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires an external infrared sensor and a compute stick for extra processing power.

Huang hopes people will download the freely available source code and try it for themselves. 
“We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”

Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.

You can find the conference paper here on jeffhuang.com,

Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality by Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin, John F. Hughes, Jeff Huang. Brown University, Providence RI, USA; Southeast University, Nanjing, China. Presented at the ACM Symposium on User Interface Software and Technology (UIST 2019), New Orleans, US, October 20-23, 2019

This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.

Turn yourself into a robot

Turning yourself into a robot is a little easier than I would have thought,

William Weir’s September 19, 2018 Yale University news release (also on EurekAlert) covers some of the same ground and fills in a few details,

When you think of robotics, you likely think of something rigid, heavy, and built for a specific purpose. New “Robotic Skins” technology developed by Yale researchers flips that notion on its head, allowing users to animate the inanimate and turn everyday objects into robots.

Developed in the lab of Rebecca Kramer-Bottiglio, assistant professor of mechanical engineering & materials science, robotic skins enable users to design their own robotic systems. Although the skins are designed with no specific task in mind, Kramer-Bottiglio said, they could be used for everything from search-and-rescue robots to wearable technologies. The results of the team’s work are published today in Science Robotics.

The skins are made from elastic sheets embedded with sensors and actuators developed in Kramer-Bottiglio’s lab. Placed on a deformable object — a stuffed animal or a foam tube, for instance — the skins animate these objects from their surfaces. The makeshift robots can perform different tasks depending on the properties of the soft objects and how the skins are applied.

“We can take the skins and wrap them around one object to perform a task — locomotion, for example — and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” she said. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”

Robots are typically built with a single purpose in mind. The robotic skins, however, allow users to create multi-functional robots on the fly. That means they can be used in settings that hadn’t even been considered when they were designed, said Kramer-Bottiglio.

Additionally, using more than one skin at a time allows for more complex movements. For instance, Kramer-Bottiglio said, you can layer the skins to get different types of motion. “Now we can get combined modes of actuation — for example, simultaneous compression and bending.”

To demonstrate the robotic skins in action, the researchers created a handful of prototypes. These include foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct poor posture, and a device with a gripper that can grasp and move objects.

Kramer-Bottiglio said she came up with the idea for the devices a few years ago when NASA [US National Aeronautics and Space Administration] put out a call for soft robotic systems. The technology was designed in partnership with NASA, and its multifunctional and reusable nature would allow astronauts to accomplish an array of tasks with the same reconfigurable material. The same skins used to make a robotic arm out of a piece of foam could be removed and applied to create a soft Mars rover that can roll over rough terrain. With the robotic skins on board, the Yale scientist said, anything from balloons to balls of crumpled paper could potentially be made into a robot with a purpose.

“One of the main things I considered was the importance of multifunctionality, especially for deep space exploration where the environment is unpredictable,” she said. “The question is: How do you prepare for the unknown unknowns?”

For the same line of research, Kramer-Bottiglio was recently awarded a $2 million grant from the National Science Foundation, as part of its Emerging Frontiers in Research and Innovation program.

Next, she said, the lab will work on streamlining the devices and explore the possibility of 3D printing the components.

Just in case the link to the paper becomes obsolete, here’s a citation for the paper,

OmniSkins: Robotic skins that turn inanimate objects into multifunctional robots by Joran W. Booth, Dylan Shah, Jennifer C. Case, Edward L. White, Michelle C. Yuen, Olivier Cyr-Choiniere, and Rebecca Kramer-Bottiglio. Science Robotics 19 Sep 2018: Vol. 3, Issue 22, eaat1853 DOI: 10.1126/scirobotics.aat1853

This paper is behind a paywall.

Bringing memristors to the masses and cutting down on energy use

One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)

In a sense this July 30, 2018 news item on Nanowerk is a return to the beginning,

A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.

This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.

“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.

Memristors might be the answer. Named as a portmanteau of memory and resistor, they can be programmed to have different resistance states–meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data transfer bottleneck experienced by conventional computers in which the memory is separate from the processor.

A July 30, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, expands on the theme,

… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.

Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.
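In other words, the analog column currents are thresholded into discrete levels before they are treated as numbers. Here is a minimal sketch of that digitization step (Python, with illustrative current ranges and a one-bit default that are my own assumptions, not the device's actual levels):

```python
import numpy as np

def digitize_currents(currents, n_bits=1, i_max=1e-4):
    """Map analog column currents (amps) onto discrete bit values.
    With n_bits=1 this is a simple threshold at half the full-scale current;
    the ranges here are illustrative, not the device's actual levels."""
    levels = 2 ** n_bits
    step = i_max / levels
    return np.clip((currents / step).astype(int), 0, levels - 1)

print(digitize_currents(np.array([0.2e-4, 0.8e-4])))  # -> [0 1]
```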

Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.

The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.

Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.

“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
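The physical laws Lu refers to are Ohm's law and Kirchhoff's current law: each memristor's conductance stores a matrix entry, the input vector is applied as voltages along the rows, and the current collected at each column is the corresponding dot product. A minimal, idealized simulation of that read operation (ignoring device noise and wire resistance) looks like this:

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Idealized memristor crossbar read: G[i, j] is the conductance of the device at
    row i, column j. Applying the input vector as row voltages V gives column currents
    I[j] = sum_i V[i] * G[i, j], a full matrix-vector product in a single step."""
    return voltages @ conductances

G = np.array([[1.0, 2.0],      # matrix entries stored as conductances (arbitrary units)
              [3.0, 4.0]])
V = np.array([0.5, 1.0])       # input vector applied as voltages along the rows
print(crossbar_mvm(G, V))      # column currents: [0.5*1 + 1.0*3, 0.5*2 + 1.0*4] = [3.5, 5.0]
```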

His team chose to solve partial differential equations as a test for a 32×32 memristor array—which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.

When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.
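To see why a differential equation solve maps so naturally onto such an array: discretizing the equation with finite differences turns it into repeated matrix-vector products, which is exactly what the crossbar delivers in one step. Below is a toy 1D Poisson problem solved by Jacobi iteration, sized to a single 32×32 block; it is my own illustration of the mapping, not the plasma-reactor simulation from the paper.

```python
import numpy as np

n = 32                      # one block, matching the 32x32 array in the demonstration
h = 1.0 / (n + 1)
f = np.ones(n)              # source term of -u'' = f on (0, 1) with u(0) = u(1) = 0

# Finite-difference Laplacian: the matrix that would be programmed into the crossbar
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Jacobi iteration: each update needs one matrix-vector product, i.e. one crossbar read
u = np.zeros(n)
D = np.diag(A)
for _ in range(5000):
    u = u + (f - A @ u) / D

x = np.linspace(h, 1.0 - h, n)
exact = 0.5 * x * (1.0 - x)     # analytic solution of -u'' = 1 with zero boundaries
print(f"max error after Jacobi iteration: {np.max(np.abs(u - exact)):.1e}")
```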

This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.

It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).

Here’s a link and a citation for the paper,

A general memristor-based partial differential equation solver by Mohammed A. Zidan, YeonJoo Jeong, Jihang Lee, Bing Chen, Shuo Huang, Mark J. Kushner & Wei D. Lu. Nature Electronics volume 1, pages 411–420 (2018) DOI: https://doi.org/10.1038/s41928-018-0100-6 Published: 13 July 2018

This paper is behind a paywall.

For the curious, Dr. Lu’s startup company, Crossbar, can be found here.

Injectable bandages for internal bleeding and hydrogel for the brain

This injectable bandage could be a gamechanger (as they say) if it can be taken beyond the ‘in vitro’ (i.e., petri dish) testing stage. A May 22, 2018 news item on Nanowerk makes the announcement (Note: A link has been removed),

While several products are available to quickly seal surface wounds, rapidly stopping fatal internal bleeding has proven more difficult. Now researchers from the Department of Biomedical Engineering at Texas A&M University are developing an injectable hydrogel bandage that could save lives in emergencies such as penetrating shrapnel wounds on the battlefield (Acta Biomaterialia, “Nanoengineered injectable hydrogels for wound healing application”).

A May 22, 2018 US National Institute of Biomedical Imaging and Bioengineering (NIBIB) news release, which originated the news item, provides more detail (Note: Links have been removed),

The researchers combined a hydrogel base (a water-swollen polymer) and nanoparticles that interact with the body’s natural blood-clotting mechanism. “The hydrogel expands to rapidly fill puncture wounds and stop blood loss,” explained Akhilesh Gaharwar, Ph.D., assistant professor and senior investigator on the work. “The surface of the nanoparticles attracts blood platelets that become activated and start the natural clotting cascade of the body.”

Enhanced clotting when the nanoparticles were added to the hydrogel was confirmed by standard laboratory blood clotting tests. Clotting time was reduced from eight minutes to six minutes when the hydrogel was introduced into the mixture. When nanoparticles were added, clotting time was significantly reduced, to less than three minutes.

In addition to the rapid clotting mechanism of the hydrogel composite, the engineers took advantage of special properties of the nanoparticle component. They found they could use the electric charge of the nanoparticles to add growth factors that efficiently adhered to the particles. “Stopping fatal bleeding rapidly was the goal of our work,” said Gaharwar. “However, we found that we could attach growth factors to the nanoparticles. This was an added bonus because the growth factors act to begin the body’s natural wound healing process—the next step needed after bleeding has stopped.”

The researchers were able to attach vascular endothelial growth factor (VEGF) to the nanoparticles. They tested the hydrogel/nanoparticle/VEGF combination in a cell culture test that mimics the wound healing process. The test uses a petri dish with a layer of endothelial cells on the surface that create a solid skin-like sheet. The sheet is then scratched down the center creating a rip or hole in the sheet that resembles a wound.

When the hydrogel containing VEGF bound to the nanoparticles was added to the damaged endothelial cell wound, the cells were induced to grow back and fill-in the scratched region—essentially mimicking the healing of a wound.

“Our laboratory experiments have verified the effectiveness of the hydrogel for initiating both blood clotting and wound healing,” said Gaharwar. “We are anxious to begin tests in animals with the hope of testing and eventual use in humans where we believe our formulation has great potential to have a significant impact on saving lives in critical situations.”

The work was funded by grant EB023454 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and the National Science Foundation. The results were reported in the February issue of the journal Acta Biomaterialia.

The paper was published back in April 2018 and there was an April 2, 2018 Texas A&M University news release on EurekAlert making the announcement (and providing a few unique details),

A penetrating injury from shrapnel is a serious battlefield wound that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need for materials that can be quickly self-administered to prevent fatal blood loss.

With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.

In a recent article “Nanoengineered Injectable Hydrogels for Wound Healing Application” published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.

“Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches,” said Gaharwar. “An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis.”

The study uses a commonly used thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are a 3-D water swollen polymer network, similar to Jell-O, simulating the structure of human tissues.

When kappa-carrageenan is mixed with clay-based nanoparticles, an injectable gel is obtained. The charged characteristics of the clay-based nanoparticles give the hydrogels their hemostatic ability. Specifically, plasma proteins and platelets from blood adsorb onto the gel surface and trigger a blood clotting cascade.

“Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound” said Giriraj Lokhande, a graduate student in Gaharwar’s lab and first author of the paper. “The negative surface charge of nanoparticles enabled electrostatic interactions with therapeutics thus resulting in the slow release of therapeutics.”

Nanoparticles that promote blood clotting and wound healing (red discs), attached to the wound-filling hydrogel component (black) form a nanocomposite hydrogel. The gel is designed to be self-administered to stop bleeding and begin wound-healing in emergency situations. Credit: Lokhande, et al.

Here’s a link to and a citation for the paper,

Nanoengineered injectable hydrogels for wound healing application by Giriraj Lokhande, James K. Carrow, Teena Thakur, Janet R. Xavier, Madasamy Parani, Kayla J. Bayless, Akhilesh K. Gaharwar. Acta Biomaterialia Volume 70, 1 April 2018, Pages 35-47 DOI: https://doi.org/10.1016/j.actbio.2018.01.045

This paper is behind a paywall.

Hydrogel and the brain

It’s been an interesting week for hydrogels. On May 21, 2018 there was a news item on ScienceDaily about a bioengineered hydrogel which stimulated brain tissue growth after a stroke (mouse model),

In a first-of-its-kind finding, a new stroke-healing gel helped regrow neurons and blood vessels in mice with stroke-damaged brains, UCLA researchers report in the May 21 issue of Nature Materials.

“We tested this in laboratory mice to determine if it would repair the brain in a model of stroke, and lead to recovery,” said Dr. S. Thomas Carmichael, Professor and Chair of neurology at UCLA. “This study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”

The brain has a limited capacity for recovery after stroke and other diseases. Unlike some other organs in the body, such as the liver or skin, the brain does not regenerate new connections, blood vessels or new tissue structures. Tissue that dies in the brain from stroke is absorbed, leaving a cavity, devoid of blood vessels, neurons or axons, the thin nerve fibers that project from neurons.

After 16 weeks, stroke cavities in mice contained regenerated brain tissue, including new neural networks — a result that had not been seen before. The mice with new neurons showed improved motor behavior, though the exact mechanism wasn’t clear.

Remarkable stuff.