Tag Archives: University of Bath

‘SWEET’ (smart, wearable, and eco-friendly electronic textiles)

I always appreciate a good acronym and this one is pretty good. (From my perspective, a good acronym is memorable and doesn’t involve tortured terminology such as CRISPR-Cas9, which stands for clustered regularly interspaced short palindromic repeats-CRISPR-associated protein 9).

On to ‘SWEET’ and a January 2, 2025 news item on ScienceDaily announcing a new UK study on wearable e-textiles,

A research team led by the University of Southampton and UWE Bristol [University of the West of England Bristol] has shown wearable electronic textiles (e-textiles) can be both sustainable and biodegradable.

A new study, which also involved the universities of Exeter, Cambridge, Leeds and Bath, describes and tests a new sustainable approach for fully inkjet-printed, eco-friendly e-textiles named ‘Smart, Wearable, and Eco-friendly Electronic Textiles’, or ‘SWEET’.

A January 2, 2025 University of Southampton press release (also on EurekAlert), which originated the news item, describes e-textiles and how this latest work represents a step forward in making them environmentally friendly,

E-textiles are those with embedded electrical components, such as sensors, batteries or lights. They might be used in fashion, for performance sportswear, or for medical purposes as garments that monitor people’s vital signs.

Such textiles need to be durable, safe to wear and comfortable, but also, in an industry which is increasingly concerned with clothing waste, they need to be kind to the environment when no longer required.

Professor Nazmul Karim at the University of Southampton’s Winchester School of Art, who led the study, explains: “Integrating electrical components into conventional textiles complicates the recycling of the material because it often contains metals, such as silver, that don’t easily biodegrade. Our potential ecofriendly approach for selecting sustainable materials and manufacturing overcomes this, enabling the fabric to decompose when it is disposed of.”

The team’s design has three layers: a sensing layer, a layer to interface with the sensors, and a base fabric. It uses a textile called Tencel for the base, which is made from renewable wood and is biodegradable. The active electronics in the design are made from graphene, along with a polymer called PEDOT:PSS. These conductive materials are precision inkjet-printed onto the fabric.

The researchers tested samples of the material for continuous monitoring of human physiology using five volunteers. Swatches of the fabric, connected to monitoring equipment, were attached to gloves worn by the participants. Results confirmed the material can effectively and reliably measure both heart rate and temperature at the industry standard level.

Dr Shaila Afroj, an Associate Professor of Sustainable Materials from the University of Exeter and a co-author of the study, highlighted the importance of this performance: “Achieving reliable, industry-standard monitoring with eco-friendly materials is a significant milestone. It demonstrates that sustainability doesn’t have to come at the cost of functionality, especially in critical applications like healthcare.”

The project team then buried the e-textiles in soil to measure their biodegradability. After four months, the fabric had lost 48 percent of its weight and 98 percent of its strength, suggesting relatively rapid and effective decomposition. Furthermore, a life cycle assessment revealed the graphene-based electrodes had up to 40 times less impact on the environment than standard electrodes.

Marzia Dulal from UWE Bristol, a Commonwealth PhD Scholar and the first author of the study, highlighted the environmental impact: “Our life cycle analysis shows that graphene-based e-textiles have a fraction of the environmental footprint compared to traditional electronics. This makes them a more responsible choice for industries looking to reduce their ecological impact.”

The inkjet printing process is also a more sustainable approach to e-textile fabrication, depositing precise amounts of functional material onto textiles as needed, with almost no material waste and less use of water and energy than conventional screen printing.

Professor Karim concludes: “Amid rising pollution from landfill sites, our study helps to address a lack of research in the area of biodegradation of e-textiles. These materials will become increasingly important in our lives, particularly in the area of healthcare, so it’s really important we consider how to make them more eco-friendly, both in their manufacturing and disposal.”

The researchers hope they can now move forward with designing wearable garments made from SWEET for potential use in the healthcare sector, particularly in the early detection and prevention of heart disease, which affects 640 million people worldwide (source: BHF [British Heart Foundation]).

Here’s a link to and a citation for the paper,

Sustainable, Wearable, and Eco-Friendly Electronic Textiles by Marzia Dulal, Harsh Rajesh Mansukhlal Modha, Jingqi Liu, Md Rashedul Islam, Chris Carr, Tawfique Hasan, Robin Michael Statham Thorn, Shaila Afroj, Nazmul Karim. Energy & Environmental Materials DOI: https://doi.org/10.1002/eem2.12854 First published: 18 December 2024

This paper is open access.

AI and the 2024 Nobel prizes

Artificial intelligence made a splash when the 2024 Nobel Prize announcements were made: it was a key factor in both the physics and the chemistry prizes.

Where do physics, chemistry, and AI go from here?

I have a few speculative pieces about physics, chemistry, and AI. First off, we have an October 10, 2024 essay for The Conversation by Nello Cristianini, Professor of Artificial Intelligence at the University of Bath (England), Note: Links have been removed,

The 2024 Nobel Prizes in physics and chemistry have given us a glimpse of the future of science. Artificial intelligence (AI) was central to the discoveries honoured by both awards. You have to wonder what Alfred Nobel, who founded the prizes, would think of it all.

We are certain to see many more Nobel medals handed to researchers who made use of AI tools. As this happens, we may find the scientific methods honoured by the Nobel committee depart from straightforward categories like “physics”, “chemistry” and “physiology or medicine”.

We may also see the scientific backgrounds of recipients retain a looser connection with these categories. This year’s physics prize was awarded to the American John Hopfield, at Princeton University, and British-born Geoffrey Hinton, from the University of Toronto. While Hopfield is a physicist, Hinton studied experimental psychology before gravitating to AI.

The chemistry prize was shared between biochemist David Baker, from the University of Washington, and the computer scientists Demis Hassabis and John Jumper, who are both at Google DeepMind in the UK.

There is a close connection between the AI-based advances honoured in the physics and chemistry categories. Hinton helped develop an approach used by DeepMind to make its breakthrough in predicting the shapes of proteins.

The physics laureates, Hinton in particular, laid the foundations of the powerful field known as machine learning. This is a subset of AI that’s concerned with algorithms, sets of rules for performing specific computational tasks.

Hopfield’s work is not particularly in use today, but the backpropagation algorithm (co-invented by Hinton) has had a tremendous impact on many different sciences and technologies. Backpropagation is used to train neural networks, a model of computing that mimics the human brain’s structure and function to process data; it allows scientists to “train” enormous networks by propagating prediction errors backwards through their layers. While the Nobel committee did its best to connect this influential algorithm to physics, it’s fair to say that the link is not a direct one.
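For readers who want to see what “training” by backpropagation looks like in practice, here’s a minimal sketch in Python/NumPy. (The two-layer network, the XOR toy data, the learning rate, and the iteration count are all my own illustrative choices, not anything drawn from the laureates’ work.)

```python
import numpy as np

# Toy task: XOR, a classic problem a single-layer network cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the prediction error
    # backwards through the layers using the chain rule.
    dp = (p - y) * p * (1 - p)        # error gradient at the output
    dh = (dp @ W2.T) * h * (1 - h)    # error gradient at the hidden layer

    # Gradient descent: nudge every weight against its gradient.
    W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

print(p.round(2))  # typically close to [[0], [1], [1], [0]] after training
```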

Every two years, since 1994, scientists have been holding a contest to find the best ways to predict protein structures and shapes from the sequences of their amino acids. The competition is called Critical Assessment of Structure Prediction (CASP).

For the past few contests, CASP winners have used some version of DeepMind’s AlphaFold. There is, therefore, a direct line to be drawn from Hinton’s backpropagation to Google DeepMind’s AlphaFold 2 breakthrough.

Attributing credit has always been a controversial aspect of the Nobel prizes. A maximum of three researchers can share a Nobel. But big advances in science are collaborative. Scientific papers may have 10, 20, 30 authors or more. More than one team might contribute to the discoveries honoured by the Nobel committee.

This year we may have further discussions about the attribution of the research on the backpropagation algorithm, which has been claimed by various researchers, as well as about the general attribution of a discovery to a field like physics.

We now have a new dimension to the attribution problem. It’s increasingly unclear whether we will always be able to distinguish between the contributions of human scientists and those of their artificial collaborators – the AI tools that are already helping push forward the boundaries of our knowledge.

This November 26, 2024 news item on ScienceDaily, which is a little repetitive, considers interdisciplinarity in relation to the 2024 Nobel prizes,

In 2024, the Nobel Prize in physics was awarded to John Hopfield and Geoffrey Hinton for their foundational work in artificial intelligence (AI), and the Nobel Prize in chemistry went to David Baker, Demis Hassabis, and John Jumper for using AI to solve the protein-folding problem, a 50-year grand challenge problem in science.

A new article, written by researchers at Carnegie Mellon University and Calculation Consulting, examines the convergence of physics, chemistry, and AI, highlighted by recent Nobel Prizes. It traces the historical development of neural networks, emphasizing the role of interdisciplinary research in advancing AI. The authors advocate for nurturing AI-enabled polymaths to bridge the gap between theoretical advancements and practical applications, driving progress toward artificial general intelligence. The article is published in Patterns.

“With AI being recognized in connections to both physics and chemistry, practitioners of machine learning may wonder how these sciences relate to AI and how these awards might influence their work,” explained Ganesh Mani, Professor of Innovation Practice and Director of Collaborative AI at Carnegie Mellon’s Tepper School of Business, who coauthored the article. “As we move forward, it is crucial to recognize the convergence of different approaches in shaping modern AI systems based on generative AI.”

A November 25, 2024 Carnegie Mellon University (CMU) news release, which originated the news item, describes the paper,

In their article, the authors explore the historical development of neural networks. By examining the history of AI development, they contend, we can understand more thoroughly the connections among computer science, theoretical chemistry, theoretical physics, and applied mathematics. The historical perspective illuminates how foundational discoveries and inventions across these disciplines have enabled modern machine learning with artificial neural networks. 

Then they turn to key breakthroughs and challenges in this field, starting with Hopfield’s work, and go on to explain how engineering has at times preceded scientific understanding, as is the case with the work of Jumper and Hassabis.

The authors conclude with a call to action, suggesting that the rapid progress of AI across diverse sectors presents both unprecedented opportunities and significant challenges. To bridge the gap between hype and tangible development, they say, a new generation of interdisciplinary thinkers must be cultivated.

These “modern-day Leonardo da Vincis,” as the authors call them, will be crucial in developing practical learning theories that can be applied immediately by engineers, propelling the field toward the ambitious goal of artificial general intelligence.

This calls for a paradigm shift in how scientific inquiry and problem solving are approached, say the authors, one that embraces holistic, cross-disciplinary collaboration and learns from nature to understand nature. By breaking down silos between fields and fostering a culture of intellectual curiosity that spans multiple domains, innovative solutions can be identified to complex global challenges like climate change. Through this synthesis of diverse knowledge and perspectives, catalyzed by AI, meaningful progress can be made and the field can realize the full potential of technological aspirations.

“This interdisciplinary approach is not just beneficial but essential for addressing the many complex challenges that lie ahead,” suggests Charles Martin, Principal Consultant at Calculation Consulting, who coauthored the article. “We need to harness the momentum of current advancements while remaining grounded in practical realities.”

The authors acknowledge the contributions of Scott E. Fahlman, Professor Emeritus in Carnegie Mellon’s School of Computer Science.

Here’s a link to and a citation for the paper,

The recent Physics and Chemistry Nobel Prizes, AI, and the convergence of knowledge fields by Charles H. Martin, Ganesh Mani. Patterns, 2024 DOI: 10.1016/j.patter.2024.101099 Published online November 25, 2024 Copyright: © 2024 The Author(s). Published by Elsevier Inc.

This paper is open access under a Creative Commons Attribution licence (CC BY 4.0).

A scientific enthusiast: “I was a beta tester for the Nobel prize-winning AlphaFold AI”

From an October 11, 2024 essay by Rivka Isaacson (Professor of Molecular Biophysics, King’s College London) for The Conversation, Note: Links have been removed,

The deep learning machine AlphaFold, which was created by Google’s AI research lab DeepMind, is already transforming our understanding of the molecular biology that underpins health and disease.

One half of the 2024 Nobel prize in chemistry went to David Baker from the University of Washington in the US, with the other half jointly awarded to Demis Hassabis and John M. Jumper, both from London-based Google DeepMind.

If you haven’t heard of AlphaFold, it may be difficult to appreciate how important it is becoming to researchers. But as a beta tester for the software, I got to see first-hand how this technology can reveal the molecular structures of different proteins in minutes. It would take researchers months or even years to unpick these structures in laboratory experiments.

This technology could pave the way for revolutionary new treatments and drugs. But first, it’s important to understand what AlphaFold does.

Proteins are produced by a series of molecular “beads”, created from a selection of the human body’s 20 different amino acids. These beads form a long chain that folds up into a mechanical shape that is crucial for the protein’s function.

Their sequence is determined by DNA. And while DNA research means we know the order of the beads that build most proteins, it’s always been a challenge to predict how the chain folds up into each “3D machine”.

These protein structures underpin all of biology. Scientists study them in the same way you might take a clock apart to understand how it works. Comprehend the parts and put together the whole: it’s the same with the human body.

Proteins are tiny, with a huge number located inside each of our 30 trillion cells. This meant that, for decades, the only way to find out their shape was through laborious experimental methods – studies that could take years.

Throughout my career I, along with many other scientists, have been engaged in such pursuits. Every time we solve a protein structure, we deposit it in a global database called the Protein Data Bank, which is free for anyone to use.

AlphaFold was trained on these structures, the majority of which were found using X-ray crystallography. For this technique, proteins are tested under thousands of different chemical states, with variations in temperature, density and pH. Researchers use a microscope to identify the conditions under which each protein lines up in a particular formation. These are then shot with X-rays to work out the spatial arrangement of all the atoms in that protein.

Addictive experience

In March 2024, researchers at DeepMind approached me to beta test AlphaFold3, the latest incarnation of the software, which was close to release at the time.

I’ve never been a gamer but I got a taste of the addictive experience as, once I got access, all I wanted to do was spend hours trying out molecular combinations. As well as lightning speed, this new version introduced the option to include bigger and more varied molecules, including DNA and metals, and the opportunity to modify amino acids to mimic chemical signalling in cells.

Understanding the moving parts and dynamics of proteins is the next frontier, now that we can predict static protein shapes with AlphaFold. Proteins come in a huge variety of shapes and sizes. They can be rigid or flexible, or made of neatly structured units connected by bendy loops.

You can read Isaacson’s entire October 11, 2024 essay on The Conversation or in an October 14, 2024 news item on phys.org.

Preventing warmed-up vaccines from becoming useless

One of the major problems with vaccines is that they need to be refrigerated. (The Nanopatch, which additionally wouldn’t require needles or syringes, is my favourite proposed solution and it comes from Australia.) This latest research into making vaccines more long-lasting is from the UK and takes a different approach to the problem.

From a June 8, 2020 news item on phys.org,

Vaccines are notoriously difficult to transport to remote or dangerous places, as they spoil when not refrigerated. Formulations are safe between 2°C and 8°C, but at other temperatures the proteins start to unravel, making the vaccines ineffective. As a result, millions of children around the world miss out on life-saving inoculations.

However, scientists have now found a way to prevent warmed-up vaccines from degrading. By encasing protein molecules in a silica shell, the structure remains intact even when heated to 100°C, or stored at room temperature for up to three years.

The technique for tailor-fitting a vaccine with a silica coat—known as ensilication—was developed by a Bath [University] team in collaboration with the University of Newcastle. This pioneering technology was seen to work in the lab two years ago, and now it has demonstrated its effectiveness in the real world too.

Here’s the lead researcher describing her team’s work,

Ensilication: success in animal trials from University of Bath on Vimeo.

A June 8, 2020 University of Bath press release (also on EurekAlert) fills in more details about the research,

In their latest study, published in the journal Scientific Reports, the researchers sent both ensilicated and regular samples of the tetanus vaccine from Bath to Newcastle by ordinary post (a journey of over 300 miles, which by post takes a day or two). When doses of the ensilicated vaccine were subsequently injected into mice, an immune response was triggered, showing the vaccine to be active. No immune response was detected in mice injected with unprotected doses of the vaccine, indicating the medicine had been damaged in transit.

Dr Asel Sartbaeva, who led the project from the University of Bath’s Department of Chemistry, said: “This is really exciting data because it shows us that ensilication preserves not just the structure of the vaccine proteins but also the function – the immunogenicity.”

“This project has focused on tetanus, which is part of the DTP (diphtheria, tetanus and pertussis) vaccine given to young children in three doses. Next, we will be working on developing a thermally-stable vaccine for diphtheria, and then pertussis. Eventually we want to create a silica cage for the whole DTP trivalent vaccine, so that every child in the world can be given DTP without having to rely on cold chain distribution.”

Cold chain distribution requires a vaccine to be refrigerated from the moment of manufacturing to the endpoint destination.

Silica is an inorganic, non-toxic material, and Dr Sartbaeva estimates that ensilicated vaccines could be used for humans within five to 15 years. She hopes the technology to silica-wrap proteins will eventually be adopted to store and transport all childhood vaccines, as well as other protein-based products, such as antibodies and enzymes.

“Ultimately, we want to make important medicines stable so they can be more widely available,” she said. “The aim is to eradicate vaccine-preventable diseases in low income countries by using thermally stable vaccines and cutting out dependence on cold chain.”

Currently, up to 50% of vaccine doses are discarded before use due to exposure to suboptimal temperatures. According to the World Health Organisation (WHO), 19.4 million infants did not receive routine life-saving vaccinations in 2018.

Here’s a link to and a citation for the paper,

Ensilicated tetanus antigen retains immunogenicity: in vivo study and time-resolved SAXS characterization by A. Doekhie, R. Dattani, Y-C. Chen, Y. Yang, A. Smith, A. P. Silve, F. Koumanov, S. A. Wells, K. J. Edler, K. J. Marchbank, J. M. H. van den Elsen & A. Sartbaeva. Scientific Reports volume 10, Article number: 9243 (2020) DOI: https://doi.org/10.1038/s41598-020-65876-3 Published 08 June 2020

This paper is open access.

Nanopatch update

I tend to lose track as a science gets closer to commercialization since the science news becomes business news and I almost never scan that sector. It’s been about two-and-a-half years since I featured research that suggested the Nanopatch provided more effective polio vaccination than the standard needle and syringe method in a December 20, 2017 post. The latest bits of news have an interesting timeline.

March 2020

Mark Kendall (Wikipedia entry) is the researcher behind the Nanopatch. He’s interviewed in a March 5, 2020 episode (about 20 mins.) in the Pioneers Series (bankrolled by Rolex [yes, the watch company]) on Monocle.com. Coincidentally or not, a new piece of research funded by Vaxxas (the nanopatch company founded by Mark Kendall; on the website you will find a ‘front’ page and a ‘Contact us’ page only) was announced in a March 17, 2020 news item on medical.net,

Vaxxas, a clinical-stage biotechnology company commercializing a novel vaccination platform, today announced the publication in the journal PLoS Medicine of groundbreaking clinical research indicating the broad immunological and commercial potential of Vaxxas’ novel high-density microarray patch (HD-MAP). Using influenza vaccine, the clinical study of Vaxxas’ HD-MAP demonstrated significantly enhanced immune response compared to vaccination by needle/syringe. This is the largest microarray patch clinical vaccine study ever performed.

“With vaccine coated onto Vaxxas HD-MAPs shown to be stable for up to a year at 40°C [emphasis mine], we can offer a truly differentiated platform with a global reach, particularly into low and middle income countries or in emergency use and pandemic situations,” said Angus Forster, Chief Development and Operations Officer of Vaxxas and lead author of the PLoS Medicine publication. “Vaxxas’ HD-MAP is readily fabricated by injection molding to produce a 10 x 10 mm square with more than 3,000 microprojections that are gamma-irradiated before aseptic dry application of vaccine to the HD-MAP’s tips. All elements of device design, as well as coating and QC, have been engineered to enable small, modular, aseptic lines to make millions of vaccine products per week.”

The PLoS publication reported results and analyses from a clinical study involving 210 clinical subjects [emphasis mine]. The clinical study was a two-part, randomized, partially double-blind, placebo-controlled trial conducted at a single Australian clinical site. The clinical study’s primary objective was to measure the safety and tolerability of A/Singapore/GP1908/2015 H1N1 (A/Sing) monovalent vaccine delivered by Vaxxas HD-MAP in comparison to an uncoated Vaxxas HD-MAP and IM [intramuscular] injection of a quadrivalent seasonal influenza vaccine (QIV) delivering approximately the same dose of A/Sing HA protein. Exploratory outcomes were: to evaluate the immune responses to HD-MAP application to the forearm with A/Sing at 4 dose levels in comparison to IM administration of A/Sing at the standard 15 μg HA per dose per strain, and to assess further measures of immune response through additional assays and assessment of the local skin response via punch biopsy of the HD-MAP application sites. Local skin response, serological, mucosal and cellular immune responses were assessed pre- and post-vaccination.

Here’s a link to and a citation for the latest ‘nanopatch’ paper,

Safety, tolerability, and immunogenicity of influenza vaccination with a high-density microarray patch: Results from a randomized, controlled phase I clinical trial by Angus H. Forster, Katey Witham, Alexandra C. I. Depelsenaire, Margaret Veitch, James W. Wells, Adam Wheatley, Melinda Pryor, Jason D. Lickliter, Barbara Francis, Steve Rockman, Jesse Bodle, Peter Treasure, Julian Hickling, Germain J. P. Fernando. PLoS Medicine (Public Library of Science) DOI: https://doi.org/10.1371/journal.pmed.1003024 Published: March 17, 2020

This is an open access paper.

May 2020

Two months later, Merck, an American multinational pharmaceutical company, showed some serious interest in the ‘nanopatch’. A May 28, 2020 article by Chris Newmarker for drugdeliverybusiness.com announces the news (Note: Links have been removed),

Merck has exercised its option to use Vaxxas‘ High Density Microarray Patch (HD-MAP) platform as a delivery platform for a vaccine candidate, the companies announced today [Thursday, May 28, 2020].

Also today, Vaxxas announced that German manufacturing equipment maker Harro Höfliger will help Vaxxas develop a high-throughput, aseptic manufacturing line to make vaccine products based on Vaxxas’ HD-MAP technology. Initial efforts will focus on having a pilot line operating in 2021 to support late-stage clinical studies — with a goal of single, aseptic-based lines being able to churn out 5 million vaccine products a week.

“A major challenge in commercializing microarray patches — like Vaxxas’ HD-MAP — for vaccination is the ability to manufacture at industrially-relevant scale, while meeting stringent sterility and quality standards. Our novel device design along with our innovative vaccine coating and quality verification technologies are an excellent fit for integration with Harro Höfliger’s aseptic process automation platforms. Adopting a modular approach, it will be possible to achieve output of tens-of-millions of vaccine-HD-MAP products per week,” Hoey [David L. Hoey, President and CEO of Vaxxas] said.

Vaxxas also claims that the patches can deliver vaccine more efficiently — a positive when people around the world are clamoring for a vaccine against COVID-19. The company points to a recent [March 17, 2020] clinical study in which their micropatch delivering a sixth of an influenza vaccine dose produced an immune response comparable to a full dose by intramuscular injection. A two-thirds dose by HD-MAP generated significantly faster and higher overall antibody responses.

As I noted earlier, this is an interesting timeline.

Final comment

In the end, what all of this means is that there may be more than one way to deal with vaccines and medicines that deteriorate all too quickly unless refrigerated. I wish all of these researchers the best.

Nanodevices show (from the inside) how cells change

Embryo cells + nanodevices from University of Bath on Vimeo.

Caption: Five mouse embryos, each containing a nanodevice that is 22-millionths of a metre long. The film begins when the embryos are 2-hours old and continues for 5 hours. Each embryo is about 100-millionths of a metre in diameter. Credit: Professor Tony Perry

Fascinating, yes? As I often watch before reading the caption, my first impression was of mysterious grey blobs moving around. Given the headline for the May 26, 2020 news item on ScienceDaily, I was expecting the squarish-shaped devices inside,

For the first time, scientists have introduced minuscule tracking devices directly into the interior of mammalian cells, giving an unprecedented peek into the processes that govern the beginning of development.

This work on one-cell embryos is set to shift our understanding of the mechanisms that underpin cellular behaviour in general, and may ultimately provide insights into what goes wrong in ageing and disease.

The research, led by Professor Tony Perry from the Department of Biology and Biochemistry at the University of Bath [UK], involved injecting a silicon-based nanodevice together with sperm into the egg cell of a mouse. The result was a healthy, fertilised egg containing a tracking device.

This image looks to have been enhanced with colour,

Fluorescence of an embryo containing a nanodevice. Courtesy: University of Bath

A May 25, 2020 University of Bath press release (also on EurekAlert but published May 26, 2020) provides more detail,

The tiny devices are a little like spiders, complete with eight highly flexible ‘legs’. The legs measure the ‘pulling and pushing’ forces exerted in the cell interior to a very high level of precision, thereby revealing the cellular forces at play and showing how intracellular matter rearranged itself over time.

The nanodevices are incredibly thin – similar to some of the cell’s structural components, and measuring 22 nanometres, making them approximately 100,000 times thinner than a pound coin. This means they have the flexibility to register the movement of the cell’s cytoplasm as the one-cell embryo embarks on its voyage towards becoming a two-cell embryo.
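As a rough arithmetic check of that comparison (assuming a pound coin is about 2.8 mm thick, the thickness of the current 12-sided coin):

$$\frac{2.8\ \text{mm}}{22\ \text{nm}} = \frac{2.8 \times 10^{6}\ \text{nm}}{22\ \text{nm}} \approx 1.3 \times 10^{5}$$

which is indeed on the order of 100,000.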

“This is the first glimpse of the physics of any cell on this scale from within,” said Professor Perry. “It’s the first time anyone has seen from the inside how cell material moves around and organises itself.”

WHY PROBE A CELL’S MECHANICAL BEHAVIOUR?

The activity within a cell determines how that cell functions, explains Professor Perry. “The behaviour of intracellular matter is probably as influential to cell behaviour as gene expression,” he said. Until now, however, this complex dance of cellular material has remained largely unstudied. As a result, scientists have been able to identify the elements that make up a cell, but not how the cell interior behaves as a whole.

“From studies in biology and embryology, we know about certain molecules and cellular phenomena, and we have woven this information into a reductionist narrative of how things work, but now this narrative is changing,” said Professor Perry. The narrative was written largely by biologists, who brought with them the questions and tools of biology. What was missing was physics. Physics asks about the forces driving a cell’s behaviour, and provides a top-down approach to finding the answer.

“We can now look at the cell as a whole, not just the nuts and bolts that make it.”

Mouse embryos were chosen for the study because of their relatively large size (they measure 100 microns, or 100-millionths of a metre, in diameter, compared to a regular cell which is only 10 microns [10-millionths of a metre] in diameter). This meant that inside each embryo, there was space for a tracking device.

The researchers made their measurements by examining video recordings taken through a microscope as the embryo developed. “Sometimes the devices were pitched and twisted by forces that were even greater than those inside muscle cells,” said Professor Perry. “At other times, the devices moved very little, showing the cell interior had become calm. There was nothing random about these processes – from the moment you have a one-cell embryo, everything is done in a predictable way. The physics is programmed.”

The results add to an emerging picture of biology that suggests material inside a living cell is not static, but instead changes its properties in a pre-ordained way as the cell performs its function or responds to the environment. The work may one day have implications for our understanding of how cells age or stop working as they should, which is what happens in disease.

The study is published this week in Nature Materials and involved a trans-disciplinary partnership between biologists, materials scientists and physicists based in the UK, Spain and the USA.

Here’s a link to and a citation for the paper,

Tracking intracellular forces and mechanical property changes in mouse one-cell embryo development by Marta Duch, Núria Torras, Maki Asami, Toru Suzuki, María Isabel Arjona, Rodrigo Gómez-Martínez, Matthew D. VerMilyea, Robert Castilla, José Antonio Plaza & Anthony C. F. Perry. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0685-9 Published 25 May 2020

This paper is behind a paywall.

Bloodless diabetes monitor enabled by nanotechnology

There have been some remarkable advances in the treatment of many diseases, diabetes being one of them. Of course, we can always make things better, and monitoring a diabetic patient’s glucose without having to draw blood is an improvement that may occur sooner rather than later, as an April 9, 2018 news item on Nanowerk suggests,

Scientists have created a non-invasive, adhesive patch, which promises the measurement of glucose levels through the skin without a finger-prick blood test, potentially removing the need for millions of diabetics to frequently carry out the painful and unpopular tests.

The patch does not pierce the skin; instead, it draws glucose out from fluid between cells across hair follicles, which are individually accessed via an array of miniature sensors using a small electric current. The glucose collects in tiny reservoirs and is measured. Readings can be taken every 10 to 15 minutes over several hours.

Crucially, because of the design of the array of sensors and reservoirs, the patch does not require calibration with a blood sample — meaning that finger prick blood tests are unnecessary.

The device can measure glucose levels without piercing the skin. Courtesy: University of Bath

An April 9, 2018 University of Bath press release, which originated the news item, expands on the theme,

Having established proof of the concept behind the device in a study published in Nature Nanotechnology, the research team from the University of Bath hopes that it can eventually become a low-cost, wearable sensor that sends regular, clinically relevant glucose measurements to the wearer’s phone or smartwatch wirelessly, alerting them when they may need to take action.

An important advantage of this device over others is that each miniature sensor of the array can operate on a small area over an individual hair follicle – this significantly reduces inter- and intra-skin variability in glucose extraction and increases the accuracy of the measurements taken such that calibration via a blood sample is not required.

The project is a multidisciplinary collaboration between scientists from the Departments of Physics, Pharmacy & Pharmacology, and Chemistry at the University of Bath.

Professor Richard Guy, from the Department of Pharmacy & Pharmacology, said: “A non-invasive – that is, needle-less – method to monitor blood sugar has proven a difficult goal to attain. The closest that has been achieved has required either at least a single-point calibration with a classic ‘finger-stick’, or the implantation of a pre-calibrated sensor via a single needle insertion. The monitor developed at Bath promises a truly calibration-free approach, an essential contribution in the fight to combat the ever-increasing global incidence of diabetes.”

Dr Adelina Ilie, from the Department of Physics, said: “The specific architecture of our array permits calibration-free operation, and it has the further benefit of allowing realisation with a variety of materials in combination. We utilised graphene as one of the components as it brings important advantages: specifically, it is strong, conductive, flexible, and potentially low-cost and environmentally friendly. In addition, our design can be implemented using high-throughput fabrication techniques like screen printing, which we hope will ultimately support a disposable, widely affordable device.”

In this study the team tested the patch on both pig skin, where they showed it could accurately track glucose levels across the range seen in diabetic human patients, and on healthy human volunteers, where again the patch was able to track blood sugar variations throughout the day.

The next steps include further refinement of the design of the patch to optimise the number of sensors in the array, to demonstrate full functionality over a 24-hour wear period, and to undertake a number of key clinical trials.

Diabetes is a serious public health problem which is increasing. The World Health Organization predicts the world-wide incidence of diabetes to rise from 171M in 2000 to 366M in 2030. In the UK, just under six per cent of adults have diabetes and the NHS spends around 10% of its budget on diabetes monitoring and treatments. Up to 50% of adults with diabetes are undiagnosed.

An effective, non-invasive way of monitoring blood glucose could both help diabetics, as well as those at risk of developing diabetes, make the right choices to either manage the disease well or reduce their risk of developing the condition. The work was funded by the Engineering and Physical Sciences Research Council (EPSRC), the Medical Research Council (MRC), and the Sir Halley Stewart Trust.

Here’s a link to and a citation for the paper,

Non-invasive, transdermal, path-selective and specific glucose monitoring via a graphene-based platform by Luca Lipani, Bertrand G. R. Dupont, Floriant Doungmene, Frank Marken, Rex M. Tyrrell, Richard H. Guy, & Adelina Ilie. Nature Nanotechnology (2018) doi:10.1038/s41565-018-0112-4 Published online: 09 April 2018

This paper is behind a paywall.

Training drugs

This summarizes some of what’s happening in nanomedicine and provides a plug (boost) for the University of Cambridge’s nanotechnology programmes (from a June 26, 2017 news item on Nanowerk),

Nanotechnology is creating new opportunities for fighting disease – from delivering drugs in smart packaging to nanobots powered by the world’s tiniest engines.

Chemotherapy benefits a great many patients but the side effects can be brutal.

When a patient is injected with an anti-cancer drug, the idea is that the molecules will seek out and destroy rogue tumour cells. However, relatively large amounts need to be administered to reach the target in high enough concentrations to be effective. As a result of this high drug concentration, healthy cells may be killed as well as cancer cells, leaving many patients weak, nauseated and vulnerable to infection.

One way that researchers are attempting to improve the safety and efficacy of drugs is to use a relatively new area of research known as nanotherapeutics to target drug delivery just to the cells that need it.

Professor Sir Mark Welland is Head of the Electrical Engineering Division at Cambridge. In recent years, his research has focused on nanotherapeutics, working in collaboration with clinicians and industry to develop better, safer drugs. He and his colleagues don’t design new drugs; instead, they design and build smart packaging for existing drugs.

The University of Cambridge has produced a video interview (referencing a 1966 movie ‘Fantastic Voyage‘ in its title) with Sir Mark Welland,

A June 23, 2017 University of Cambridge press release, which originated the news item, delves further into the topic of nanotherapeutics (nanomedicine) and nanomachines,

Nanotherapeutics come in many different configurations, but the easiest way to think about them is as small, benign particles filled with a drug. They can be injected in the same way as a normal drug, and are carried through the bloodstream to the target organ, tissue or cell. At this point, a change in the local environment, such as pH, or the use of light or ultrasound, causes the nanoparticles to release their cargo.

Nano-sized tools are increasingly being looked at for diagnosis, drug delivery and therapy. “There are a huge number of possibilities right now, and probably more to come, which is why there’s been so much interest,” says Welland. Using clever chemistry and engineering at the nanoscale, drugs can be ‘taught’ to behave like a Trojan horse, or to hold their fire until just the right moment, or to recognise the target they’re looking for.

“We always try to use techniques that can be scaled up – we avoid using expensive chemistries or expensive equipment, and we’ve been reasonably successful in that,” he adds. “By keeping costs down and using scalable techniques, we’ve got a far better chance of making a successful treatment for patients.”

In 2014, he and collaborators demonstrated that gold nanoparticles could be used to ‘smuggle’ chemotherapy drugs into cancer cells in glioblastoma multiforme, the most common and aggressive type of brain cancer in adults, which is notoriously difficult to treat. The team engineered nanostructures containing gold and cisplatin, a conventional chemotherapy drug. A coating on the particles made them attracted to tumour cells from glioblastoma patients, so that the nanostructures bound and were absorbed into the cancer cells.

Once inside, these nanostructures were exposed to radiotherapy. This caused the gold to release electrons that damaged the cancer cell’s DNA and its overall structure, enhancing the impact of the chemotherapy drug. The process was so effective that 20 days later, the cell culture showed no evidence of any revival, suggesting that the tumour cells had been destroyed.

While the technique is still several years away from use in humans, tests have begun in mice. Welland’s group is working with MedImmune, the biologics R&D arm of pharmaceutical company AstraZeneca, to study the stability of drugs and to design ways to deliver them more effectively using nanotechnology.

“One of the great advantages of working with MedImmune is they understand precisely what the requirements are for a drug to be approved. We would shut down lines of research where we thought it was never going to get to the point of approval by the regulators,” says Welland. “It’s important to be pragmatic about it so that only the approaches with the best chance of working in patients are taken forward.”

The researchers are also targeting diseases like tuberculosis (TB). With funding from the Rosetrees Trust, Welland and postdoctoral researcher Dr Íris da luz Batalha are working with Professor Andres Floto in the Department of Medicine to improve the efficacy of TB drugs.

Their solution has been to design and develop nontoxic, biodegradable polymers that can be ‘fused’ with TB drug molecules. As polymer molecules have a long, chain-like shape, drugs can be attached along the length of the polymer backbone, meaning that very large amounts of the drug can be loaded onto each polymer molecule. The polymers are stable in the bloodstream and release the drugs they carry when they reach the target cell. Inside the cell, the pH drops, which causes the polymer to release the drug.

In fact, the polymers worked so well for TB drugs that another of Welland’s postdoctoral researchers, Dr Myriam Ouberaï, has formed a start-up company, Spirea, which is raising funding to develop the polymers for use with oncology drugs. Ouberaï is hoping to establish a collaboration with a pharma company in the next two years.

“Designing these particles, loading them with drugs and making them clever so that they release their cargo in a controlled and precise way: it’s quite a technical challenge,” adds Welland. “The main reason I’m interested in the challenge is I want to see something working in the clinic – I want to see something working in patients.”

Could nanotechnology move beyond therapeutics to a time when nanomachines keep us healthy by patrolling, monitoring and repairing the body?

Nanomachines have long been a dream of scientists and public alike. But working out how to make them move has meant they’ve remained in the realm of science fiction.

But last year, Professor Jeremy Baumberg and colleagues in Cambridge and the University of Bath developed the world’s tiniest engine – just a few billionths of a metre [nanometre] in size. It’s biocompatible, cost-effective to manufacture, fast to respond and energy efficient.

The forces exerted by these ‘ANTs’ (for ‘actuating nano-transducers’) are nearly a hundred times larger than those for any known device, motor or muscle. To make them, tiny charged particles of gold, bound together with a temperature-responsive polymer gel, are heated with a laser. As the polymer coatings expel water from the gel and collapse, a large amount of elastic energy is stored in a fraction of a second. On cooling, the particles spring apart and release energy.

The researchers hope to use this ability of ANTs to produce very large forces relative to their weight to develop three-dimensional machines that swim, have pumps that take on fluid to sense the environment and are small enough to move around our bloodstream.

Working with Cambridge Enterprise, the University’s commercialisation arm, the team in Cambridge’s Nanophotonics Centre hopes to commercialise the technology for microfluidics bio-applications. The work is funded by the Engineering and Physical Sciences Research Council and the European Research Council.

“There’s a revolution happening in personalised healthcare, and for that we need sensors not just on the outside but on the inside,” explains Baumberg, who leads an interdisciplinary Strategic Research Network and Doctoral Training Centre focused on nanoscience and nanotechnology.

“Nanoscience is driving this. We are now building technology that allows us to even imagine these futures.”

I have featured Welland and his work here before and noted his penchant for wanting to insert nanodevices into humans as per this excerpt from an April 30, 2010 posting,

Getting back to the Cambridge University video, do go and watch it on the Nanowerk site. It is fun and very informative and approximately 17 mins. I noticed that they reused part of their Nokia morph animation (last mentioned on this blog here) and offered some thoughts from Professor Mark Welland, the team leader on that project. Interestingly, Welland was talking about yet another possibility. (Sometimes I think nano goes too far!) He was suggesting that we could have chips/devices in our brains that would allow us to think about phoning someone and an immediate connection would be made to that person. Bluntly—no. Just think what would happen if the marketers got access and I don’t even want to think what a person who suffers psychotic breaks (i.e., hearing voices) would do with even more input. Welland starts to talk at the 11 minute mark (I think). For an alternative take on the video and more details, visit Dexter Johnson’s blog, Nanoclast, for this posting. Hint, he likes the idea of a phone in the brain much better than I do.

I’m not sure what could have occasioned this latest press release and related video featuring Welland and nanotherapeutics other than guessing that it was a slow news period.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
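To make the co-occurrence idea concrete, here’s a minimal sketch in Python. (The toy sentence and the 2-word window are my own illustrative choices; GloVe’s real statistics come from billions of words and larger windows.)

```python
from collections import Counter

text = "the nurse and the teacher met the engineer".split()
window = 2  # count words as co-occurring when within 2 positions

pairs = Counter()
for i, word in enumerate(text):
    # Tally every word pair that falls inside the window.
    for j in range(i + 1, min(i + 1 + window, len(text))):
        pairs[tuple(sorted((word, text[j])))] += 1

print(pairs.most_common(3))  # the most strongly associated word pairs
```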

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
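The study’s underlying measure, which the paper calls the Word Embedding Association Test, boils down to comparing cosine similarities between a target word’s vector and two sets of attribute vectors. Here’s a minimal sketch; the three-dimensional vectors are made up purely for illustration, standing in for real trained embeddings.

```python
import numpy as np

def cos(a, b):
    # Cosine similarity: 1 = same direction, 0 = unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # Mean similarity of word w to attribute set A, minus to set B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

# Made-up vectors for illustration; real tests use trained GloVe embeddings.
vec = {
    "programmer": np.array([0.9, 0.1, 0.2]),
    "nurse":      np.array([0.1, 0.9, 0.3]),
    "man":        np.array([0.8, 0.2, 0.1]),
    "woman":      np.array([0.2, 0.8, 0.2]),
}

for target in ("programmer", "nurse"):
    s = association(vec[target], [vec["man"]], [vec["woman"]])
    print(f"{target}: {s:+.2f}")  # positive = closer to "man" than "woman"
```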

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: machine learning programs that naively process foreign languages can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug. 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Gold spring-shaped coils for detecting twisted molecules

An April 3, 2017 news item on ScienceDaily describes a technique that could improve nanorobotics and more,

University of Bath scientists have used gold spring-shaped coils 5,000 times thinner than human hairs with powerful lasers to enable the detection of twisted molecules, and the applications could improve pharmaceutical design, telecommunications and nanorobotics.

An April 3, 2017 University of Bath press release (also on EurekAlert), which originated the news item, provides more detail (Note: A link has been removed),

Molecules, including many pharmaceuticals, twist in certain ways and can exist in left or right ‘handed’ forms depending on how they twist. This twisting, called chirality, is crucial to understand because it changes the way a molecule behaves, for example within our bodies.

Scientists can study chiral molecules using particular laser light, which itself twists as it travels. Such studies get especially difficult for small amounts of molecules. This is where the minuscule gold springs can be helpful. Their shape twists the light and could better fit it to the molecules, making it easier to detect minute amounts.

Using some of the smallest springs ever created, the researchers from the University of Bath Department of Physics, working with colleagues from the Max Planck Institute for Intelligent Systems, examined how effective the gold springs could be at enhancing interactions between light and chiral molecules. They based their study on a colour-conversion method for light, known as Second Harmonic Generation (SHG), whereby the better the performance of the spring, the more red laser light converts into blue laser light.
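
For readers curious about the optics behind that colour conversion: in SHG, two pump photons combine into one photon at twice the frequency, so the emitted wavelength is half the pump wavelength, and the converted intensity grows as the square of the pump intensity. In symbols (the 800 nm figure below is a generic illustration, not a number from the paper),

$$\omega_{\mathrm{SHG}} = 2\,\omega_{\mathrm{pump}}, \qquad \lambda_{\mathrm{SHG}} = \frac{\lambda_{\mathrm{pump}}}{2}, \qquad I(2\omega) \propto I(\omega)^2$$

so an 800 nm (deep red) pump, for instance, would emerge as 400 nm (blue) light.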

They found that the springs were indeed very promising but that how well they performed depended on the direction they were facing.

Physics PhD student David Hooper, the first author of the study, said: "It is like using a kaleidoscope to look at a picture; the picture becomes distorted when you rotate the kaleidoscope. We need to minimise the distortion."

In order to reduce the distortions, the team is now working on ways to optimise the springs, which are known as chiral nanostructures.

“Closely observing the chirality of molecules has lots of potential applications, for example it could help improve the design and purity of pharmaceuticals and fine chemicals, help develop motion controls for nanorobotics and miniaturise components in telecommunications,” said Dr Ventsislav Valev who led the study and the University of Bath research team.

Gold spring-shaped coils help reveal information about chiral molecules. Credit: Ventsi Valev.

Here’s a link to and a citation for the paper,

Strong Rotational Anisotropies Affect Nonlinear Chiral Metamaterials by David C. Hooper, Andrew G. Mark, Christian Kuppe, Joel T. Collins, Peer Fischer, Ventsislav K. Valev. Advanced Materials. First published: 31 January 2017. DOI: 10.1002/adma.201605110

This is an open access paper.

Drip dry housing

This piece on new construction materials does have a nanotechnology aspect although it’s not made clear exactly how nanotechnology plays a role.

From a Dec. 28, 2016 news item on phys.org (Note: A link has been removed),

The construction industry is preparing to use textiles from the clothing and footwear industries. Gore-Tex-like membranes, which are usually found in weather-proof jackets and trekking shoes, are now being studied to build breathable, water-resistant walls. Tyvek is one such synthetic textile being used as a “raincoat” for homes.

You can find out more about Tyvek on the DuPont website.

A Dec. 21, 2016 press release by Chiara Cecchi for Youris (European Research Media Center), which originated the news item, proceeds with more about textile-type construction materials,

Camping tents, which have been used for ages to protect against wind, ultra-violet rays and rain, have also inspired the modern construction industry, or "buildtech sector". This new field of research focuses on different fibres (animal-based such as wool or silk, plant-based such as linen and cotton, and synthetic such as polyester and rayon) in order to develop technical, high-performance materials, thus improving the quality of construction, especially for buildings, dams, bridges, tunnels and roads. The appeal lies in the fibres' mechanical properties, such as lightness and strength, and their resistance to factors like creep and deterioration by chemicals and pollutants in the air or rain.

“Textiles play an important role in the modernisation of infrastructure and in sustainable buildings”, explains Andrea Bassi, professor at the Department of Civil and Environmental Engineering (DICA), Politecnico of Milan, “Nylon and fiberglass are mixed with traditional fibres to control thermal and acoustic insulation in walls, façades and roofs. Technological innovation in materials, which includes nanotechnologies [emphasis mine] combined with traditional textiles used in clothes, enables buildings and other constructions to be designed using textiles containing steel polyvinyl chloride (PVC) or ethylene tetrafluoroethylene (ETFE). This gives the materials new antibacterial, antifungal and antimycotic properties in addition to being antistatic, sound-absorbing and water-resistant”.

Rooflys is another example. In this case, coated black woven textiles are placed under the roof to protect roof insulation from mould. These building textiles have also been tested for fire resistance, nail sealability, water and vapour impermeability, wind and UV resistance.

Photo: Production line at the co-operative enterprise CAVAC Biomatériaux, France. Natural fibres processed into a continuous mat (biofib) – Martin Ansell, BRE CICM, University of Bath, UK

In Spain three researchers from the Technical University of Madrid (UPM) have developed a new panel made with textile waste. They claim that it can significantly enhance both the thermal and acoustic conditions of buildings, while reducing greenhouse gas emissions and the energy impact associated with the development of construction materials.

Besides textiles, innovative natural fibre composite materials are a parallel field of research on insulators that can preserve indoor air quality. These bio-based materials, such as straw and hemp, can reduce the incidence of mould growth because they breathe. "The breathability of materials refers to their ability to absorb and desorb moisture naturally," says expert Finlay White from Modcell, who contributed to the construction of what they claim are the world's first commercially available straw houses. "For example, highly insulated buildings with poor ventilation can build up high levels of moisture in the air. If the moisture meets a cool surface it will condense, producing mould, unless it is managed. Bio-based materials have the means to absorb moisture so that the risk of condensation is reduced, preventing the potential for mould growth."

The Bristol-based green technology firm [Modcell] is collaborating with the European Isobio project, which is testing bio-based insulators which perform 20% better than conventional materials. “This would lead to a 5% total energy reduction over the lifecycle of a building”, explains Martin Ansell, from BRE Centre for Innovative Construction Materials (BRE CICM), University of Bath, UK, another partner of the project.

“Costs would also be reduced. We are evaluating the thermal and hygroscopic properties of a range of plant-derived by-products including hemp, jute, rape and straw fibres plus corn cob residues. Advanced sol-gel coatings are being deposited on these fibres to optimise these properties in order to produce highly insulating and breathable construction materials”, Ansell concludes.
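
One back-of-envelope way to reconcile those two percentages (my own illustration, not a calculation from the Isobio project): if the heat flows that insulation governs account for roughly a quarter of a building's lifecycle energy use, then a 20% better insulator yields about a 5% overall saving,

$$0.20 \times 0.25 = 0.05 = 5\%$$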

You can find Modcell here.

Here's another image, which I believe is a closeup of the processed fibre shown above,

Production line at the co-operative enterprise CAVAC Biomatériaux, France. Natural fibres processed into a continuous mat (biofib) – Martin Ansell, BRE CICM, University of Bath, UK [Note: This caption appears to be a copy of the caption for the previous image]

Cardiac pacemakers: Korea’s in vivo demonstration of a self-powered one* and UK’s breath-based approach

As best I can determine, the last mention of a self-powered pacemaker and the like on this blog was in a Nov. 5, 2012 posting (Developing self-powered batteries for pacemakers). This latest news from The Korea Advanced Institute of Science and Technology (KAIST) is, I believe, the first time that such a device has been successfully tested in vivo. From a June 23, 2014 news item on ScienceDaily,

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring frequent replacements that may expose patients to the risks involved in repeated medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

A June 23, 2014 KAIST news release on EurekAlert, which originated the news item, provides more details,

The artificial cardiac pacemaker is widely acknowledged as medical equipment that is integrated into the human body to regulate the heartbeats through electrical stimulation to contract the cardiac muscles of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested output reached up to 8.2 V and 0.22 mA under bending and pushing motions, values high enough to directly stimulate the rat's heart.
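
To put those figures in perspective: if the peak voltage and peak current occurred simultaneously (a simplification, since the actual harvested power depends on the load and duty cycle), the instantaneous power would be on the order of a couple of milliwatts,

$$P = V \times I = 8.2\ \mathrm{V} \times 0.22\ \mathrm{mA} \approx 1.8\ \mathrm{mW}$$

modest, but, as the team showed, enough to stimulate a rat's heart.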

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”

This image illustrating a self-powered nanogenerator for a cardiac pacemaker has been provided by KAIST,

This picture shows that a self-powered cardiac pacemaker is enabled by a flexible piezoelectric energy harvester. Credit: KAIST

Here’s a link to and a citation for the paper,

Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester by Geon-Tae Hwang, Hyewon Park, Jeong-Ho Lee, SeKwon Oh, Kwi-Il Park, Myunghwan Byun, Hyelim Park, Gun Ahn, Chang Kyu Jeong, Kwangsoo No, HyukSang Kwon, Sang-Goo Lee, Boyoung Joung, and Keon Jae Lee. Advanced Materials. Article first published online: 17 APR 2014. DOI: 10.1002/adma.201400562

This paper is behind a paywall.

There was a May 15, 2014 KAIST news release on EurekAlert announcing this same piece of research but from a technical perspective,

The energy efficiency of KAIST's piezoelectric nanogenerator has increased almost 40-fold, a step closer to the commercialization of flexible energy harvesters that can supply power indefinitely to wearable, implantable electronic devices.

Nanogenerators are innovative self-powered energy harvesters that convert kinetic energy from vibrational and mechanical sources into electrical power, removing the need for external circuits or batteries in electronic devices. This innovation is vital to realizing sustainable energy generation in isolated, inaccessible, or indoor environments, and even in the human body.

Nanogenerators, flexible and lightweight energy harvesters built on plastic substrates, can scavenge energy from the extremely tiny movements of natural sources and the human body, such as wind, water flow, heartbeats, and the motion of the diaphragm during respiration, to generate electrical signals. The generators are not only self-powered, flexible devices but can also provide permanent power sources for implantable biomedical devices, including cardiac pacemakers and deep brain stimulators.

However, poor energy efficiency and a complex fabrication process have posed challenges to the commercialization of nanogenerators. Keon Jae Lee, Associate Professor of Materials Science and Engineering at KAIST, and his colleagues have recently proposed a solution by developing a robust technique to transfer a high-quality piezoelectric thin film from bulk sapphire substrates to plastic substrates using laser lift-off (LLO).

Applying the inorganic-based laser lift-off (LLO) process, the research team produced large-area PZT thin-film nanogenerators on flexible substrates (2 cm x 2 cm).

“We were able to convert a high-output performance of ~250 V from the slight mechanical deformation of a single thin plastic substrate. Such output power is just enough to turn on 100 LED lights,” Keon Jae Lee explained.

The self-powered nanogenerators can also work with finger and foot motions. For example, under the irregular, slight bending motions of a human finger, the measured current signals reached ~8.7 μA. In addition, the piezoelectric nanogenerator has a world-record power conversion efficiency, almost 40 times higher than previously reported results from similar research, addressing the drawbacks of fabrication complexity and low energy efficiency.
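
A rough reading of the "100 LED lights" claim (mine, not the release's): small LEDs typically drop about 2.5-3 V each, so roughly 100 of them wired in series would need on the order of the ~250 V the film generates,

$$100 \times 2.5\ \mathrm{V} = 250\ \mathrm{V}$$

while the microamp-scale currents involved would make this a brief flash rather than continuous lighting.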

Lee further commented,

“Building on this concept, it is highly expected that tiny mechanical motions, including human body movements of muscle contraction and relaxation, can be readily converted into electrical energy and, furthermore, acted as eternal power sources.”

The research team is currently studying a method to build three-dimensional stacking of flexible piezoelectric thin films to enhance output power, as well as conducting a clinical experiment with a flexible nanogenerator.

In addition to the 2012 posting I mentioned earlier, there was also this July 12, 2010 posting, which described research on harvesting biomechanical movement (heart beat, blood flow, muscle stretching, or even irregular vibration) at the Georgia (US) Institute of Technology, where the lead researcher observed,

…  Wang [Professor Zhong Lin Wang at Georgia Tech] tells Nanowerk. “However, the applications of the nanogenerators under in vivo and in vitro environments are distinct. Some crucial problems need to be addressed before using these devices in the human body, such as biocompatibility and toxicity.”

Bravo to the KAIST researchers for getting this research to the in vivo testing stage.

Meanwhile, researchers at the University of Bristol and the University of Bath have received funding for a new approach to cardiac pacemakers, one designed with the breath in mind. From a June 24, 2014 news item on Azonano,

Pacemaker research from the Universities of Bath and Bristol could revolutionise the lives of over 750,000 people who live with heart failure in the UK.

The British Heart Foundation (BHF) is awarding funding to researchers developing a new type of heart pacemaker that modulates its pulses to match breathing rates.

A June 23, 2014 University of Bristol press release, which originated the news item, provides some context,

During 2012-13 in England, more than 40,000 patients had a pacemaker fitted.

Currently, the pulses from pacemakers are set at a constant rate when fitted, which doesn't replicate the natural beating of the human heart.

The normal healthy variation in heart rate during breathing is lost in cardiovascular disease and is an indicator for sleep apnoea, cardiac arrhythmia, hypertension, heart failure and sudden cardiac death.

The device is then briefly described (from the press release),

The novel device being developed by scientists at the Universities of Bath and Bristol uses synthetic neural technology to restore this natural variation of heart rate with lung inflation, and is targeted towards patients with heart failure.

The device works by saving the heart energy, improving its pumping efficiency and enhancing blood flow to the heart muscle itself. Pre-clinical trials suggest the device gives a 25 per cent increase in pumping ability, which is expected to extend the life of patients with heart failure.
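
The press release doesn't describe the control algorithm, but the underlying idea, reinstating respiratory sinus arrhythmia, can be caricatured in a few lines of code: pace slightly faster during inhalation and slightly slower during exhalation. Everything below (names, rates, modulation depth) is hypothetical, and a real device would act on a measured lung-inflation signal rather than an idealised sinusoid,

```python
import math

def paced_rate_bpm(t, base_rate=60.0, breaths_per_min=15.0, depth=5.0):
    # Toy model: the pacing rate swings around base_rate in time with
    # breathing; `depth` is the size of the swing in beats per minute.
    inflation = math.sin(2 * math.pi * (breaths_per_min / 60.0) * t)  # -1..1
    return base_rate + depth * inflation

# Rate rises toward 65 bpm on inhalation, falls toward 55 bpm on exhalation:
for t in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print(f"t={t:.0f}s  paced rate = {paced_rate_bpm(t):.1f} bpm")
```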

One aim of the project is to miniaturise the pacemaker device to the size of a postage stamp and to develop an implant that could be used in humans within five years.

Dr Alain Nogaret, Senior Lecturer in Physics at the University of Bath, explained: "This is a multidisciplinary project with strong translational value. By combining fundamental science and nanotechnology we will be able to deliver a unique treatment for heart failure which is not currently addressed by mainstream cardiac rhythm management devices."

The research team has already patented the technology and is working with NHS consultants at the Bristol Heart Institute, the University of California at San Diego and the University of Auckland. [emphasis mine]

Professor Julian Paton, from the University of Bristol, added: "We've known for almost 80 years that the heart beat is modulated by breathing, but we have never fully understood the benefits this brings. The generous new funding from the BHF will allow us to reinstate this naturally occurring synchrony between heart rate and breathing and understand how it brings therapy to hearts that are failing."

Professor Jeremy Pearson, Associate Medical Director at the BHF, said: “This study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future.”

Given some current events (‘Tesla opens up its patents’, Mike Masnick’s June 12, 2014 posting on Techdirt), I wonder what the situation will be vis à vis patents by the time this device gets to market.

* ‘one’ added to title on Aug. 13, 2014.