Tag Archives: University of Bath

Training drugs

This summarizes some of what’s happening in nanomedicine and provides a plug (boost) for the University of Cambridge’s nanotechnology programmes (from a June 26, 2017 news item on Nanowerk),

Nanotechnology is creating new opportunities for fighting disease – from delivering drugs in smart packaging to nanobots powered by the world’s tiniest engines.

Chemotherapy benefits a great many patients but the side effects can be brutal.
When a patient is injected with an anti-cancer drug, the idea is that the molecules will seek out and destroy rogue tumour cells. However, relatively large amounts need to be administered to reach the target in high enough concentrations to be effective. As a result of this high drug concentration, healthy cells may be killed as well as cancer cells, leaving many patients weak, nauseated and vulnerable to infection.

One way that researchers are attempting to improve the safety and efficacy of drugs is to use a relatively new area of research known as nanotherapeutics to target drug delivery just to the cells that need it.

Professor Sir Mark Welland is Head of the Electrical Engineering Division at Cambridge. In recent years, his research has focused on nanotherapeutics, working in collaboration with clinicians and industry to develop better, safer drugs. He and his colleagues don’t design new drugs; instead, they design and build smart packaging for existing drugs.

The University of Cambridge has produced a video interview (referencing the 1966 movie ‘Fantastic Voyage’ in its title) with Sir Mark Welland,

A June 23, 2017 University of Cambridge press release, which originated the news item, delves further into the topic of nanotherapeutics (nanomedicine) and nanomachines,

Nanotherapeutics come in many different configurations, but the easiest way to think about them is as small, benign particles filled with a drug. They can be injected in the same way as a normal drug, and are carried through the bloodstream to the target organ, tissue or cell. At this point, a change in the local environment, such as pH, or the use of light or ultrasound, causes the nanoparticles to release their cargo.
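The pH-triggered release described above can be pictured with a toy model: as the local pH drops below some trigger value, the fraction of cargo released rises sharply. This is a minimal sketch with hypothetical parameter values, not figures from the Cambridge work.

```python
import math

def fraction_released(ph, trigger_ph=5.5, steepness=4.0):
    """Toy sigmoid model of pH-triggered release: the fraction of cargo
    a nanoparticle releases rises as local pH falls below a trigger
    value. All parameter values are hypothetical illustrations, not
    figures from the Cambridge work."""
    return 1.0 / (1.0 + math.exp(steepness * (ph - trigger_ph)))

# Nearly closed at bloodstream pH (~7.4), mostly open at the more
# acidic pH (~5.0) found inside endosomes and some tumour tissue.
print(round(fraction_released(7.4), 3))
print(round(fraction_released(5.0), 3))
```

The design goal the sigmoid captures is selectivity: negligible leakage while the particle circulates, and rapid release once it reaches the more acidic target environment.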

Nano-sized tools are increasingly being looked at for diagnosis, drug delivery and therapy. “There are a huge number of possibilities right now, and probably more to come, which is why there’s been so much interest,” says Welland. Using clever chemistry and engineering at the nanoscale, drugs can be ‘taught’ to behave like a Trojan horse, or to hold their fire until just the right moment, or to recognise the target they’re looking for.

“We always try to use techniques that can be scaled up – we avoid using expensive chemistries or expensive equipment, and we’ve been reasonably successful in that,” he adds. “By keeping costs down and using scalable techniques, we’ve got a far better chance of making a successful treatment for patients.”

In 2014, he and collaborators demonstrated that gold nanoparticles could be used to ‘smuggle’ chemotherapy drugs into cancer cells in glioblastoma multiforme, the most common and aggressive type of brain cancer in adults, which is notoriously difficult to treat. The team engineered nanostructures containing gold and cisplatin, a conventional chemotherapy drug. A coating on the particles made them attracted to tumour cells from glioblastoma patients, so that the nanostructures bound and were absorbed into the cancer cells.

Once inside, these nanostructures were exposed to radiotherapy. This caused the gold to release electrons that damaged the cancer cell’s DNA and its overall structure, enhancing the impact of the chemotherapy drug. The process was so effective that 20 days later, the cell culture showed no evidence of any revival, suggesting that the tumour cells had been destroyed.

While the technique is still several years away from use in humans, tests have begun in mice. Welland’s group is working with MedImmune, the biologics R&D arm of pharmaceutical company AstraZeneca, to study the stability of drugs and to design ways to deliver them more effectively using nanotechnology.

“One of the great advantages of working with MedImmune is they understand precisely what the requirements are for a drug to be approved. We would shut down lines of research where we thought it was never going to get to the point of approval by the regulators,” says Welland. “It’s important to be pragmatic about it so that only the approaches with the best chance of working in patients are taken forward.”

The researchers are also targeting diseases like tuberculosis (TB). With funding from the Rosetrees Trust, Welland and postdoctoral researcher Dr Íris da luz Batalha are working with Professor Andres Floto in the Department of Medicine to improve the efficacy of TB drugs.

Their solution has been to design and develop nontoxic, biodegradable polymers that can be ‘fused’ with TB drug molecules. As polymer molecules have a long, chain-like shape, drugs can be attached along the length of the polymer backbone, meaning that very large amounts of the drug can be loaded onto each polymer molecule. The polymers are stable in the bloodstream and release the drugs they carry only when they reach the target cell: inside the cell, the pH drops, which causes the polymer to release the drug.

In fact, the polymers worked so well for TB drugs that another of Welland’s postdoctoral researchers, Dr Myriam Ouberaï, has formed a start-up company, Spirea, which is raising funding to develop the polymers for use with oncology drugs. Ouberaï is hoping to establish a collaboration with a pharma company in the next two years.

“Designing these particles, loading them with drugs and making them clever so that they release their cargo in a controlled and precise way: it’s quite a technical challenge,” adds Welland. “The main reason I’m interested in the challenge is I want to see something working in the clinic – I want to see something working in patients.”

Could nanotechnology move beyond therapeutics to a time when nanomachines keep us healthy by patrolling, monitoring and repairing the body?

Nanomachines have long been a dream of scientists and public alike. But working out how to make them move has meant they’ve remained in the realm of science fiction.

But last year, Professor Jeremy Baumberg and colleagues at Cambridge and the University of Bath developed the world’s tiniest engine – just a few billionths of a metre [nanometre] in size. It’s biocompatible, cost-effective to manufacture, fast to respond and energy efficient.

The forces exerted by these ‘ANTs’ (for ‘actuating nano-transducers’) are nearly a hundred times larger than those for any known device, motor or muscle. To make them, tiny charged particles of gold, bound together with a temperature-responsive polymer gel, are heated with a laser. As the polymer coatings expel water from the gel and collapse, a large amount of elastic energy is stored in a fraction of a second. On cooling, the particles spring apart and release energy.
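One way to see why releasing stored elastic energy "in a fraction of a second" matters: power is energy divided by time, so the same stored energy delivered a thousand times faster yields a thousand times the power. The sketch below uses placeholder magnitudes, not measurements from the ANT experiments.

```python
def release_power(stiffness_n_per_m, compression_m, release_time_s):
    """Average power delivered when a spring's stored elastic energy,
    E = (1/2) k x^2, is released over a time t. The numbers used below
    are placeholder magnitudes, not data from the ANT work."""
    energy_j = 0.5 * stiffness_n_per_m * compression_m**2
    return energy_j / release_time_s

# Same stored energy, released a thousand times faster, delivers a
# thousand times the power -- the principle behind the ANTs' snap.
slow = release_power(1e-3, 1e-7, 1e-3)
fast = release_power(1e-3, 1e-7, 1e-6)
print(fast / slow)
```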

The researchers hope to use this ability of ANTs to produce very large forces relative to their weight to develop three-dimensional machines that swim, have pumps that take on fluid to sense the environment and are small enough to move around our bloodstream.

Working with Cambridge Enterprise, the University’s commercialisation arm, the team in Cambridge’s Nanophotonics Centre hopes to commercialise the technology for microfluidics bio-applications. The work is funded by the Engineering and Physical Sciences Research Council and the European Research Council.

“There’s a revolution happening in personalised healthcare, and for that we need sensors not just on the outside but on the inside,” explains Baumberg, who leads an interdisciplinary Strategic Research Network and Doctoral Training Centre focused on nanoscience and nanotechnology.

“Nanoscience is driving this. We are now building technology that allows us to even imagine these futures.”

I have featured Welland and his work here before and noted his penchant for wanting to insert nanodevices into humans as per this excerpt from an April 30, 2010 posting,
Getting back to the Cambridge University video, do go and watch it on the Nanowerk site. It is fun and very informative and approximately 17 mins. I noticed that they reused part of their Nokia morph animation (last mentioned on this blog here) and offered some thoughts from Professor Mark Welland, the team leader on that project. Interestingly, Welland was talking about yet another possibility. (Sometimes I think nano goes too far!) He was suggesting that we could have chips/devices in our brains that would allow us to think about phoning someone and an immediate connection would be made to that person. Bluntly—no. Just think what would happen if the marketers got access and I don’t even want to think what a person who suffers psychotic breaks (i.e., hearing voices) would do with even more input. Welland starts to talk at the 11 minute mark (I think). For an alternative take on the video and more details, visit Dexter Johnson’s blog, Nanoclast, for this posting. Hint, he likes the idea of a phone in the brain much better than I do.

I’m not sure what could have occasioned this latest press release and related video featuring Welland and nanotherapeutics other than guessing that it was a slow news period.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes.

A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”).

Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics.

In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment using a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than those words that seldom do.
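The co-occurrence statistic described above can be sketched in a few lines: count how often two words fall within a fixed window of each other. GloVe itself goes further (it weights pairs by distance and fits word vectors to the log-counts), so this is only the raw-counting step, with a made-up example sentence.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often two words appear within `window` tokens of each
    other -- the raw co-occurrence statistic GloVe-style models build
    on. (GloVe additionally weights pairs by distance and fits word
    vectors to the log-counts; this sketch only gathers counts.)"""
    counts = Counter()
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            pair = tuple(sorted((word, tokens[j])))  # unordered pair
            counts[pair] += 1
    return counts

tokens = "the nurse spoke to the doctor and the nurse smiled".split()
counts = cooccurrence_counts(tokens, window=4)
print(counts[("nurse", "the")])  # "nurse"/"the" co-occur 4 times within 4 words
```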

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
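The bias measurement itself works by comparing a word vector's similarity to one attribute set against its similarity to another, a scheme the paper calls a Word-Embedding Association Test (WEAT). Below is a minimal sketch of that differential-association idea using invented two-dimensional vectors; the actual study used high-dimensional GloVe embeddings trained on the 840-billion-word corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attr_a, attr_b):
    """WEAT-style differential association: the word's mean similarity
    to attribute set A minus its mean similarity to attribute set B.
    A positive value means the word leans toward A."""
    mean_a = sum(cosine(word, v) for v in attr_a) / len(attr_a)
    mean_b = sum(cosine(word, v) for v in attr_b) / len(attr_b)
    return mean_a - mean_b

# Toy 2-d vectors, invented for illustration only.
flower = [1.0, 0.1]
pleasant = [[0.9, 0.2], [1.0, 0.0]]
unpleasant = [[-0.8, 0.3], [-1.0, 0.1]]
score = association(flower, pleasant, unpleasant)
print(score > 0)  # True: "flower" associates with the pleasant set
```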

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet even this accurately captured occupational bias can end up having pernicious, sexist effects, for example when machine learning programs naively process foreign languages and produce gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug. 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Gold spring-shaped coils for detecting twisted molecules

An April 3, 2017 news item on ScienceDaily describes a technique that could improve nanorobotics and more,

University of Bath scientists have used gold spring-shaped coils 5,000 times thinner than a human hair, together with powerful lasers, to enable the detection of twisted molecules, and the applications could improve pharmaceutical design, telecommunications and nanorobotics.

An April 3, 2017 University of Bath press release (also on EurekAlert), which originated the news item, provides more detail (Note: A link has been removed),

Molecules, including many pharmaceuticals, twist in certain ways and can exist in left or right ‘handed’ forms depending on how they twist. This twisting, called chirality, is crucial to understand because it changes the way a molecule behaves, for example within our bodies.

Scientists can study chiral molecules using particular laser light, which itself twists as it travels. Such studies get especially difficult for small amounts of molecules. This is where the minuscule gold springs can be helpful. Their shape twists the light and could better fit it to the molecules, making it easier to detect minute amounts.

Using some of the smallest springs ever created, the researchers from the University of Bath Department of Physics, working with colleagues from the Max Planck Institute for Intelligent Systems, examined how effective the gold springs could be at enhancing interactions between light and chiral molecules. They based their study on a colour-conversion method for light, known as Second Harmonic Generation (SHG), whereby the better the performance of the spring, the more red laser light converts into blue laser light.
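The colour conversion in SHG follows directly from photon arithmetic: two pump photons combine into one photon at twice the frequency, which means half the wavelength. A quick sketch (the 800 nm pump value is illustrative; the paper's actual pump wavelength may differ):

```python
def shg_wavelength_nm(pump_nm):
    """In Second Harmonic Generation, two pump photons combine into one
    photon at twice the frequency, i.e. half the wavelength."""
    return pump_nm / 2.0

# An ~800 nm (red) pump would convert to ~400 nm (blue-violet) light.
# The pump wavelength here is an assumption, not taken from the paper.
print(shg_wavelength_nm(800))
```

This is why the researchers could use the amount of red-to-blue conversion as a direct readout of how well each spring performed.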

They found that the springs were indeed very promising but that how well they performed depended on the direction they were facing.

Physics PhD student David Hooper who is the first author of the study, said: “It is like using a kaleidoscope to look at a picture; the picture becomes distorted when you rotate the kaleidoscope. We need to minimise the distortion.”

In order to reduce the distortions, the team is now working on ways to optimise the springs, which are known as chiral nanostructures.

“Closely observing the chirality of molecules has lots of potential applications, for example it could help improve the design and purity of pharmaceuticals and fine chemicals, help develop motion controls for nanorobotics and miniaturise components in telecommunications,” said Dr Ventsislav Valev who led the study and the University of Bath research team.

Gold spring-shaped coils help reveal information about chiral molecules. Credit: Ventsi Valev.

Here’s a link to and a citation for the paper,

Strong Rotational Anisotropies Affect Nonlinear Chiral Metamaterials by David C. Hooper, Andrew G. Mark, Christian Kuppe, Joel T. Collins, Peer Fischer, Ventsislav K. Valev. Advanced Materials DOI: 10.1002/adma.201605110. First published: 31 January 2017

This is an open access paper.

Drip dry housing

This piece on new construction materials does have a nanotechnology aspect although it’s not made clear exactly how nanotechnology plays a role.

From a Dec. 28, 2016 news item on phys.org (Note: A link has been removed),

The construction industry is preparing to use textiles from the clothing and footwear industries. Gore-Tex-like membranes, which are usually found in weather-proof jackets and trekking shoes, are now being studied to build breathable, water-resistant walls. Tyvek is one such synthetic textile being used as a “raincoat” for homes.

You can find out more about Tyvek here on the DuPont website.

A Dec. 21, 2016 press release by Chiara Cecchi for Youris (European Research Media Center), which originated the news item, proceeds with more about textile-type construction materials,

Camping tents, which have been used for ages to protect against wind, ultra-violet rays and rain, have also inspired the modern construction industry, or “buildtech sector”. This new field of research focuses on the different fibres (animal-based such as wool or silk, plant-based such as linen and cotton and synthetic such as polyester and rayon) in order to develop technical or high-performance materials, thus improving the quality of construction, especially for buildings, dams, bridges, tunnels and roads. This is due to the fibres’ mechanical properties, such as lightness, strength, and also resistance to many factors like creep, deterioration by chemicals and pollutants in the air or rain.

“Textiles play an important role in the modernisation of infrastructure and in sustainable buildings”, explains Andrea Bassi, professor at the Department of Civil and Environmental Engineering (DICA), Politecnico of Milan, “Nylon and fiberglass are mixed with traditional fibres to control thermal and acoustic insulation in walls, façades and roofs. Technological innovation in materials, which includes nanotechnologies [emphasis mine] combined with traditional textiles used in clothes, enables buildings and other constructions to be designed using textiles containing steel polyvinyl chloride (PVC) or ethylene tetrafluoroethylene (ETFE). This gives the materials new antibacterial, antifungal and antimycotic properties in addition to being antistatic, sound-absorbing and water-resistant”.

Rooflys is another example. In this case, coated black woven textiles are placed under the roof to protect roof insulation from mould. These building textiles have also been tested for fire resistance, nail sealability, water and vapour impermeability, wind and UV resistance.

Photo: Production line at the co-operative enterprise CAVAC Biomatériaux, France. Natural fibres processed into a continuous mat (biofib) – Martin Ansell, BRE CICM, University of Bath, UK

In Spain three researchers from the Technical University of Madrid (UPM) have developed a new panel made with textile waste. They claim that it can significantly enhance both the thermal and acoustic conditions of buildings, while reducing greenhouse gas emissions and the energy impact associated with the development of construction materials.

Besides textiles, innovative natural fibre composite materials are a parallel field of research on insulators that can preserve indoor air quality. These bio-based materials, such as straw and hemp, can reduce the incidence of mould growth because they breathe. “The breathability of materials refers to their ability to absorb and desorb moisture naturally”, says expert Finlay White from Modcell, who contributed to the construction of what they claim are the world’s first commercially available straw houses. “For example, highly insulated buildings with poor ventilation can build up high levels of moisture in the air. If the moisture meets a cool surface it will condense, producing mould, unless it is managed. Bio-based materials have the means to absorb moisture so that the risk of condensation is reduced, preventing the potential for mould growth”.

The Bristol-based green technology firm [Modcell] is collaborating with the European Isobio project, which is testing bio-based insulators which perform 20% better than conventional materials. “This would lead to a 5% total energy reduction over the lifecycle of a building”, explains Martin Ansell, from BRE Centre for Innovative Construction Materials (BRE CICM), University of Bath, UK, another partner of the project.

“Costs would also be reduced. We are evaluating the thermal and hygroscopic properties of a range of plant-derived by-products including hemp, jute, rape and straw fibres plus corn cob residues. Advanced sol-gel coatings are being deposited on these fibres to optimise these properties in order to produce highly insulating and breathable construction materials”, Ansell concludes.

You can find Modcell here.

Here’s another image, which I believe is a closeup of the processed fibre shown in the above,

Production line at the co-operative enterprise CAVAC Biomatériaux, France. Natural fibres processed into a continuous mat (biofib) – Martin Ansell, BRE CICM, University of Bath, UK [Note: This caption appears to be a copy of the caption for the previous image]

Cardiac pacemakers: Korea’s in vivo demonstration of a self-powered one and UK’s breath-based approach

As best I can determine, the last mention of a self-powered pacemaker and the like on this blog was in a Nov. 5, 2012 posting (Developing self-powered batteries for pacemakers). This latest news from the Korea Advanced Institute of Science and Technology (KAIST) is, I believe, the first time that such a device has been successfully tested in vivo. From a June 23, 2014 news item on ScienceDaily,

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring frequent replacements, which may expose patients to the risks involved in repeat medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

A June 23, 2014 KAIST news release on EurekAlert, which originated the news item, provides more details,

The artificial cardiac pacemaker is widely acknowledged as medical equipment that is integrated into the human body to regulate the heartbeats through electrical stimulation to contract the cardiac muscles of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested energy reached up to 8.2 V and 0.22 mA by bending and pushing motions, which were high enough values to directly stimulate the rat’s heart.
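For a sense of scale, the quoted figures imply an output on the order of a couple of milliwatts, assuming the 8.2 V and 0.22 mA peaks occur simultaneously (a simplification; the release does not state this).

```python
def peak_power_mw(volts, milliamps):
    """Order-of-magnitude peak electrical power, P = V * I (result in
    mW when current is in mA). Treating the reported 8.2 V and 0.22 mA
    as simultaneous peaks is a simplifying assumption."""
    return volts * milliamps

print(round(peak_power_mw(8.2, 0.22), 3))  # ~1.8 mW
```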

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”

This image illustrating a self-powered nanogenerator for a cardiac pacemaker has been provided by KAIST,

This picture shows that a self-powered cardiac pacemaker is enabled by a flexible piezoelectric energy harvester. Credit: KAIST


Here’s a link to and a citation for the paper,

Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester by Geon-Tae Hwang, Hyewon Park, Jeong-Ho Lee, SeKwon Oh, Kwi-Il Park, Myunghwan Byun, Hyelim Park, Gun Ahn, Chang Kyu Jeong, Kwangsoo No, HyukSang Kwon, Sang-Goo Lee, Boyoung Joung, and Keon Jae Lee. Advanced Materials DOI: 10.1002/adma.201400562. Article first published online: 17 Apr 2014


This paper is behind a paywall.

There was a May 15, 2014 KAIST news release on EurekAlert announcing this same piece of research but from a technical perspective,

The energy efficiency of KAIST’s piezoelectric nanogenerator has increased by almost 40 times, one step closer toward the commercialization of flexible energy harvesters that can supply power infinitely to wearable, implantable electronic devices

NANOGENERATORS are innovative self-powered energy harvesters that convert kinetic energy created from vibrational and mechanical sources into electrical power, removing the need of external circuits or batteries for electronic devices. This innovation is vital in realizing sustainable energy generation in isolated, inaccessible, or indoor environments and even in the human body.

Nanogenerators, a flexible and lightweight energy harvester on a plastic substrate, can scavenge energy from the extremely tiny movements of natural resources and human body such as wind, water flow, heartbeats, and diaphragm and respiration activities to generate electrical signals. The generators are not only self-powered, flexible devices but also can provide permanent power sources to implantable biomedical devices, including cardiac pacemakers and deep brain stimulators.

However, poor energy efficiency and a complex fabrication process have posed challenges to the commercialization of nanogenerators. Keon Jae Lee, Associate Professor of Materials Science and Engineering at KAIST, and his colleagues have recently proposed a solution by developing a robust technique to transfer a high-quality piezoelectric thin film from bulk sapphire substrates to plastic substrates using laser lift-off (LLO).

Applying the inorganic-based laser lift-off (LLO) process, the research team produced large-area PZT thin-film nanogenerators on flexible substrates (2 cm x 2 cm).

“We were able to convert a high-output performance of ~250 V from the slight mechanical deformation of a single thin plastic substrate. Such output power is just enough to turn on 100 LED lights,” Keon Jae Lee explained.

The self-powered nanogenerators can also work with finger and foot motions. For example, under the irregular and slight bending motions of a human finger, the measured current signals reached ~8.7 μA. In addition, the piezoelectric nanogenerator has world-record power conversion efficiency, almost 40 times higher than previously reported similar research results, solving the drawbacks related to fabrication complexity and low energy efficiency.

Lee further commented,

“Building on this concept, it is highly expected that tiny mechanical motions, including human body movements of muscle contraction and relaxation, can be readily converted into electrical energy and, furthermore, acted as eternal power sources.”

The research team is currently studying a method to build three-dimensional stacking of flexible piezoelectric thin films to enhance output power, as well as conducting a clinical experiment with a flexible nanogenerator.

In addition to the 2012 posting I mentioned earlier, there was also this July 12, 2010 posting which described research on harvesting biomechanical movement (heart beat, blood flow, muscle stretching, or even irregular vibration) at the Georgia (US) Institute of Technology, where the lead researcher observed,

…  Wang [Professor Zhong Lin Wang at Georgia Tech] tells Nanowerk. “However, the applications of the nanogenerators under in vivo and in vitro environments are distinct. Some crucial problems need to be addressed before using these devices in the human body, such as biocompatibility and toxicity.”

Bravo to the KAIST researchers for getting this research to the in vivo testing stage.

Meanwhile, researchers at the University of Bristol and the University of Bath have received funding for a new approach to cardiac pacemakers, one designed with the breath in mind. From a June 24, 2014 news item on Azonano,

Pacemaker research from the Universities of Bath and Bristol could revolutionise the lives of over 750,000 people who live with heart failure in the UK.

The British Heart Foundation (BHF) is awarding funding to researchers developing a new type of heart pacemaker that modulates its pulses to match breathing rates.

A June 23, 2014 University of Bristol press release, which originated the news item, provides some context,

During 2012-13 in England, more than 40,000 patients had a pacemaker fitted.

Currently, the pulses from pacemakers are set at a constant rate when fitted which doesn’t replicate the natural beating of the human heart.

The normal healthy variation in heart rate during breathing is lost in cardiovascular disease and is an indicator for sleep apnoea, cardiac arrhythmia, hypertension, heart failure and sudden cardiac death.

The device is then briefly described (from the press release),

The novel device being developed by scientists at the Universities of Bath and Bristol uses synthetic neural technology to restore this natural variation of heart rate with lung inflation, and is targeted towards patients with heart failure.

The device works by saving the heart energy, improving its pumping efficiency and enhancing blood flow to the heart muscle itself.  Pre-clinical trials suggest the device gives a 25 per cent increase in the pumping ability, which is expected to extend the life of patients with heart failure.

One aim of the project is to miniaturise the pacemaker device to the size of a postage stamp and to develop an implant that could be used in humans within five years.

Dr Alain Nogaret, Senior Lecturer in Physics at the University of Bath, explained: “This is a multidisciplinary project with strong translational value. By combining fundamental science and nanotechnology we will be able to deliver a unique treatment for heart failure which is not currently addressed by mainstream cardiac rhythm management devices.”

The research team has already patented the technology and is working with NHS consultants at the Bristol Heart Institute, the University of California at San Diego and the University of Auckland. [emphasis mine]

Professor Julian Paton, from the University of Bristol, added: “We’ve known for almost 80 years that the heart beat is modulated by breathing but we have never fully understood the benefits this brings. The generous new funding from the BHF will allow us to reinstate this naturally occurring synchrony between heart rate and breathing and understand how it brings therapy to hearts that are failing.”

Professor Jeremy Pearson, Associate Medical Director at the BHF, said: “This study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future.”

Given some current events (‘Tesla opens up its patents’, Mike Masnick’s June 12, 2014 posting on Techdirt), I wonder what the situation will be vis à vis patents by the time this device gets to market.

* ‘one’ added to title on Aug. 13, 2014.

Richard Van Duyne solves mystery of Renoir’s red with surface-enhanced Raman spectroscopy (SERS) and Canadian scientists uncover forgeries

The only thing these two items have in common is that they are both concerned with visual art and with solving mysteries. The first item concerns research by Richard Van Duyne into the nature of the red paint used in one of Renoir’s paintings. A February 14, 2014 news item on Azonano describes some of the art conservation work that Van Duyne’s (nanoish) technology has made possible along with details about this most recent work,

Scientists are using powerful analytical and imaging tools to study artworks from all ages, delving deep below the surface to reveal the process and materials used by some of the world’s greatest artists.

Northwestern University chemist Richard P. Van Duyne, in collaboration with conservation scientists at the Art Institute of Chicago, has been using a scientific method he discovered nearly four decades ago to investigate masterpieces by Pierre-Auguste Renoir, Winslow Homer and Mary Cassatt.

Van Duyne recently identified the chemical components of paint, now partially faded, used by Renoir in his oil painting “Madame Léon Clapisson.” Van Duyne discovered the artist used carmine lake, a brilliant but light-sensitive red pigment, on this colorful canvas. The scientific investigation is the cornerstone of a new exhibition at the Art Institute of Chicago.

The Art Institute of Chicago’s exhibition, Renoir’s True Colors: Science Solves a Mystery, is being held from Feb. 12, 2014 to April 27, 2014. Here is an image of the Renoir painting in question and an image featuring the equipment being used,

Renoir-Madame-Leon-Clapisson. Art Institute of Chicago.

Renoir and surface-enhanced Raman spectroscopy (SERS). Art Institute of Chicago

The Feb. 13, 2014 Northwestern University news release (also on EurekAlert) by Megan Fellman, which originated the news item, gives a brief description of Van Duyne’s technique and its impact on conservation at the Art Institute of Chicago (Note: A link has been removed),

To see what the naked eye cannot see, Van Duyne used surface-enhanced Raman spectroscopy (SERS) to uncover details of Renoir’s paint. SERS, discovered by Van Duyne in 1977, is widely recognized as the most sensitive form of spectroscopy capable of identifying molecules.

Van Duyne and his colleagues’ detective work informed the production of a new digital visualization of the painting’s original colors by the Art Institute’s conservation department. The re-colorized reproduction and the original painting (presented in a case that offers 360-degree views) can be viewed side by side at the exhibition “Renoir’s True Colors: Science Solves a Mystery” through April 27 [2014] at the Art Institute.

I first wrote about Van Duyne’s technique in my wiki, The NanoTech Mysteries. From the Scientists get artful page (Note: A footnote was removed),

Richard Van Duyne, then a chemist at Northwestern University, developed the technique in 1977. Van Duyne’s technology, based on Raman spectroscopy, which has been around since the 1920s, is called ‘surface-enhanced Raman spectroscopy’ or SERS “[and] uses laser light and nanoparticles of precious metals to interact with molecules to show the chemical make-up of a particular dye.”

This next item is about forgery detection. A March 5, 2014 news release on EurekAlert describes the latest developments,

Gallery owners, private collectors, conservators, museums and art dealers face many problems in protecting and evaluating their collections such as determining origin, authenticity and discovery of forgery, as well as conservation issues. Today these problems are more accurately addressed through the application of modern, non-destructive, “hi-tech” techniques.

Dmitry Gavrilov, a PhD student in the Department of Physics at the University of Windsor (Windsor, Canada), along with Dr. Roman Gr. Maev, the Department of Physics Professor at the University of Windsor (Windsor, Canada) and Professor Dr. Darryl Almond of the University of Bath (Bath, UK) have been busy applying modern techniques to this age-old field. Infrared imaging, thermography, spectroscopy, UV fluorescence analysis, and acoustic microscopy are among the innovative approaches they are using to conduct pre-restoration analysis of works of art. Some fascinating results from their applications are published today in the Canadian Journal of Physics.

Since the early 1900s, using infrared imaging in various wave bands, scientists have been able to see what parts of artworks have been retouched or altered and sometimes even reveal the artist’s original sketches beneath layers of the paint. Thermography is a relatively new approach in art analysis that allows for deep subsurface investigation to find defects and past reparations. To a conservator these new methods are key in saving priceless works from further damage.

Gavrilov explains, “We applied new approaches in processing thermographic data, materials spectra data, and also the technique referred to as craquelure pattern analysis. The latter is based on advanced morphological processing of images of surface cracks. These cracks, caused by a number of factors such as structure of canvas, paints and binders used, can uncover important clues on the origins of a painting.”

“Air-coupled acoustic imaging and acoustic microscopy are other innovative approaches which have been developed and introduced into art analysis by our team under the supervision of Dr. Roman Gr. Maev. The technique has proven to be extremely sensitive to small layer detachments and allows for the detection of early stages of degradation. It is based on the same principles as medical and industrial ultrasound, namely, sending a sound wave to the sample and receiving it back.”

Spectroscopy is a technique that has been useful in the fight against art fraud. It can determine chemical composition of pigments and binders, which is essential information in the hands of an art specialist in revealing fakes. As described in the paper, “…according to the FBI, the value of art fraud, forgery and theft is up to $6 billion per year, which makes it the third most lucrative crime in the world after drug trafficking and the illegal weapons trade.”

One might wonder how these modern applications can be safe for delicate works of art when even flash photography is banned in art galleries. The authors discuss this and other safety concerns, describing both historic and modern-day implications of flash bulbs and exhibit illumination and scientific methods. As the paper concludes, the authors suggest that we can expect that the number of “hi-tech” techniques will only increase. In the future, art experts will likely have a variety of tools to help them solve many of the mysteries hiding beneath the layers.

Here’s a link to and a citation for the paper,

A review of imaging methods in analysis of works of art: Thermographic imaging method in art analysis by D. Gavrilov, R.Gr. Maev, and D.P. Almond. Canadian Journal of Physics, 10.1139/cjp-2013-0128

This paper is open access.

More questions about whether nanoparticles penetrate the skin

The research from the University of Bath about nanoparticles not penetrating the skin has drawn some interest. In addition to the mention here yesterday, in this Oct. 3, 2012 posting, there was this Oct. 2, 2012 posting by Dexter Johnson at the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website. I have excerpted the first and last paragraphs of Dexter’s posting, as they neatly present both the campaign to regulate the use of nanoparticles in cosmetics and the means by which science progresses, i.e., this study is not definitive,

For at least the last several years, NGO’s like Friends of the Earth (FoE) have been leveraging preliminary studies that indicated that nanoparticles might pass right through our skin to call for a complete moratorium on the use of any nanomaterials in sunscreens and cosmetics.

This latest UK research certainly won’t put this issue to rest. These experiments will need to be repeated and the results duplicated. That’s how science works. We should not be jumping to any conclusions that this research proves nanoparticles are absolutely safe any more than we should be jumping to the conclusion that they are a risk. Science cuts both ways.

Meanwhile a writer in Australia, Sarah Berry, takes a different approach in her Oct. 4, 2012 article for the Australian newspaper, the Sydney Morning Herald,

“Breakthrough” claims by cosmetic companies aren’t all they’re cracked up to be, according to a new study.

Nanotechnology — the science of super-small particles — has featured in cosmetic formulations since the late ’80s. Brands claim the technology delivers the “deep-penetrating action” of vitamins and other “active ingredients”.

You may think you know what direction Berry is going to pursue but she swerves,

Dr Gregory Crocetti, a nanotechnology campaigner with Friends of the Earth Australia, was scathing of the study. “To conclude that nanoparticles do not penetrate human skin based on a short-term study using excised pig skin is highly irresponsible,” he said. “This is yet another example of short-term, in-vitro research that doesn’t reflect real-life conditions like skin flexing, and the fact that penetration enhancers are used in most cosmetics. There is an urgent need for more long-term studies that actually reflect realistic conditions.”

Professor Brian Gulson, from Macquarie University in NSW, was similarly critical. The geochemist’s own study, from 2010 and in conjunction with CSIRO [Australia’s national science agency, the Commonwealth Scientific and Industrial Research Organization], found that small amounts of zinc particles in sunscreen “can pass through the protective layers of skin exposed to the sun in a real-life environment and be detected in blood and urine”.

Of the latest study he said: “Even though they used a sophisticated method of laser scanning confocal microscopy, their results only reinforced earlier studies [and had] no relevance to ‘real life’, especially to cosmetics, because they used polystyrene nanoparticles, and because they used excised (that is, ‘dead’) pig’s skin.”

I missed the fact that this study was an in vitro test, which is always less convincing than in vivo testing. In my Nov. 29, 2011 posting about some research into nano zinc oxide I mentioned in vitro vs. in vivo testing and Brian Gulson’s research,

I was able to access the study and while I’m not an expert by any means I did note that the study was ‘in vitro’, in this case, the cells were on slides when they were being studied. It’s impossible to draw hard and fast conclusions about what will happen in a body (human or otherwise) since there are other systems at work which are not present on a slide.

… here’s what Brian Gulson had to say about nano zinc oxide concentrations in his work and about a shortcoming in his study (from an Australian Broadcasting Corporation [ABC] Feb. 25, 2010 interview with Ashley Hall),

BRIAN GULSON: I guess the critical thing was that we didn’t find large amounts of it getting through the skin. The sunscreens contain 18 to 20 per cent zinc oxide usually and ours was about 20 per cent zinc. So that’s an awful lot of zinc you’re putting on the skin but we found tiny amounts in the blood of that tracer that we used.

ASHLEY HALL: So is it a significant amount?

BRIAN GULSON: No, no it’s really not.

ASHLEY HALL: But Brian Gulson is warning people who use a lot of sunscreen over an extended period that they could be at risk of having elevated levels of zinc.

BRIAN GULSON: Maybe with young children where you’re applying it seven days a week, it could be an issue but I’m more than happy to continue applying it to my grandchildren.

ASHLEY HALL: This study doesn’t shed any light on the question of whether the nano-particles themselves played a part in the zinc absorption.

BRIAN GULSON: That was the most critical thing. This isotope technique cannot tell whether or not it’s a zinc oxide nano-particle that got through skin or whether it’s just zinc that was dissolved up in contact with the skin and then forms zinc ions or so-called soluble ions. So that’s one major deficiency of our study.

Of course, I have a question about Gulson’s conclusion that very little of the nano zinc oxide was penetrating the skin based on blood and urine samples taken over the course of the study. Is it possible that after penetrating the skin it was stored in the cells instead of being eliminated?

It seems it’s not yet time to press the panic button since more research is needed for scientists to refine their understanding of nano zinc oxide and possible health effects from its use.

What I found most interesting in Berry’s article was the advice from the Friends of the Earth,

The contradictory claims about sunscreen can make it hard to know what to do this summer. Friends of the Earth Australia advise people to continue to be sun safe — seeking shade, wearing protective clothing, a hat and sunglasses and using broad spectrum SPF 30+ sunscreen.

This is a huge change in tone for that organization, which until now has been relentless in its anti nanosunscreen stance. Here they advise using a sunscreen and they don’t qualify it as they would usually by saying you should avoid nanosunscreens. I guess after the debacle earlier this year (mentioned in this Feb. 9, 2012 posting titled: Unintended consequences: Australians not using sunscreens to avoid nanoparticles?), they have reconsidered the intensity of their campaign.

For anyone interested in some of the history of the Friends of the Earth’s campaign and the NGO (non-governmental organization) which went against the prevailing sentiment on nanosunscreens, I suggest reading Dexter’s posting in full, and for those interested in the response from Australian scientists to this latest research, do read Berry’s article.

Can nanoparticles pass through the skin or not?

Researchers at the University of Bath (England) have found that nanoparticles do not penetrate the skin, according to the Oct. 1, 2012 news item on Nanowerk,

 Research by scientists at the University of Bath is challenging claims that nanoparticles in medicated and cosmetic creams are able to transport and deliver active ingredients deep inside the skin.
Nanoparticles, which are tiny particles that are less than one hundredth of the thickness of a human hair, are used in sunscreens and some cosmetic and pharmaceutical creams.
The Bath study (“Objective assessment of nanoparticle disposition in mammalian skin after topical exposure”) discovered that even the tiniest of nanoparticles did not penetrate the skin’s surface.
These findings have implications for pharmaceutical researchers and cosmetic companies that design skin creams with nanoparticles that are supposed to transport ingredients to the deeper layers of the skin. [emphasis mine]

Back in July 2012, a research team at Northwestern University claimed to have successfully delivered gene regulation technology using moisturizers to penetrate the skin barrier, excerpted from my July 4, 2012 posting,

The news item originated from a July 2, 2012 news release, by Marla Paul for Northwestern University, which provides more details about the researchers,

“The technology developed by my collaborator Chad Mirkin and his lab is incredibly exciting because it can break through the skin barrier,” said co-senior author Amy S. Paller, M.D., the Walter J. Hamlin Professor, chair of dermatology and professor of pediatrics at Northwestern University Feinberg School of Medicine. She also is director of Northwestern’s Skin Disease Research Center.

A co-senior author of the paper, Mirkin is the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering. He also is the director of Northwestern’s International Institute for Nanotechnology.

Interdisciplinary research is a hallmark of Northwestern. Paller and Mirkin said their work highlights the power of physician-scientists and scientists and engineers from other fields coming together to address a difficult medical problem.

“This all happened because of our world-class presence in both cancer nanotechnology and skin disease research,” Paller said. “In putting together the Skin Disease Research Center proposal, I reached out to Chad to see if his nanostructures might be applied to skin disease. We initially worked together through a pilot project of the center, and now the rest is history.”

There’s more about how the nanoscale structures make their way through the skin but it seems the team from the University of Bath are prepared to contradict this claim, from the University of Bath’s Oct. 1, 2012 news release (which originated the news item on Nanowerk),

Research by scientists at the University of Bath is challenging claims that nanoparticles in medicated and cosmetic creams are able to transport and deliver active ingredients deep inside the skin.

The Bath study discovered that even the tiniest of nanoparticles did not penetrate the skin’s surface.

These findings have implications for pharmaceutical researchers and cosmetic companies that design skin creams with nanoparticles that are supposed to transport ingredients to the deeper layers of the skin.

However the findings will also allay safety concerns that potentially harmful nanoparticles such as those used in sunscreens can be absorbed into the body.

The scientists used a technique called laser scanning confocal microscopy to examine whether fluorescently-tagged polystyrene beads, ranging in size from 20 to 200 nanometers, were absorbed into the skin. [emphasis mine]

They found that even when the skin sample had been partially compromised by stripping the outer layers with adhesive tape, the nanoparticles did not penetrate the skin’s outer layer, known as the stratum corneum.

I note they tested nanostructures no smaller than 20 nanometers, so it’s possible that nanostructures measuring less than 20 nanometers could penetrate skin, non? However, it seems the structures used to ‘penetrate’ the skin by the Northwestern University team are considerably larger (excerpted from my July 4, 2012 posting),

The topical delivery of gene regulation technology to cells deep in the skin is extremely difficult because of the formidable defenses skin provides for the body. The Northwestern approach takes advantage of drugs consisting of novel spherical arrangements of nucleic acids. These structures, each about 1,000 times smaller than the diameter of a human hair, have the unique ability to recruit and bind to natural proteins that allow them to traverse the skin and enter cells.

(Side note: a human hair is roughly 100 microns in diameter, so a structure 1,000 times smaller would measure about 100 nanometers, squarely in the nanometer range.) I gather it’s the use of the nucleic acids in specialized formulations by the Northwestern team which makes nanoparticle entry past the skin possible, in contrast with the work done by the University of Bath researchers, who tested nanoparticles in standard cosmetic formulations.
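The hair comparison can be checked with a quick unit conversion, assuming the commonly quoted figure of about 100 micrometres for the diameter of a human hair (an assumption of mine; real hairs span roughly 50 to 100 micrometres):

```python
# Unit check for the "1,000 times smaller than a human hair" claim.
# Assumption: hair diameter ~100 micrometres (real hairs vary,
# roughly 50-100 um; this uses the round upper figure).
hair_diameter_um = 100.0                 # micrometres
structure_um = hair_diameter_um / 1000   # "1,000 times smaller"
structure_nm = structure_um * 1000       # convert um -> nm

print(f"{structure_nm:.0f} nm")  # 100 nm, i.e. nanometre scale
```

Even with a thinner 50-micrometre hair, the result is 50 nanometres, so the press release’s comparison lands comfortably at the nanoscale either way.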

Hands, Waldo, and nano-scalpels

Hands were featured in Waldo (a 1943 short story by Robert Heinlein) and in Richard Feynman’s 1959 lecture “There’s Plenty of Room at the Bottom”, both of which were concerned with describing a field we now call nanotechnology. As I put it in my Aug. 17, 2009 posting,

Both of these texts feature the development of ‘smaller and smaller robotic hands to manipulate matter at the atomic and molecular levels’ and both of these have been cited as the birth of nanotechnology.

The details are a bit sketchy but it seems that scientists at the University of Bath (UK) have created a tiny (nanoscale) tool that looks like a hand. From the University of Bath’s Dec. 12, 2011 news release,

The lower picture shows the AFM probe with the nano-hand circled. The upper image is a vastly enlarged image of the nano-hand, showing the beckoning motion spotted by Dr Gordeev.

Here’s a little more about Dr. Gordeev’s observation from the Dec. 12, 2011 news item on Nanowerk,

Dr Sergey Gordeev, from the Department of Physics, was trying to create a nano-scalpel, a tool which can be used by biologists to look inside cells, when the process went wrong.

Dr Gordeev said: “I was amazed when I looked at the nano-scalpel and saw what appeared to be a beckoning hand.

“Nanoscience research is moving very fast at the moment, so maybe the nano-hand is trying to attract people and funders into this area.

The research group is using funding from Bath Ventures, an organisation which commercialises the results of the University’s research, and private company Diamond Hard Surfaces Ltd, to explore the use of hard coatings for nano-tools, making them more durable and suitable for delicate biological procedures.

I appreciate Dr. Gordeev’s whimsical notion that the hand might be trying to attract funding for this research group.