Tag Archives: University of Bath

Preventing warmed-up vaccines from becoming useless

One of the major problems with vaccines is that they need to be refrigerated. (The Nanopatch, which additionally wouldn’t require needles or syringes, is my favourite proposed solution and it comes from Australia.) This latest research into making vaccines longer-lasting is from the UK and takes a different approach to the problem.

From a June 8, 2020 news item on phys.org,

Vaccines are notoriously difficult to transport to remote or dangerous places, as they spoil when not refrigerated. Formulations are safe between 2°C and 8°C, but at other temperatures the proteins start to unravel, making the vaccines ineffective. As a result, millions of children around the world miss out on life-saving inoculations.

However, scientists have now found a way to prevent warmed-up vaccines from degrading. By encasing protein molecules in a silica shell, the structure remains intact even when heated to 100°C, or stored at room temperature for up to three years.

The technique for tailor-fitting a vaccine with a silica coat—known as ensilication—was developed by a Bath [University] team in collaboration with the University of Newcastle. This pioneering technology was seen to work in the lab two years ago, and now it has demonstrated its effectiveness in the real world too.

Here’s the lead researcher describing her team’s work,

Ensilication: success in animal trials from University of Bath on Vimeo.

A June 8, 2020 University of Bath press release (also on EurekAlert) fills in more details about the research,

In their latest study, published in the journal Scientific Reports, the researchers sent both ensilicated and regular samples of the tetanus vaccine from Bath to Newcastle by ordinary post (a journey of over 300 miles, which takes a day or two by post). When doses of the ensilicated vaccine were subsequently injected into mice, an immune response was triggered, showing the vaccine to be active. No immune response was detected in mice injected with unprotected doses of the vaccine, indicating the medicine had been damaged in transit.

Dr Asel Sartbaeva, who led the project from the University of Bath’s Department of Chemistry, said: “This is really exciting data because it shows us that ensilication preserves not just the structure of the vaccine proteins but also the function – the immunogenicity.”

“This project has focused on tetanus, which is part of the DTP (diphtheria, tetanus and pertussis) vaccine given to young children in three doses. Next, we will be working on developing a thermally-stable vaccine for diphtheria, and then pertussis. Eventually we want to create a silica cage for the whole DTP trivalent vaccine, so that every child in the world can be given DTP without having to rely on cold chain distribution.”

Cold chain distribution requires a vaccine to be refrigerated from the moment of manufacturing to the endpoint destination.

Silica is an inorganic, non-toxic material, and Dr Sartbaeva estimates that ensilicated vaccines could be used for humans within five to 15 years. She hopes the technology to silica-wrap proteins will eventually be adopted to store and transport all childhood vaccines, as well as other protein-based products, such as antibodies and enzymes.

“Ultimately, we want to make important medicines stable so they can be more widely available,” she said. “The aim is to eradicate vaccine-preventable diseases in low income countries by using thermally stable vaccines and cutting out dependence on cold chain.”

Currently, up to 50% of vaccine doses are discarded before use due to exposure to suboptimal temperatures. According to the World Health Organisation (WHO), 19.4 million infants did not receive routine life-saving vaccinations in 2018.

Here’s a link to and a citation for the paper,

Ensilicated tetanus antigen retains immunogenicity: in vivo study and time-resolved SAXS characterization by A. Doekhie, R. Dattani, Y-C. Chen, Y. Yang, A. Smith, A. P. Silve, F. Koumanov, S. A. Wells, K. J. Edler, K. J. Marchbank, J. M. H. van den Elsen & A. Sartbaeva. Scientific Reports volume 10, Article number: 9243 (2020) DOI: https://doi.org/10.1038/s41598-020-65876-3 Published 08 June 2020

This paper is open access.

Nanopatch update

I tend to lose track as a science gets closer to commercialization, since the science news becomes business news and I almost never scan that sector. It’s been about two-and-a-half years since I featured research suggesting that the Nanopatch provided more effective polio vaccination than the standard needle-and-syringe method in a December 20, 2017 post. The latest bits of news have an interesting timeline.

March 2020

Mark Kendall (Wikipedia entry) is the researcher behind the Nanopatch. He’s interviewed in a March 5, 2020 episode (about 20 mins.) of the Pioneers Series (bankrolled by Rolex [yes, the watch company]) on Monocle.com. Coincidentally or not, a new piece of research funded by Vaxxas (the nanopatch company founded by Mark Kendall; on the website you will find a ‘front’ page and a ‘Contact us’ page only) was announced in a March 17, 2020 news item on medical.net,

Vaxxas, a clinical-stage biotechnology company commercializing a novel vaccination platform, today announced the publication in the journal PLoS Medicine of groundbreaking clinical research indicating the broad immunological and commercial potential of Vaxxas’ novel high-density microarray patch (HD-MAP). Using influenza vaccine, the clinical study of Vaxxas’ HD-MAP demonstrated significantly enhanced immune response compared to vaccination by needle/syringe. This is the largest microarray patch clinical vaccine study ever performed.

“With vaccine coated onto Vaxxas HD-MAPs shown to be stable for up to a year at 40°C [emphasis mine], we can offer a truly differentiated platform with a global reach, particularly into low and middle income countries or in emergency use and pandemic situations,” said Angus Forster, Chief Development and Operations Officer of Vaxxas and lead author of the PLoS Medicine publication. “Vaxxas’ HD-MAP is readily fabricated by injection molding to produce a 10 x 10 mm square with more than 3,000 microprojections that are gamma-irradiated before aseptic dry application of vaccine to the HD-MAP’s tips. All elements of device design, as well as coating and QC, have been engineered to enable small, modular, aseptic lines to make millions of vaccine products per week.”

The PLoS publication reported results and analyses from a clinical study involving 210 clinical subjects [emphasis mine]. The clinical study was a two-part, randomized, partially double-blind, placebo-controlled trial conducted at a single Australian clinical site. The clinical study’s primary objective was to measure the safety and tolerability of A/Singapore/GP1908/2015 H1N1 (A/Sing) monovalent vaccine delivered by Vaxxas HD-MAP in comparison to an uncoated Vaxxas HD-MAP and IM [intramuscular] injection of a quadrivalent seasonal influenza vaccine (QIV) delivering approximately the same dose of A/Sing HA protein. Exploratory outcomes were: to evaluate the immune responses to HD-MAP application to the forearm with A/Sing at 4 dose levels in comparison to IM administration of A/Sing at the standard 15 μg HA per dose per strain, and to assess further measures of immune response through additional assays and assessment of the local skin response via punch biopsy of the HD-MAP application sites. Local skin response, serological, mucosal and cellular immune responses were assessed pre- and post-vaccination.

Here’s a link to and a citation for the latest ‘nanopatch’ paper,

Safety, tolerability, and immunogenicity of influenza vaccination with a high-density microarray patch: Results from a randomized, controlled phase I clinical trial by Angus H. Forster, Katey Witham, Alexandra C. I. Depelsenaire, Margaret Veitch, James W. Wells, Adam Wheatley, Melinda Pryor, Jason D. Lickliter, Barbara Francis, Steve Rockman, Jesse Bodle, Peter Treasure, Julian Hickling, Germain J. P. Fernando. DOI: https://doi.org/10.1371/journal.pmed.1003024 PLOS (Public Library of Science) Published: March 17, 2020

This is an open access paper.

May 2020

Two months later, Merck, an American multinational pharmaceutical company, showed some serious interest in the ‘nanopatch’. A May 28, 2020 article by Chris Newmarker for drugdeliverybusiness.com announces the news (Note: Links have been removed),

Merck has exercised its option to use Vaxxas’ High Density Microarray Patch (HD-MAP) platform as a delivery platform for a vaccine candidate, the companies announced today [Thursday, May 28, 2020].

Also today, Vaxxas announced that German manufacturing equipment maker Harro Höfliger will help Vaxxas develop a high-throughput, aseptic manufacturing line to make vaccine products based on Vaxxas’ HD-MAP technology. Initial efforts will focus on having a pilot line operating in 2021 to support late-stage clinical studies — with a goal of single, aseptic-based lines being able to churn out 5 million vaccine products a week.

“A major challenge in commercializing microarray patches — like Vaxxas’ HD-MAP — for vaccination is the ability to manufacture at industrially-relevant scale, while meeting stringent sterility and quality standards. Our novel device design along with our innovative vaccine coating and quality verification technologies are an excellent fit for integration with Harro Höfliger’s aseptic process automation platforms. Adopting a modular approach, it will be possible to achieve output of tens-of-millions of vaccine-HD-MAP products per week,” Hoey [David L. Hoey, President and CEO of Vaxxas] said.

Vaxxas also claims that the patches can deliver vaccine more efficiently — a positive when people around the world are clamoring for a vaccine against COVID-19. The company points to a recent [March 17, 2020] clinical study in which their micropatch delivering a sixth of an influenza vaccine dose produced an immune response comparable to a full dose by intramuscular injection. A two-thirds dose by HD-MAP generated significantly faster and higher overall antibody responses.
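The dose-sparing arithmetic implied by that result is easy to check against the trial description quoted earlier, which gives the standard intramuscular dose as 15 μg of HA per strain. A minimal sketch:

```python
# Dose-sparing arithmetic from the figures quoted in the trial description:
# the standard IM influenza dose is 15 ug of HA per strain, and the HD-MAP
# reportedly matched the full-dose immune response with one sixth of that.
standard_dose_ug = 15.0
matching_fraction = 1.0 / 6.0

hd_map_dose_ug = standard_dose_ug * matching_fraction
print(f"{hd_map_dose_ug} ug per strain")  # 2.5 ug per strain
```

In other words, a comparable response from 2.5 μg instead of 15 μg per strain, which is why dose sparing matters when vaccine supply is tight.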

As I noted earlier, this is an interesting timeline.

Final comment

In the end, what all of this means is that there may be more than one way to deal with vaccines and medicines that deteriorate all too quickly unless refrigerated. I wish all of these researchers the best.

Nanodevices show (from the inside) how cells change

Embryo cells + nanodevices from University of Bath on Vimeo.

Caption: Five mouse embryos, each containing a nanodevice that is 22-millionths of a metre long. The film begins when the embryos are 2-hours old and continues for 5 hours. Each embryo is about 100-millionths of a metre in diameter. Credit: Professor Tony Perry

Fascinating, yes? Since I often watch before reading the caption, my first impression was of mysterious grey blobs moving around. Given the headline for the May 26, 2020 news item on ScienceDaily, I was expecting the squarish-shaped devices inside,

For the first time, scientists have introduced minuscule tracking devices directly into the interior of mammalian cells, giving an unprecedented peek into the processes that govern the beginning of development.

This work on one-cell embryos is set to shift our understanding of the mechanisms that underpin cellular behaviour in general, and may ultimately provide insights into what goes wrong in ageing and disease.

The research, led by Professor Tony Perry from the Department of Biology and Biochemistry at the University of Bath [UK], involved injecting a silicon-based nanodevice together with sperm into the egg cell of a mouse. The result was a healthy, fertilised egg containing a tracking device.

This image looks to have been enhanced with colour,

Fluorescence of an embryo containing a nanodevice. Courtesy: University of Bath

A May 25, 2020 University of Bath press release (also on EurekAlert but published May 26, 2020) provides more detail about the work,

The tiny devices are a little like spiders, complete with eight highly flexible ‘legs’. The legs measure the ‘pulling and pushing’ forces exerted in the cell interior to a very high level of precision, thereby revealing the cellular forces at play and showing how intracellular matter rearranged itself over time.

The nanodevices are incredibly thin – similar to some of the cell’s structural components, and measuring 22 nanometres, making them approximately 100,000 times thinner than a pound coin. This means they have the flexibility to register the movement of the cell’s cytoplasm as the one-cell embryo embarks on its voyage towards becoming a two-cell embryo.
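The “100,000 times thinner than a pound coin” comparison checks out on the back of an envelope. The coin thickness below (about 2.8 mm for a UK pound coin) is my own assumption, not a figure from the press release:

```python
# Scale check: 22 nm device thickness vs an assumed 2.8 mm pound-coin thickness.
coin_thickness_m = 2.8e-3    # assumed thickness of a UK one-pound coin
device_thickness_m = 22e-9   # nanodevice thickness, from the press release

ratio = coin_thickness_m / device_thickness_m
print(round(ratio))  # on the order of the ~100,000x the press release quotes
```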

“This is the first glimpse of the physics of any cell on this scale from within,” said Professor Perry. “It’s the first time anyone has seen from the inside how cell material moves around and organises itself.”

WHY PROBE A CELL’S MECHANICAL BEHAVIOUR?

The activity within a cell determines how that cell functions, explains Professor Perry. “The behaviour of intracellular matter is probably as influential to cell behaviour as gene expression,” he said. Until now, however, this complex dance of cellular material has remained largely unstudied. As a result, scientists have been able to identify the elements that make up a cell, but not how the cell interior behaves as a whole.

“From studies in biology and embryology, we know about certain molecules and cellular phenomena, and we have woven this information into a reductionist narrative of how things work, but now this narrative is changing,” said Professor Perry. The narrative was written largely by biologists, who brought with them the questions and tools of biology. What was missing was physics. Physics asks about the forces driving a cell’s behaviour, and provides a top-down approach to finding the answer.

“We can now look at the cell as a whole, not just the nuts and bolts that make it.”

Mouse embryos were chosen for the study because of their relatively large size (they measure 100 microns, or 100-millionths of a metre, in diameter, compared to a regular cell which is only 10 microns [10-millionths of a metre] in diameter). This meant that inside each embryo, there was space for a tracking device.
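The size advantage is even bigger than the tenfold difference in diameter suggests, because interior volume scales with the cube of the diameter. A quick sketch using the figures above:

```python
# Why a mouse embryo has room for a tracking device:
# volume scales as the cube of the diameter.
embryo_diameter_um = 100  # mouse embryo diameter, from the press release
cell_diameter_um = 10     # typical cell diameter, from the press release

volume_ratio = (embryo_diameter_um / cell_diameter_um) ** 3
print(volume_ratio)  # 1000.0 -> roughly a thousand times the interior volume
```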

The researchers made their measurements by examining video recordings taken through a microscope as the embryo developed. “Sometimes the devices were pitched and twisted by forces that were even greater than those inside muscle cells,” said Professor Perry. “At other times, the devices moved very little, showing the cell interior had become calm. There was nothing random about these processes – from the moment you have a one-cell embryo, everything is done in a predictable way. The physics is programmed.”

The results add to an emerging picture of biology that suggests material inside a living cell is not static, but instead changes its properties in a pre-ordained way as the cell performs its function or responds to the environment. The work may one day have implications for our understanding of how cells age or stop working as they should, which is what happens in disease.

The study is published this week in Nature Materials and involved a trans-disciplinary partnership between biologists, materials scientists and physicists based in the UK, Spain and the USA.

Here’s a link to and a citation for the paper,

Tracking intracellular forces and mechanical property changes in mouse one-cell embryo development by Marta Duch, Núria Torras, Maki Asami, Toru Suzuki, María Isabel Arjona, Rodrigo Gómez-Martínez, Matthew D. VerMilyea, Robert Castilla, José Antonio Plaza & Anthony C. F. Perry. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0685-9 Published 25 May 2020

This paper is behind a paywall.

Bloodless diabetes monitor enabled by nanotechnology

There have been some remarkable advances in the treatment of many diseases, diabetes being one of them. Of course, we can always make things better, and monitoring a diabetic patient’s glucose without having to draw blood is an improvement that may occur sooner rather than later, as an April 9, 2018 news item on Nanowerk suggests,

Scientists have created a non-invasive, adhesive patch, which promises the measurement of glucose levels through the skin without a finger-prick blood test, potentially removing the need for millions of diabetics to frequently carry out the painful and unpopular tests.

The patch does not pierce the skin, instead it draws glucose out from fluid between cells across hair follicles, which are individually accessed via an array of miniature sensors using a small electric current. The glucose collects in tiny reservoirs and is measured. Readings can be taken every 10 to 15 minutes over several hours.

Crucially, because of the design of the array of sensors and reservoirs, the patch does not require calibration with a blood sample — meaning that finger prick blood tests are unnecessary.

The device can measure glucose levels without piercing the skin. Courtesy: University of Bath

An April 9, 2018 University of Bath press release, which originated the news item, expands on the theme,

Having established proof of the concept behind the device in a study published in Nature Nanotechnology, the research team from the University of Bath hopes that it can eventually become a low-cost, wearable sensor that sends regular, clinically relevant glucose measurements to the wearer’s phone or smartwatch wirelessly, alerting them when they may need to take action.

An important advantage of this device over others is that each miniature sensor of the array can operate on a small area over an individual hair follicle – this significantly reduces inter- and intra-skin variability in glucose extraction and increases the accuracy of the measurements taken such that calibration via a blood sample is not required.

The project is a multidisciplinary collaboration between scientists from the Departments of Physics, Pharmacy & Pharmacology, and Chemistry at the University of Bath.

Professor Richard Guy, from the Department of Pharmacy & Pharmacology, said: “A non-invasive – that is, needle-less – method to monitor blood sugar has proven a difficult goal to attain. The closest that has been achieved has required either at least a single-point calibration with a classic ‘finger-stick’, or the implantation of a pre-calibrated sensor via a single needle insertion. The monitor developed at Bath promises a truly calibration-free approach, an essential contribution in the fight to combat the ever-increasing global incidence of diabetes.”

Dr Adelina Ilie, from the Department of Physics, said: “The specific architecture of our array permits calibration-free operation, and it has the further benefit of allowing realisation with a variety of materials in combination. We utilised graphene as one of the components as it brings important advantages: specifically, it is strong, conductive, flexible, and potentially low-cost and environmentally friendly. In addition, our design can be implemented using high-throughput fabrication techniques like screen printing, which we hope will ultimately support a disposable, widely affordable device.”

In this study the team tested the patch on both pig skin, where they showed it could accurately track glucose levels across the range seen in diabetic human patients, and on healthy human volunteers, where again the patch was able to track blood sugar variations throughout the day.

The next steps include further refinement of the design of the patch to optimise the number of sensors in the array, to demonstrate full functionality over a 24-hour wear period, and to undertake a number of key clinical trials.

Diabetes is a serious public health problem which is increasing. The World Health Organization predicts the world-wide incidence of diabetes to rise from 171M in 2000 to 366M in 2030. In the UK, just under six per cent of adults have diabetes and the NHS spends around 10% of its budget on diabetes monitoring and treatments. Up to 50% of adults with diabetes are undiagnosed.

An effective, non-invasive way of monitoring blood glucose could both help diabetics, as well as those at risk of developing diabetes, make the right choices to either manage the disease well or reduce their risk of developing the condition. The work was funded by the Engineering and Physical Sciences Research Council (EPSRC), the Medical Research Council (MRC), and the Sir Halley Stewart Trust.

Here’s a link to and a citation for the paper,

Non-invasive, transdermal, path-selective and specific glucose monitoring via a graphene-based platform by Luca Lipani, Bertrand G. R. Dupont, Floriant Doungmene, Frank Marken, Rex M. Tyrrell, Richard H. Guy, & Adelina Ilie. Nature Nanotechnology (2018) doi:10.1038/s41565-018-0112-4 Published online: 09 April 2018

This paper is behind a paywall.

Training drugs

This summarizes some of what’s happening in nanomedicine and provides a plug (boost) for the University of Cambridge’s nanotechnology programmes (from a June 26, 2017 news item on Nanowerk),

Nanotechnology is creating new opportunities for fighting disease – from delivering drugs in smart packaging to nanobots powered by the world’s tiniest engines.

Chemotherapy benefits a great many patients but the side effects can be brutal.

When a patient is injected with an anti-cancer drug, the idea is that the molecules will seek out and destroy rogue tumour cells. However, relatively large amounts need to be administered to reach the target in high enough concentrations to be effective. As a result of this high drug concentration, healthy cells may be killed as well as cancer cells, leaving many patients weak, nauseated and vulnerable to infection.

One way that researchers are attempting to improve the safety and efficacy of drugs is to use a relatively new area of research known as nanotherapeutics to target drug delivery just to the cells that need it.

Professor Sir Mark Welland is Head of the Electrical Engineering Division at Cambridge. In recent years, his research has focused on nanotherapeutics, working in collaboration with clinicians and industry to develop better, safer drugs. He and his colleagues don’t design new drugs; instead, they design and build smart packaging for existing drugs.

The University of Cambridge has produced a video interview (referencing the 1966 movie ‘Fantastic Voyage’ in its title) with Sir Mark Welland,

A June 23, 2017 University of Cambridge press release, which originated the news item, delves further into the topic of nanotherapeutics (nanomedicine) and nanomachines,

Nanotherapeutics come in many different configurations, but the easiest way to think about them is as small, benign particles filled with a drug. They can be injected in the same way as a normal drug, and are carried through the bloodstream to the target organ, tissue or cell. At this point, a change in the local environment, such as pH, or the use of light or ultrasound, causes the nanoparticles to release their cargo.

Nano-sized tools are increasingly being looked at for diagnosis, drug delivery and therapy. “There are a huge number of possibilities right now, and probably more to come, which is why there’s been so much interest,” says Welland. Using clever chemistry and engineering at the nanoscale, drugs can be ‘taught’ to behave like a Trojan horse, or to hold their fire until just the right moment, or to recognise the target they’re looking for.

“We always try to use techniques that can be scaled up – we avoid using expensive chemistries or expensive equipment, and we’ve been reasonably successful in that,” he adds. “By keeping costs down and using scalable techniques, we’ve got a far better chance of making a successful treatment for patients.”

In 2014, he and collaborators demonstrated that gold nanoparticles could be used to ‘smuggle’ chemotherapy drugs into cancer cells in glioblastoma multiforme, the most common and aggressive type of brain cancer in adults, which is notoriously difficult to treat. The team engineered nanostructures containing gold and cisplatin, a conventional chemotherapy drug. A coating on the particles made them attracted to tumour cells from glioblastoma patients, so that the nanostructures bound and were absorbed into the cancer cells.

Once inside, these nanostructures were exposed to radiotherapy. This caused the gold to release electrons that damaged the cancer cell’s DNA and its overall structure, enhancing the impact of the chemotherapy drug. The process was so effective that 20 days later, the cell culture showed no evidence of any revival, suggesting that the tumour cells had been destroyed.

While the technique is still several years away from use in humans, tests have begun in mice. Welland’s group is working with MedImmune, the biologics R&D arm of pharmaceutical company AstraZeneca, to study the stability of drugs and to design ways to deliver them more effectively using nanotechnology.

“One of the great advantages of working with MedImmune is they understand precisely what the requirements are for a drug to be approved. We would shut down lines of research where we thought it was never going to get to the point of approval by the regulators,” says Welland. “It’s important to be pragmatic about it so that only the approaches with the best chance of working in patients are taken forward.”

The researchers are also targeting diseases like tuberculosis (TB). With funding from the Rosetrees Trust, Welland and postdoctoral researcher Dr Íris da luz Batalha are working with Professor Andres Floto in the Department of Medicine to improve the efficacy of TB drugs.

Their solution has been to design and develop nontoxic, biodegradable polymers that can be ‘fused’ with TB drug molecules. As polymer molecules have a long, chain-like shape, drugs can be attached along the length of the polymer backbone, meaning that very large amounts of the drug can be loaded onto each polymer molecule. The polymers are stable in the bloodstream and release the drugs they carry when they reach the target cell. Inside the cell, the pH drops, which causes the polymer to release the drug.

In fact, the polymers worked so well for TB drugs that another of Welland’s postdoctoral researchers, Dr Myriam Ouberaï, has formed a start-up company, Spirea, which is raising funding to develop the polymers for use with oncology drugs. Ouberaï is hoping to establish a collaboration with a pharma company in the next two years.

“Designing these particles, loading them with drugs and making them clever so that they release their cargo in a controlled and precise way: it’s quite a technical challenge,” adds Welland. “The main reason I’m interested in the challenge is I want to see something working in the clinic – I want to see something working in patients.”

Could nanotechnology move beyond therapeutics to a time when nanomachines keep us healthy by patrolling, monitoring and repairing the body?

Nanomachines have long been a dream of scientists and public alike. But working out how to make them move has meant they’ve remained in the realm of science fiction.

But last year, Professor Jeremy Baumberg and colleagues in Cambridge and the University of Bath developed the world’s tiniest engine – just a few billionths of a metre [nanometre] in size. It’s biocompatible, cost-effective to manufacture, fast to respond and energy efficient.

The forces exerted by these ‘ANTs’ (for ‘actuating nano-transducers’) are nearly a hundred times larger than those for any known device, motor or muscle. To make them, tiny charged particles of gold, bound together with a temperature-responsive polymer gel, are heated with a laser. As the polymer coatings expel water from the gel and collapse, a large amount of elastic energy is stored in a fraction of a second. On cooling, the particles spring apart and release energy.

The researchers hope to use this ability of ANTs to produce very large forces relative to their weight to develop three-dimensional machines that swim, have pumps that take on fluid to sense the environment and are small enough to move around our bloodstream.

Working with Cambridge Enterprise, the University’s commercialisation arm, the team in Cambridge’s Nanophotonics Centre hopes to commercialise the technology for microfluidics bio-applications. The work is funded by the Engineering and Physical Sciences Research Council and the European Research Council.

“There’s a revolution happening in personalised healthcare, and for that we need sensors not just on the outside but on the inside,” explains Baumberg, who leads an interdisciplinary Strategic Research Network and Doctoral Training Centre focused on nanoscience and nanotechnology.

“Nanoscience is driving this. We are now building technology that allows us to even imagine these futures.”

I have featured Welland and his work here before and noted his penchant for wanting to insert nanodevices into humans as per this excerpt from an April 30, 2010 posting,

Getting back to the Cambridge University video, do go and watch it on the Nanowerk site. It is fun and very informative and approximately 17 mins. I noticed that they reused part of their Nokia morph animation (last mentioned on this blog here) and offered some thoughts from Professor Mark Welland, the team leader on that project. Interestingly, Welland was talking about yet another possibility. (Sometimes I think nano goes too far!) He was suggesting that we could have chips/devices in our brains that would allow us to think about phoning someone and an immediate connection would be made to that person. Bluntly—no. Just think what would happen if the marketers got access and I don’t even want to think what a person who suffers psychotic breaks (i.e., hearing voices) would do with even more input. Welland starts to talk at the 11 minute mark (I think). For an alternative take on the video and more details, visit Dexter Johnson’s blog, Nanoclast, for this posting. Hint, he likes the idea of a phone in the brain much better than I do.

I’m not sure what could have occasioned this latest press release and related video featuring Welland and nanotherapeutics other than guessing that it was a slow news period.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.
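The association measure Caliskan and colleagues built for embeddings (published as the Word Embedding Association Test) can be sketched in a few lines. The toy two-dimensional vectors below are illustrative stand-ins for real word embeddings, not data from the study; only the shape of the statistic follows the paper:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: the standard closeness measure for word embeddings.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): how much more strongly w associates with attribute set A than B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_statistic(X, Y, A, B):
    # Positive when target set X leans toward attributes A and Y leans toward B.
    return sum(association(x, A, B) for x in X) - sum(association(y, A, B) for y in Y)

# Toy embeddings: 'flower' words sit near 'pleasant' words,
# 'insect' words near 'unpleasant' words (illustrative vectors only).
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
unpleasant = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]
flowers    = [np.array([0.95, 0.15]), np.array([0.85, 0.25])]
insects    = [np.array([0.15, 0.95]), np.array([0.25, 0.85])]

print(weat_statistic(flowers, insects, pleasant, unpleasant) > 0)  # True: bias detected
```

The study applied this kind of statistic at scale, analyzing roughly 2.2 million words of association data rather than the handful of toy vectors above.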

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
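To make the idea of co-occurrence statistics concrete, here is a minimal sketch of the kind of raw counts such an algorithm builds on. The toy corpus, the unweighted symmetric window, and the function name are all my own illustrative assumptions; the real GloVe pipeline weights pairs by distance and runs over billions of words.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each unordered word pair appears within
    `window` tokens of each other (a toy stand-in for the
    co-occurrence statistics behind word embeddings)."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # look ahead up to (window - 1) tokens; pairs are stored sorted
        for j in range(i + 1, min(i + window, len(tokens))):
            counts[tuple(sorted((word, tokens[j])))] += 1
    return counts

corpus = "the red rose and the white daisy grew near the garden wall".split()
counts = cooccurrence_counts(corpus, window=4)
print(counts[("rose", "the")])  # → 2
```

Words that recur in the same neighbourhoods accumulate higher counts, and it is from statistics like these that an embedding model derives its notion of "association."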

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.

In the results, innocent, inoffensive biases, like a preference for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad patterns of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.
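The measurement itself boils down to comparing how close a target word's vector sits to two sets of attribute-word vectors. Below is a minimal sketch of that association score using cosine similarity; the two-dimensional vectors are invented purely for illustration (real embeddings have hundreds of dimensions), and the function names are my own.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    Positive: the word leans toward A; negative: toward B."""
    mean_a = sum(cosine(word_vec, v) for v in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, v) for v in attr_b) / len(attr_b)
    return mean_a - mean_b

# hypothetical toy embeddings, invented for this example only
flower = [0.9, 0.1]
insect = [0.1, 0.9]
pleasant = [[1.0, 0.0], [0.8, 0.2]]
unpleasant = [[0.0, 1.0], [0.2, 0.8]]

print(association(flower, pleasant, unpleasant) > 0)  # flower leans "pleasant"
print(association(insect, pleasant, unpleasant) < 0)  # insect leans "unpleasant"
```

In the published work this differential association is aggregated over whole word sets to yield an effect size, replacing the human reaction times of the original Implicit Association Test.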

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet even this accurately learned bias about occupations can end up having pernicious, sexist effects. An example: machine learning programs that naively process foreign languages can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral third-person pronoun, “o.” Plugged into the well-known online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Gold spring-shaped coils for detecting twisted molecules

An April 3, 2017 news item on ScienceDaily describes a technique that could improve nanorobotics and more,

University of Bath scientists have used gold spring-shaped coils 5,000 times thinner than human hairs with powerful lasers to enable the detection of twisted molecules, and the applications could improve pharmaceutical design, telecommunications and nanorobotics.

An April 3, 2017 University of Bath press release (also on EurekAlert), which originated the news item, provides more detail (Note: A link has been removed),

Molecules, including many pharmaceuticals, twist in certain ways and can exist in left or right ‘handed’ forms depending on how they twist. This twisting, called chirality, is crucial to understand because it changes the way a molecule behaves, for example within our bodies.

Scientists can study chiral molecules using particular laser light, which itself twists as it travels. Such studies get especially difficult for small amounts of molecules. This is where the minuscule gold springs can be helpful. Their shape twists the light and could better fit it to the molecules, making it easier to detect minute amounts.

Using some of the smallest springs ever created, the researchers from the University of Bath Department of Physics, working with colleagues from the Max Planck Institute for Intelligent Systems, examined how effective the gold springs could be at enhancing interactions between light and chiral molecules. They based their study on a colour-conversion method for light, known as Second Harmonic Generation (SHG), whereby the better the performance of the spring, the more red laser light converts into blue laser light.

They found that the springs were indeed very promising but that how well they performed depended on the direction they were facing.

Physics PhD student David Hooper who is the first author of the study, said: “It is like using a kaleidoscope to look at a picture; the picture becomes distorted when you rotate the kaleidoscope. We need to minimise the distortion.”

In order to reduce the distortions, the team is now working on ways to optimise the springs, which are known as chiral nanostructures.

“Closely observing the chirality of molecules has lots of potential applications, for example it could help improve the design and purity of pharmaceuticals and fine chemicals, help develop motion controls for nanorobotics and miniaturise components in telecommunications,” said Dr Ventsislav Valev who led the study and the University of Bath research team.

Gold spring-shaped coils help reveal information about chiral molecules. Credit: Ventsi Valev.

Here’s a link to and a citation for the paper,

Strong Rotational Anisotropies Affect Nonlinear Chiral Metamaterials by David C. Hooper, Andrew G. Mark, Christian Kuppe, Joel T. Collins, Peer Fischer, Ventsislav K. Valev. Advanced Materials DOI: 10.1002/adma.201605110 First published: 31 January 2017

This is an open access paper.

Drip dry housing

This piece on new construction materials does have a nanotechnology aspect, although it’s not made clear exactly how nanotechnology plays a role.

From a Dec. 28, 2016 news item on phys.org (Note: A link has been removed),

The construction industry is preparing to use textiles from the clothing and footwear industries. Gore-Tex-like membranes, which are usually found in weather-proof jackets and trekking shoes, are now being studied to build breathable, water-resistant walls. Tyvek is one such synthetic textile being used as a “raincoat” for homes.

You can find out more about Tyvek on the DuPont website.

A Dec. 21, 2016 press release by Chiara Cecchi for Youris (European Research Media Center), which originated the news item, proceeds with more about textile-type construction materials,

Camping tents, which have been used for ages to protect against wind, ultra-violet rays and rain, have also inspired the modern construction industry, or “buildtech sector”. This new field of research focuses on the different fibres (animal-based such as wool or silk, plant-based such as linen and cotton and synthetic such as polyester and rayon) in order to develop technical or high-performance materials, thus improving the quality of construction, especially for buildings, dams, bridges, tunnels and roads. This is due to the fibres’ mechanical properties, such as lightness, strength, and also resistance to many factors like creep, deterioration by chemicals and pollutants in the air or rain.

“Textiles play an important role in the modernisation of infrastructure and in sustainable buildings”, explains Andrea Bassi, professor at the Department of Civil and Environmental Engineering (DICA), Politecnico of Milan, “Nylon and fiberglass are mixed with traditional fibres to control thermal and acoustic insulation in walls, façades and roofs. Technological innovation in materials, which includes nanotechnologies [emphasis mine] combined with traditional textiles used in clothes, enables buildings and other constructions to be designed using textiles containing steel polyvinyl chloride (PVC) or ethylene tetrafluoroethylene (ETFE). This gives the materials new antibacterial, antifungal and antimycotic properties in addition to being antistatic, sound-absorbing and water-resistant”.

Rooflys is another example. In this case, coated black woven textiles are placed under the roof to protect roof insulation from mould. These building textiles have also been tested for fire resistance, nail sealability, water and vapour impermeability, wind and UV resistance.

Photo: Production line at the co-operative enterprise CAVAC Biomatériaux, France. Natural fibres processed into a continuous mat (biofib) – Martin Ansell, BRE CICM, University of Bath, UK

In Spain three researchers from the Technical University of Madrid (UPM) have developed a new panel made with textile waste. They claim that it can significantly enhance both the thermal and acoustic conditions of buildings, while reducing greenhouse gas emissions and the energy impact associated with the development of construction materials.

Besides textiles, innovative natural fibre composite materials are a parallel field of the research on insulators that can preserve indoor air quality. “These bio-based materials, such as straw and hemp, can reduce the incidence of mould growth because they breathe. The breathability of materials refers to their ability to absorb and desorb moisture naturally”, says expert Finlay White from Modcell, who contributed to the construction of what they claim are the world’s first commercially available straw houses. “For example, highly insulated buildings with poor ventilation can build up high levels of moisture in the air. If the moisture meets a cool surface it will condense, producing mould, unless it is managed. Bio-based materials have the means to absorb moisture so that the risk of condensation is reduced, preventing the potential for mould growth”.

The Bristol-based green technology firm [Modcell] is collaborating with the European Isobio project, which is testing bio-based insulators which perform 20% better than conventional materials. “This would lead to a 5% total energy reduction over the lifecycle of a building”, explains Martin Ansell, from BRE Centre for Innovative Construction Materials (BRE CICM), University of Bath, UK, another partner of the project.

“Costs would also be reduced. We are evaluating the thermal and hygroscopic properties of a range of plant-derived by-products including hemp, jute, rape and straw fibres plus corn cob residues. Advanced sol-gel coatings are being deposited on these fibres to optimise these properties in order to produce highly insulating and breathable construction materials”, Ansell concludes.

You can find Modcell here.

Here’s another image, which I believe is a closeup of the processed fibre shown in the above,

Production line at the co-operative enterprise CAVAC Biomatériaux, France. Natural fibres processed into a continuous mat (biofib) – Martin Ansell, BRE CICM, University of Bath, UK [Note: This caption appears to be a copy of the caption for the previous image]

Cardiac pacemakers: Korea’s in vivo demonstration of a self-powered one* and UK’s breath-based approach

As best I can determine, the last mention of a self-powered pacemaker and the like on this blog was in a Nov. 5, 2012 posting (Developing self-powered batteries for pacemakers). This latest news from the Korea Advanced Institute of Science and Technology (KAIST) is, I believe, the first time that such a device has been successfully tested in vivo. From a June 23, 2014 news item on ScienceDaily,

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring frequent replacements, which may expose patients to the risks involved in repeated medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

A June 23, 2014 KAIST news release on EurekAlert, which originated the news item, provides more details,

The artificial cardiac pacemaker is widely acknowledged as medical equipment that is integrated into the human body to regulate the heartbeats through electrical stimulation to contract the cardiac muscles of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested output reached up to 8.2 V and 0.22 mA from bending and pushing motions, values high enough to directly stimulate the rat’s heart.
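For scale, those figures imply a modest peak electrical power. The back-of-the-envelope calculation below assumes the quoted voltage and current peaks coincide, which the release does not actually state; treat it as an upper-bound estimate.

```python
# P = V * I, assuming the reported 8.2 V and 0.22 mA peaks occur
# simultaneously (an assumption; the press release does not say so).
voltage = 8.2        # volts
current = 0.22e-3    # amperes (0.22 mA)
power = voltage * current
print(f"peak power ≈ {power * 1e3:.2f} mW")  # → peak power ≈ 1.80 mW
```

A couple of milliwatts is in the right range for pulsed cardiac stimulation, which is why the team could drive the rat's heart directly from body motion.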

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”

This image illustrating a self-powered nanogenerator for a cardiac pacemaker has been provided by KAIST,

This picture shows that a self-powered cardiac pacemaker is enabled by a flexible piezoelectric energy harvester. Credit: KAIST

Here’s a link to and a citation for the paper,

Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester by Geon-Tae Hwang, Hyewon Park, Jeong-Ho Lee, SeKwon Oh, Kwi-Il Park, Myunghwan Byun, Hyelim Park, Gun Ahn, Chang Kyu Jeong, Kwangsoo No, HyukSang Kwon, Sang-Goo Lee, Boyoung Joung, and Keon Jae Lee. Advanced Materials DOI: 10.1002/adma.201400562
Article first published online: 17 APR 2014


This paper is behind a paywall.

There was a May 15, 2014 KAIST news release on EurekAlert announcing this same piece of research but from a technical perspective,

The energy efficiency of KAIST’s piezoelectric nanogenerator has increased by almost 40 times, one step closer toward the commercialization of flexible energy harvesters that can supply power infinitely to wearable, implantable electronic devices

Nanogenerators are innovative self-powered energy harvesters that convert kinetic energy created from vibrational and mechanical sources into electrical power, removing the need of external circuits or batteries for electronic devices. This innovation is vital in realizing sustainable energy generation in isolated, inaccessible, or indoor environments and even in the human body.

Nanogenerators, flexible and lightweight energy harvesters on plastic substrates, can scavenge energy from the extremely tiny movements of natural resources and the human body, such as wind, water flow, heartbeats, and diaphragm and respiration activities, to generate electrical signals. The generators are not only self-powered, flexible devices but can also provide permanent power sources to implantable biomedical devices, including cardiac pacemakers and deep brain stimulators.

However, poor energy efficiency and a complex fabrication process have posed challenges to the commercialization of nanogenerators. Keon Jae Lee, Associate Professor of Materials Science and Engineering at KAIST, and his colleagues have recently proposed a solution by developing a robust technique to transfer a high-quality piezoelectric thin film from bulk sapphire substrates to plastic substrates using laser lift-off (LLO).

Applying the inorganic-based laser lift-off (LLO) process, the research team produced large-area PZT thin-film nanogenerators on flexible substrates (2 cm x 2 cm).

“We were able to convert a high-output performance of ~250 V from the slight mechanical deformation of a single thin plastic substrate. Such output power is just enough to turn on 100 LED lights,” Keon Jae Lee explained.

The self-powered nanogenerators can also work with finger and foot motions. For example, under the irregular and slight bending motions of a human finger, the measured current signals had a high electric power of ~8.7 μA. In addition, the piezoelectric nanogenerator has world-record power conversion efficiency, almost 40 times higher than previously reported similar research results, solving the drawbacks related to the fabrication complexity and low energy efficiency.

Lee further commented,

“Building on this concept, it is highly expected that tiny mechanical motions, including human body movements of muscle contraction and relaxation, can be readily converted into electrical energy and, furthermore, acted as eternal power sources.”

The research team is currently studying a method to build three-dimensional stacking of flexible piezoelectric thin films to enhance output power, as well as conducting a clinical experiment with a flexible nanogenerator.

In addition to the 2012 posting I mentioned earlier, there was also this July 12, 2010 posting which described research on harvesting biomechanical movement (heart beat, blood flow, muscle stretching, or even irregular vibration) at the Georgia (US) Institute of Technology, where the lead researcher observed,

…  Wang [Professor Zhong Lin Wang at Georgia Tech] tells Nanowerk. “However, the applications of the nanogenerators under in vivo and in vitro environments are distinct. Some crucial problems need to be addressed before using these devices in the human body, such as biocompatibility and toxicity.”

Bravo to the KAIST researchers for getting this research to the in vivo testing stage.

Meanwhile, at the University of Bristol and the University of Bath, researchers have received funding for a new approach to cardiac pacemakers, designed with the breath in mind. From a June 24, 2014 news item on Azonano,

Pacemaker research from the Universities of Bath and Bristol could revolutionise the lives of over 750,000 people who live with heart failure in the UK.

The British Heart Foundation (BHF) is awarding funding to researchers developing a new type of heart pacemaker that modulates its pulses to match breathing rates.

A June 23, 2014 University of Bristol press release, which originated the news item, provides some context,

During 2012-13 in England, more than 40,000 patients had a pacemaker fitted.

Currently, the pulses from pacemakers are set at a constant rate when fitted, which doesn’t replicate the natural beating of the human heart.

The normal healthy variation in heart rate during breathing is lost in cardiovascular disease and is an indicator for sleep apnoea, cardiac arrhythmia, hypertension, heart failure and sudden cardiac death.

The device is then briefly described (from the press release),

The novel device being developed by scientists at the Universities of Bath and Bristol uses synthetic neural technology to restore this natural variation of heart rate with lung inflation, and is targeted towards patients with heart failure.

The device works by saving the heart energy, improving its pumping efficiency and enhancing blood flow to the heart muscle itself.  Pre-clinical trials suggest the device gives a 25 per cent increase in the pumping ability, which is expected to extend the life of patients with heart failure.

One aim of the project is to miniaturise the pacemaker device to the size of a postage stamp and to develop an implant that could be used in humans within five years.

Dr Alain Nogaret, Senior Lecturer in Physics at the University of Bath, explained: “This is a multidisciplinary project with strong translational value. By combining fundamental science and nanotechnology we will be able to deliver a unique treatment for heart failure which is not currently addressed by mainstream cardiac rhythm management devices.”

The research team has already patented the technology and is working with NHS consultants at the Bristol Heart Institute, the University of California at San Diego and the University of Auckland. [emphasis mine]

Professor Julian Paton, from the University of Bristol, added: “We’ve known for almost 80 years that the heart beat is modulated by breathing but we have never fully understood the benefits this brings. The generous new funding from the BHF will allow us to reinstate this natural occurring synchrony between heart rate and breathing and understand how it brings therapy to hearts that are failing.”

Professor Jeremy Pearson, Associate Medical Director at the BHF, said: “This study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future.”

Given some current events (‘Tesla opens up its patents’, Mike Masnick’s June 12, 2014 posting on Techdirt), I wonder what the situation will be vis à vis patents by the time this device gets to market.

* ‘one’ added to title on Aug. 13, 2014.

Richard Van Duyne solves mystery of Renoir’s red with surface-enhanced Raman spectroscopy (SERS) and Canadian scientists uncover forgeries

The only things these two items have in common are that they are concerned with visual art and with solving mysteries. The first item concerns research by Richard Van Duyne into the nature of the red paint used in one of Renoir’s paintings. A February 14, 2014 news item on Azonano describes some of the art conservation work that Van Duyne’s (nanoish) technology has made possible, along with details about this most recent work,

Scientists are using powerful analytical and imaging tools to study artworks from all ages, delving deep below the surface to reveal the process and materials used by some of the world’s greatest artists.

Northwestern University chemist Richard P. Van Duyne, in collaboration with conservation scientists at the Art Institute of Chicago, has been using a scientific method he discovered nearly four decades ago to investigate masterpieces by Pierre-Auguste Renoir, Winslow Homer and Mary Cassatt.

Van Duyne recently identified the chemical components of paint, now partially faded, used by Renoir in his oil painting “Madame Léon Clapisson.” Van Duyne discovered the artist used carmine lake, a brilliant but light-sensitive red pigment, on this colorful canvas. The scientific investigation is the cornerstone of a new exhibition at the Art Institute of Chicago.

The Art Institute of Chicago’s exhibition, Renoir’s True Colors: Science Solves a Mystery, is being held from Feb. 12, 2014 to April 27, 2014. Here is an image of the Renoir painting in question and an image featuring the equipment being used,

Renoir, Madame Léon Clapisson. Art Institute of Chicago.

Renoir and surface-enhanced Raman spectroscopy (SERS). Art Institute of Chicago.

The Feb. 13, 2014 Northwestern University news release (also on EurekAlert) by Megan Fellman, which originated the news item, gives a brief description of Van Duyne’s technique and its impact on conservation at the Art Institute of Chicago (Note: A link has been removed),

To see what the naked eye cannot see, Van Duyne used surface-enhanced Raman spectroscopy (SERS) to uncover details of Renoir’s paint. SERS, discovered by Van Duyne in 1977, is widely recognized as the most sensitive form of spectroscopy capable of identifying molecules.

Van Duyne and his colleagues’ detective work informed the production of a new digital visualization of the painting’s original colors by the Art Institute’s conservation department. The re-colorized reproduction and the original painting (presented in a case that offers 360-degree views) can be viewed side by side at the exhibition “Renoir’s True Colors: Science Solves a Mystery” through April 27 [2014] at the Art Institute.

I first wrote about Van Duyne’s technique in my wiki, The NanoTech Mysteries. From the Scientists get artful page (Note: A footnote was removed),

Richard Van Duyne, then a chemist at Northwestern University, developed the technique in 1977. Van Duyne’s technology, based on Raman spectroscopy, which has been around since the 1920s, is called ‘surface-enhanced Raman spectroscopy’ or SERS, “[and] uses laser light and nanoparticles of precious metals to interact with molecules to show the chemical make-up of a particular dye.”

This next item is about forgery detection. A March 5, 2014 news release on EurekAlert describes the latest developments,

Gallery owners, private collectors, conservators, museums and art dealers face many problems in protecting and evaluating their collections such as determining origin, authenticity and discovery of forgery, as well as conservation issues. Today these problems are more accurately addressed through the application of modern, non-destructive, “hi-tech” techniques.

Dmitry Gavrilov, a PhD student in the Department of Physics at the University of Windsor (Windsor, Canada), along with Dr. Roman Gr. Maev, the Department of Physics Professor at the University of Windsor (Windsor, Canada) and Professor Dr. Darryl Almond of the University of Bath (Bath, UK) have been busy applying modern techniques to this age-old field. Infrared imaging, thermography, spectroscopy, UV fluorescence analysis, and acoustic microscopy are among the innovative approaches they are using to conduct pre-restoration analysis of works of art. Some fascinating results from their applications are published today in the Canadian Journal of Physics.

Since the early 1900s, using infrared imaging in various wave bands, scientists have been able to see what parts of artworks have been retouched or altered and sometimes even reveal the artist’s original sketches beneath layers of the paint. Thermography is a relatively new approach in art analysis that allows for deep subsurface investigation to find defects and past reparations. To a conservator these new methods are key in saving priceless works from further damage.

Gavrilov explains, “We applied new approaches in processing thermographic data, materials spectra data, and also the technique referred to as craquelure pattern analysis. The latter is based on advanced morphological processing of images of surface cracks. These cracks, caused by a number of factors such as structure of canvas, paints and binders used, can uncover important clues on the origins of a painting.”
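For a rough flavour of the kind of image processing behind craquelure pattern analysis, here is a minimal sketch in Python. The numbers, function names, and the toy “painting” patch are entirely invented for illustration; the team’s actual morphological processing is far more sophisticated (skeletonization, branch-point statistics, and so on),

```python
import numpy as np

def crack_mask(image: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Binarize a grayscale image (values 0..1): cracks show up as dark pixels."""
    return image < threshold

def crack_density(mask: np.ndarray) -> float:
    """Fraction of the surface covered by crack pixels."""
    return float(mask.mean())

def neighbour_orientation(mask: np.ndarray) -> dict:
    """Count horizontal vs vertical adjacencies between crack pixels --
    a crude proxy for the dominant crack direction."""
    horiz = int(np.logical_and(mask[:, :-1], mask[:, 1:]).sum())
    vert = int(np.logical_and(mask[:-1, :], mask[1:, :]).sum())
    return {"horizontal": horiz, "vertical": vert}

# Toy 5x5 "painting" patch with one dark horizontal crack along row 2.
patch = np.ones((5, 5))
patch[2, :] = 0.1
mask = crack_mask(patch)
print(crack_density(mask))          # 5 crack pixels / 25 = 0.2
print(neighbour_orientation(mask))  # {'horizontal': 4, 'vertical': 0}
```

The idea is that statistics like crack density and orientation, extracted from such masks, can be compared across schools, periods, and materials, which is what makes the crack pattern useful as a clue to a painting’s origins.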

“Air-coupled acoustic imaging and acoustic microscopy are other innovative approaches which have been developed and introduced into art analysis by our team under supervision of Dr. Roman Gr. Maev. The technique has proven to be extremely sensitive to small layer detachments and allows for the detection of early stages of degradation. It is based on the same principles as medical and industrial ultrasound, namely, sending a sound wave to the sample and receiving it back.”
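The pulse-echo principle described above boils down to simple time-of-flight arithmetic: the pulse travels down to a flaw and back, so the depth is half the round-trip distance. Here is an illustrative sketch; the echo time and the speed of sound in a paint layer are assumed figures for the example, not values from the paper,

```python
# Time-of-flight depth estimate -- the principle behind pulse-echo ultrasound.

def echo_depth(round_trip_s: float, speed_m_per_s: float) -> float:
    """Depth of a reflecting flaw: the pulse travels down AND back, so halve the trip."""
    return speed_m_per_s * round_trip_s / 2.0

# A detachment echoing back after 0.4 microseconds, assuming ~2500 m/s in the layer:
depth_m = echo_depth(0.4e-6, 2500.0)
print(depth_m * 1000)  # depth in millimetres -> 0.5
```

The sensitivity to small detachments comes from the fact that an air gap reflects nearly all of the acoustic energy, so even a thin delamination produces a strong, early echo.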

Spectroscopy is a technique that has been useful in the fight against art fraud. It can determine chemical composition of pigments and binders, which is essential information in the hands of an art specialist in revealing fakes. As described in the paper, “…according to the FBI, the value of art fraud, forgery and theft is up to $6 billion per year, which makes it the third most lucrative crime in the world after drug trafficking and the illegal weapons trade.”
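To give a flavour of how a measured spectrum might be matched against reference pigments, here is a toy Python sketch using cosine similarity. The pigment names and reflectance values are invented for illustration; real spectral databases and matching algorithms are far richer than this,

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two spectra: 1.0 means identical shape."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical reflectance values at a few wavelengths for two reference pigments.
library = {
    "lead white": [0.9, 0.85, 0.8, 0.75],
    "titanium white": [0.95, 0.6, 0.85, 0.9],
}

# A measured spectrum from a suspect painting; find the closest reference.
measured = [0.88, 0.84, 0.79, 0.76]
best = max(library, key=lambda name: cosine_similarity(measured, library[name]))
print(best)  # -> lead white
```

A match like this can matter for dating: titanium white, for instance, only came into commercial use in the twentieth century, so finding it on a painting claimed to be older is a red flag.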

One might wonder how these modern applications can be safe for delicate works of art when even flash photography is banned in art galleries. The authors discuss this and other safety concerns, describing both historic and modern-day implications of flash bulbs and exhibit illumination and scientific methods. As the paper concludes, the authors suggest that we can expect that the number of “hi-tech” techniques will only increase. In the future, art experts will likely have a variety of tools to help them solve many of the mysteries hiding beneath the layers.

Here’s a link to and a citation for the paper,

A review of imaging methods in analysis of works of art: Thermographic imaging method in art analysis by D. Gavrilov, R.Gr. Maev, and D.P. Almond. Canadian Journal of Physics, DOI: 10.1139/cjp-2013-0128

This paper is open access.

More questions about whether nanoparticles penetrate the skin

The research from the University of Bath about nanoparticles not penetrating the skin has drawn some interest. In addition to the mention here yesterday, in this Oct. 3, 2012 posting, there was this Oct. 2, 2012 posting by Dexter Johnson at the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website. I have excerpted the first and last paragraphs of Dexter’s posting, as they neatly present both the campaign to regulate the use of nanoparticles in cosmetics and the means by which science progresses, i.e., this study is not definitive,

For at least the last several years, NGO’s like Friends of the Earth (FoE) have been leveraging preliminary studies that indicated that nanoparticles might pass right through our skin to call for a complete moratorium on the use of any nanomaterials in sunscreens and cosmetics.

This latest UK research certainly won’t put this issue to rest. These experiments will need to be repeated and the results duplicated. That’s how science works. We should not be jumping to any conclusions that this research proves nanoparticles are absolutely safe any more than we should be jumping to the conclusion that they are a risk. Science cuts both ways.

Meanwhile, Australian writer Sarah Berry takes a different approach in her Oct. 4, 2012 article for the Sydney Morning Herald,

“Breakthrough” claims by cosmetic companies aren’t all they’re cracked up to be, according to a new study.

Nanotechnology — the science of super-small particles — has featured in cosmetic formulations since the late ’80s. Brands claim the technology delivers the “deep-penetrating action” of vitamins and other “active ingredients”.

You may think you know what direction Berry is going to pursue but she swerves,

Dr Gregory Crocetti, a nanotechnology campaigner with Friends of the Earth Australia, was scathing of the study. “To conclude that nanoparticles do not penetrate human skin based on a short-term study using excised pig skin is highly irresponsible,” he said. “This is yet another example of short-term, in-vitro research that doesn’t reflect real-life conditions like skin flexing, and the fact that penetration enhancers are used in most cosmetics. There is an urgent need for more long-term studies that actually reflect realistic conditions.”

Professor Brian Gulson, from Macquarie University in NSW, was similarly critical. The geochemist’s own study, conducted in 2010 in conjunction with CSIRO [Australia’s national science agency, the Commonwealth Scientific and Industrial Research Organization], found that small amounts of zinc particles in sunscreen “can pass through the protective layers of skin exposed to the sun in a real-life environment and be detected in blood and urine”.

Of the latest study he said: “Even though they used a sophisticated method of laser scanning confocal microscopy, their results only reinforced earlier studies [and had] no relevance to ‘real life’, especially to cosmetics, because they used polystyrene nanoparticles, and because they used excised (that is, ‘dead’) pig’s skin.”

I missed the fact that this study was an in vitro test, which is always less convincing than in vivo testing. In my Nov. 29, 2011 posting about some research into nano zinc oxide I mentioned in vitro vs. in vivo testing and Brian Gulson’s research,

I was able to access the study and while I’m not an expert by any means I did note that the study was ‘in vitro’, in this case, the cells were on slides when they were being studied. It’s impossible to draw hard and fast conclusions about what will happen in a body (human or otherwise) since there are other systems at work which are not present on a slide.

… here’s what Brian Gulson had to say about nano zinc oxide concentrations in his work and about a shortcoming in his study (from an Australian Broadcasting Corporation [ABC] Feb. 25, 2010 interview with Ashley Hall),

BRIAN GULSON: I guess the critical thing was that we didn’t find large amounts of it getting through the skin. The sunscreens contain 18 to 20 per cent zinc oxide usually and ours was about 20 per cent zinc. So that’s an awful lot of zinc you’re putting on the skin but we found tiny amounts in the blood of that tracer that we used.

ASHLEY HALL: So is it a significant amount?

BRIAN GULSON: No, no it’s really not.

ASHLEY HALL: But Brian Gulson is warning people who use a lot of sunscreen over an extended period that they could be at risk of having elevated levels of zinc.

BRIAN GULSON: Maybe with young children where you’re applying it seven days a week, it could be an issue but I’m more than happy to continue applying it to my grandchildren.

ASHLEY HALL: This study doesn’t shed any light on the question of whether the nano-particles themselves played a part in the zinc absorption.

BRIAN GULSON: That was the most critical thing. This isotope technique cannot tell whether or not it’s a zinc oxide nano-particle that got through skin or whether it’s just zinc that was dissolved up in contact with the skin and then forms zinc ions or so-called soluble ions. So that’s one major deficiency of our study.

Of course, I have a question about Gulson’s conclusion that very little of the nano zinc oxide was penetrating the skin, based on blood and urine samples taken over the course of the study. Is it possible that after penetrating the skin it was stored in the cells instead of being eliminated?

It seems it’s not yet time to press the panic button since more research is needed for scientists to refine their understanding of nano zinc oxide and possible health effects from its use.

What I found most interesting in Berry’s article was the advice from the Friends of the Earth,

The contradictory claims about sunscreen can make it hard to know what to do this summer. Friends of the Earth Australia advise people to continue to be sun safe — seeking shade, wearing protective clothing, a hat and sunglasses and using broad spectrum SPF 30+ sunscreen.

This is a huge change in tone for that organization, which until now has been relentless in its anti-nanosunscreen stance. Here they advise using a sunscreen, and they don’t qualify the advice, as they usually would, by saying you should avoid nanosunscreens. I guess after the debacle earlier this year (mentioned in this Feb. 9, 2012 posting titled: Unintended consequences: Australians not using sunscreens to avoid nanoparticles?), they have reconsidered the intensity of their campaign.

For anyone interested in some of the history of the Friends of the Earth campaign and how the NGO (non-governmental organization) shaped the prevailing sentiment against nanosunscreens, I suggest reading Dexter’s posting in full, and for those interested in the response from Australian scientists to this latest research, do read Berry’s article.