
Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a calculator, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structure can support calculations spanning thousands of layers, and it is this depth that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.
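
The mechanism Marchand-Maillet describes (weighted inputs, a firing threshold, and outputs passed on as the next layer’s inputs) can be sketched in a few lines of Python. The sketch below is purely illustrative: the layer sizes, weights and threshold are invented for the example and have nothing to do with AlphaGo or any of the systems mentioned above.

def neuron(inputs, weights, threshold=0.5):
    # Weighted sum of the inputs; the neuron "fires" (passes its value on)
    # only if the sum exceeds the threshold, otherwise it stays silent.
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0

def layer(inputs, weight_matrix):
    # One layer: every neuron sees the same inputs but has its own weights.
    # The layer's outputs become the input signal for the following layer.
    return [neuron(inputs, weights) for weights in weight_matrix]

# Toy network: 3 input values -> 2 hidden neurons -> 1 output neuron.
pixels = [0.2, 0.9, 0.4]                      # stand-ins for pixel values
hidden = layer(pixels, [[0.5, 0.1, 0.3],
                        [0.9, 0.7, 0.2]])
output = layer(hidden, [[0.6, 0.8]])
print(output)

In a real deep network the weights are not picked by hand but adjusted automatically as the system is trained on the tagged examples described earlier.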

Video games to the rescue

For decades, the limits of computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
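
To give a concrete sense of the looping Schmidhuber describes, here is a bare-bones recurrent step sketched in Python. It is not an LSTM and certainly not the IDSIA software; the weights and the per-character encoding are made up for illustration. The only point is that the hidden state carried from step to step lets the final letters ‘oat’ be processed differently depending on whether the word began with ‘b’ or ‘fl’.

def step(char_value, hidden, w_in=0.7, w_rec=0.5):
    # One time step: blend the current input with the previous hidden state.
    # Feeding 'hidden' back in at every step is the loop that gives the
    # network its memory of earlier parts of the sequence.
    return max(0.0, w_in * char_value + w_rec * hidden)

def encode_word(word):
    hidden = 0.0
    for ch in word:
        hidden = step(ord(ch) / 128.0, hidden)   # crude per-character encoding
    return hidden

# The two words end identically ("oat"), yet the final states differ because
# the different beginnings ("b" versus "fl") were folded into the hidden state.
print(encode_word("boat"), encode_word("float"))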

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

“Brute force” technique for biomolecular information processing

The research is being announced by the University of Tokyo but there is definitely a French flavour to this project. From a June 20, 2016 news item on ScienceDaily,

A Franco-Japanese research group at the University of Tokyo has developed a new “brute force” technique to test thousands of biochemical reactions at once and quickly home in on the range of conditions where they work best. Until now, optimizing such biomolecular systems, which can be applied for example to diagnostics, would have required months or years of trial and error experiments, but with this new technique that could be shortened to days.

A June 20, 2016 University of Tokyo news release on EurekAlert, which originated the news item, describes the project in more detail,

“We are interested in programming complex biochemical systems so that they can process information in a way that is analogous to electronic devices. If you could obtain a high-resolution map of all possible combinations of reaction conditions and their corresponding outcomes, the development of such reactions for specific purposes like diagnostic tests would be quicker than it is today,” explains Centre National de la Recherche Scientifique (CNRS) researcher Yannick Rondelez at the Institute of Industrial Science (IIS) [located at the University of Tokyo].

“Currently researchers use a combination of computer simulations and painstaking experiments. However, while simulations can test millions of conditions, they are based on assumptions about how molecules behave and may not reflect the full detail of reality. On the other hand, testing all possible conditions, even for a relatively simple design, is a daunting job.”

Rondelez and his colleagues at the Laboratory for Integrated Micro-Mechanical Systems (LIMMS), a 20-year collaboration between the IIS and the French CNRS, demonstrated a system that can test ten thousand different biochemical reaction conditions at once. Working with the IIS Applied Microfluidic Laboratory of Professor Teruo Fujii, they developed a platform to generate a myriad of micrometer-sized droplets containing random concentrations of reagents and then sandwich a single layer of them between glass slides. Fluorescent markers combined with the reagents are automatically read by a microscope to determine the precise concentrations in each droplet and also observe how the reaction proceeds.

“It was difficult to fine-tune the device at first,” explains Dr Anthony Genot, a CNRS researcher at LIMMS. “We needed to generate thousands of droplets containing reagents within a precise range of concentrations to produce high resolution maps of the reactions we were studying. We expected that this would be challenging. But one unanticipated difficulty was immobilizing the droplets for the several days it took for some reactions to unfold. It took a lot of testing to create a glass chamber design that was airtight and firmly held the droplets in place.” Overall, it took nearly two years to fine-tune the device until the researchers could get their droplet experiment to run smoothly.

Seeing the new system producing results was revelatory. “You start with a screen full of randomly-colored dots, and then suddenly the computer rearranges them into a beautiful high-resolution map, revealing hidden information about the reaction dynamics. Seeing them all slide into place to produce something that had only ever been seen before through simulation was almost magical,” enthuses Rondelez.

“The map can tell us not only about the best conditions of biochemical reactions, it can also tell us about how the molecules behave in certain conditions. Using this map we’ve already found a molecular behavior that had been predicted theoretically, but had not been shown experimentally. With our technique we can explore how molecules talk to each other in test tube conditions. Ultimately, we hope to illuminate the intimate machinery of living molecular systems like ourselves,” says Rondelez.

Here’s a link to and a citation for the paper,

High-resolution mapping of bifurcations in nonlinear biochemical circuits by A. J. Genot, A. Baccouche, R. Sieskind, N. Aubert-Kato, N. Bredeche, J. F. Bartolo, V. Taly, T. Fujii, & Y. Rondelez. Nature Chemistry (2016) doi:10.1038/nchem.2544 Published online 20 June 2016

This paper is behind a paywall.

Update on the International NanoCar race coming up in Autumn 2016

First off, the race seems to be adjusting its brand (it was billed as the International NanoCar Race in my Dec. 21, 2015 posting), from a May 20, 2016 news item on Nanowerk,

The first-ever international race of molecule-cars (Nanocar Race) will take place at the CEMES laboratory in Toulouse this fall [2016].

A May 9, 2016 notice on France’s Centre national de la recherche scientifique (CNRS) news website, which originated the news item, fills in a few more details,

Five teams are fine-tuning their cars—each made up of around a hundred atoms and measuring a few nanometers in length. They will be propelled by an electric current on a gold atom “race track.” We take you behind the scenes to see how these researcher-racers are preparing for the NanoCar Race.

About this video

Original title: The NanoCar Race

Production year: 2016

Length: 6 min 23

Director: Pierre de Parscau

Producer: CNRS Images

Speaker(s):

Christian Joachim
Centre d’Elaboration des Matériaux et d’Etudes Structurales

Gwénaël Rapenne
(CEMES/CNRS)

Corentin Durand
(CEMES/CNRS)

Pierre Abeilhou
(CEMES/CNRS)

Frank Eisenhut
Technical University of Dresden

You can find the video embedded in both the Nanowerk news item and the CNRS notice.

Spider webs inspire liquid wire

Courtesy University of Oxford

Usually, when science talk runs to spider webs, the focus is on strength, but this research from the UK and France is all about resilience. From a May 16, 2016 news item on phys.org,

Why doesn’t a spider’s web sag in the wind or catapult flies back out like a trampoline? The answer, according to new research by an international team of scientists, lies in the physics behind a ‘hybrid’ material produced by spiders for their webs.

Pulling on a sticky thread in a garden spider’s orb web and letting it snap back reveals that the thread never sags but always stays taut—even when stretched to many times its original length. This is because any loose thread is immediately spooled inside the tiny droplets of watery glue that coat and surround the core gossamer fibres of the web’s capture spiral.

This phenomenon is described in the journal PNAS by scientists from the University of Oxford, UK and the Université Pierre et Marie Curie, Paris, France.

The researchers studied the details of this ‘liquid wire’ technique in spiders’ webs and used it to create composite fibres in the laboratory which, just like the spider’s capture silk, extend like a solid and compress like a liquid. These novel insights may lead to new bio-inspired technology.

A May 16, 2016 University of Oxford press release (also on EurekAlert), which originated the news item, provides more detail,

Professor Fritz Vollrath of the Oxford Silk Group in the Department of Zoology at Oxford University said: ‘The thousands of tiny droplets of glue that cover the capture spiral of the spider’s orb web do much more than make the silk sticky and catch the fly. Surprisingly, each drop packs enough punch in its watery skins to reel in loose bits of thread. And this winching behaviour is used to excellent effect to keep the threads tight at all times, as we can all observe and test in the webs in our gardens.’

The novel properties observed and analysed by the scientists rely on a subtle balance between fibre elasticity and droplet surface tension. Importantly, the team was also able to recreate this technique in the laboratory using oil droplets on a plastic filament. And this artificial system behaved just like the spider’s natural winch silk, with spools of filament reeling and unreeling inside the oil droplets as the thread extended and contracted.

Dr Hervé Elettro, the first author and a doctoral researcher at Institut Jean Le Rond D’Alembert, Université Pierre et Marie Curie, Paris, said: ‘Spider silk has been known to be an extraordinary material for around 40 years, but it continues to amaze us. While the web is simply a high-tech trap from the spider’s point of view, its properties have a huge amount to offer the worlds of materials, engineering and medicine.

‘Our bio-inspired hybrid threads could be manufactured from virtually any components. These new insights could lead to a wide range of applications, such as microfabrication of complex structures, reversible micro-motors, or self-tensioned stretchable systems.’

Here’s a link to and a citation for the paper,

In-drop capillary spooling of spider capture thread inspires hybrid fibers with mixed solid–liquid mechanical properties by Hervé Elettro, Sébastien Neukirch, Fritz Vollrath, and Arnaud Antkowiak. PNAS doi: 10.1073/pnas.1602451113

This paper appears to be open access.

The Leonardo Project and the master’s DNA (deoxyribonucleic acid)

I’ve never really understood the mania for digging up bodies of famous people in history and trying to ascertain how the person really died or what kind of diseases they may have had but the practice fascinates me. The latest famous person to be subjected to a forensic inquiry centuries after death is Leonardo da Vinci. A May 5, 2016 Human Evolution (journal) news release on EurekAlert provides details,

A team of eminent specialists from a variety of academic disciplines has coalesced around a goal of creating new insight into the life and genius of Leonardo da Vinci by means of authoritative new research and modern detective technologies, including DNA science.

The Leonardo Project is in pursuit of several possible physical connections to Leonardo, beaming radar, for example, at an ancient Italian church floor to help corroborate extensive research to pinpoint the likely location of the tomb of his father and other relatives. A collaborating scholar also recently announced the successful tracing of several likely DNA relatives of Leonardo living today in Italy (see endnotes).

If granted the necessary approvals, the Project will compare DNA from Leonardo’s relatives past and present with physical remnants — hair, bones, fingerprints and skin cells — associated with the Renaissance figure whose life marked the rebirth of Western civilization.

The Project’s objectives, motives, methods, and work to date are detailed in a special issue of the journal Human Evolution, published coincident with a meeting of the group hosted in Florence this week under the patronage of Eugenio Giani, President of the Tuscan Regional Council (Consiglio Regionale della Toscana).

The news release goes on to provide some context for the work,

Born in Vinci, Italy, Leonardo died in 1519, age 67, and was buried in Amboise, southwest of Paris. His creative imagination foresaw and described innovations hundreds of years before their invention, such as the helicopter and armored tank. His artistic legacy includes the iconic Mona Lisa and The Last Supper.

The idea behind the Project, founded in 2014, has inspired and united anthropologists, art historians, genealogists, microbiologists, and other experts from leading universities and institutes in France, Italy, Spain, Canada and the USA, including specialists from the J. Craig Venter Institute of California, which pioneered the sequencing of the human genome.

The work underway resembles in complexity recent projects such as the successful search for the tomb of historic author Miguel de Cervantes and the identification of England’s King Richard III from remains exhumed from beneath a UK parking lot, fittingly re-interred in March 2015, 500 years after his death.

Like Richard, Leonardo was born in 1452, and was buried in a setting that underwent changes in subsequent years such that the exact location of the grave was lost.

If DNA and other analyses yield a definitive identification, conventional and computerized techniques might reconstruct the face of Leonardo from models of the skull.

In addition to Leonardo’s physical appearance, information potentially revealed from the work includes his ancestry and additional insight into his diet, state of health, personal habits, and places of residence.

According to the news release, the researchers have an agenda that goes beyond facial reconstruction and clues about ancestry and diet,

Beyond those questions, and the verification of Leonardo’s “presumed remains” in the chapel of Saint-Hubert at the Château d’Amboise, the Project aims to develop a genetic profile extensive enough to understand better his abilities and visual acuity, which could provide insights into other individuals with remarkable qualities.

It may also make a lasting contribution to the art world, within which forgery is a multi-billion dollar industry, by advancing a technique for extracting and sequencing DNA from other centuries-old works of art, and associated methods of attribution.

Says Jesse Ausubel, Vice Chairman of the Richard Lounsbery Foundation, sponsor of the Project’s meetings in 2015 and 2016: “I think everyone in the group believes that Leonardo, who devoted himself to advancing art and science, who delighted in puzzles, and whose diverse talents and insights continue to enrich society five centuries after his passing, would welcome the initiative of this team — indeed would likely wish to lead it were he alive today.”

The researchers aim to have the work complete by 2019,

In the journal, group members underline the highly conservative, precautionary approach required at every phase of the Project, which they aim to conclude in 2019 to mark the 500th anniversary of Leonardo’s death.

For example, one objective is to verify whether fingerprints on Leonardo’s paintings, drawings, and notebooks can yield DNA consistent with that extracted from identified remains.

Early last year, Project collaborators from the International Institute for Humankind Studies in Florence opened discussions with the laboratory in that city where Leonardo’s Adoration of the Magi has been undergoing restoration for nearly two years, to explore the possibility of analyzing dust from the painting for possible DNA traces. A crucial question is whether traces of DNA remain or whether restoration measures and the passage of time have obliterated all evidence of Leonardo’s touch.

In preparation for such analysis, a team from the J. Craig Venter Institute and the University of Florence is examining privately owned paintings believed to be of comparable age to develop and calibrate techniques for DNA extraction and analysis. At this year’s meeting in Florence, the researchers also described a pioneering effort to analyze the microbiome of a painting thought to be about five centuries old.

If human DNA can one day be obtained from Leonardo’s work and sequenced, the genetic material could then be compared with genetic information from skeletal or other remains that may be exhumed in the future.

Here’s a list of the participating organizations (from the news release),

  • The Institut de Paléontologie Humaine, Paris
  • The International Institute for Humankind Studies, Florence
  • The Laboratory of Molecular Anthropology and Paleogenetics, Biology Department, University of Florence
  • Museo Ideale Leonardo da Vinci, in Vinci, Italy
  • J. Craig Venter Institute, La Jolla, California
  • Laboratory of Genetic Identification, University of Granada, Spain
  • The Rockefeller University, New York City

You can find the special issue of Human Evolution (HE Vol. 31, 2016 no. 3) here. The introductory essay is open access but the other articles are behind a paywall.

“One minus one equals zero” has been disproved

A mixture of mirror-image molecules can be optically active according to an April 27, 2016 news item on ScienceDaily,

In 1848, Louis Pasteur showed that molecules that are mirror images of each other had exactly opposite rotations of light. When mixed in solution, they cancel the effects of the other, and no rotation of light is observed. Now, a research team has demonstrated that a mixture of mirror-image molecules crystallized in the solid state can be optically active.

An April 26, 2016 Northwestern University news release (also on EurekAlert), which originated the news item, expands on the theme,

In the world of chemistry, one minus one almost always equals zero.

But new research from Northwestern University and the Centre National de la Recherche Scientifique (CNRS) in France shows that is not always the case. And the discovery will change scientists’ understanding of mirror-image molecules and their optical activity.

Now, Northwestern’s Kenneth R. Poeppelmeier and his research team are the first to demonstrate that a mixture of mirror-image molecules crystallized in the solid state can be optically active. The scientists first designed and made the materials and then measured their optical properties.

“In our case, one minus one does not always equal zero,” said first author Romain Gautier of CNRS. “This discovery will change scientists’ understanding of these molecules, and new applications could emerge from this observation.”

The property of rotating light, which has been known for more than two centuries to exist in many molecules, already has many applications in medicine, electronics, lasers and display devices.

“The phenomenon of optical activity can occur in a mixture of mirror-image molecules, and now we’ve measured it,” said Poeppelmeier, a Morrison Professor of Chemistry in the Weinberg College of Arts and Sciences. “This is an important experiment.”

Although this phenomenon has been predicted for a long time, no one — until now — had created such a racemic mixture (a combination of equal amounts of mirror-image molecules) and measured the optical activity.

“How do you deliberately create these materials?” Poeppelmeier said. “That’s what excites me as a chemist.” He and Gautier painstakingly designed the material, using one of four possible solid-state arrangements known to exhibit circular dichroism (the ability to absorb differently the “rotated” light).

Next, Richard P. Van Duyne, a Morrison Professor of Chemistry at Northwestern, and graduate student Jordan M. Klingsporn measured the material’s optical activity, finding that mirror-image molecules are active when arranged in specific orientations in the solid state.

Here’s a link to and a citation for the paper,

Optical activity from racemates by Romain Gautier, Jordan M. Klingsporn, Richard P. Van Duyne, & Kenneth R. Poeppelmeier. Nature Materials (2016) doi:10.1038/nmat4628 Published online 18 April 2016

This paper is behind a paywall.

Not enough talk about nano risks?

It’s not often that a controversy amongst visual artists intersects with a story about carbon nanotubes, risk, and the roles that scientists play in public discourse.

Nano risks

Dr. Andrew Maynard, Director of the Risk Innovation Lab at Arizona State University, opens the discussion in a March 29, 2016 article for the appropriately named website, The Conversation (Note: Links have been removed),

Back in 2008, carbon nanotubes – exceptionally fine tubes made up of carbon atoms – were making headlines. A new study from the U.K. had just shown that, under some conditions, these long, slender fiber-like tubes could cause harm in mice in the same way that some asbestos fibers do.

As a collaborator in that study, I was at the time heavily involved in exploring the risks and benefits of novel nanoscale materials. Back then, there was intense interest in understanding how materials like this could be dangerous, and how they might be made safer.

Fast forward to a few weeks ago, when carbon nanotubes were in the news again, but for a very different reason. This time, there was outrage not over potential risks, but because the artist Anish Kapoor had been given exclusive rights to a carbon nanotube-based pigment – claimed to be one of the blackest pigments ever made.

The worries that even nanotech proponents had in the early 2000s about possible health and environmental risks – and their impact on investor and consumer confidence – seem to have evaporated.

I had covered the carbon nanotube-based coating in a March 14, 2016 posting here,

Surrey NanoSystems (UK) is billing their Vantablack as the world’s blackest coating and they now have a new product in that line according to a March 10, 2016 company press release (received via email),

A whole range of products can now take advantage of Vantablack’s astonishing characteristics, thanks to the development of a new spray version of the world’s blackest coating material. The new substance, Vantablack S-VIS, is easily applied at large scale to virtually any surface, whilst still delivering the proven performance of Vantablack.

Oddly, the company news release notes Vantablack S-VIS could be used in consumer products while including the recommendation that it not be used in products where physical contact or abrasion is possible,

… Its ability to deceive the eye also opens up a range of design possibilities to enhance styling and appearance in luxury goods and jewellery [emphasis mine].

… “We are continuing to develop the technology, and the new sprayable version really does open up the possibility of applying super-black coatings in many more types of airborne or terrestrial applications. Possibilities include commercial products such as cameras, [emphasis mine] equipment requiring improved performance in a smaller form factor, as well as differentiating the look of products by means of the coating’s unique aesthetic appearance. It’s a major step forward compared with today’s commercial absorber coatings.”

The structured surface of Vantablack S-VIS means that it is not recommended for applications where it is subject to physical contact or abrasion. [emphasis mine] Ideally, it should be applied to surfaces that are protected, either within a packaged product, or behind a glass or other protective layer.

Presumably Surrey NanoSystems is looking at ways to make its Vantablack S-VIS capable of being used in products such as jewellery, cameras, and other consumer products where physical contact and abrasions are a strong possibility.

Andrew has pointed questions about using Vantablack S-VIS in new applications (from his March 29, 2016 article; Note: Links have been removed),

The original Vantablack was a specialty carbon nanotube coating designed for use in space, to reduce the amount of stray light entering space-based optical instruments. It was this far remove from any people that made Vantablack seem pretty safe. Whatever its toxicity, the chances of it getting into someone’s body were vanishingly small. It wasn’t nontoxic, but the risk of exposure was minuscule.

In contrast, Vantablack S-VIS is designed to be used where people might touch it, inhale it, or even (unintentionally) ingest it.

To be clear, Vantablack S-VIS is not comparable to asbestos – the carbon nanotubes it relies on are too short, and too tightly bound together to behave like needle-like asbestos fibers. Yet its combination of novelty, low density and high surface area, together with the possibility of human exposure, still raise serious risk questions.

For instance, as an expert in nanomaterial safety, I would want to know how readily the spray – or bits of material dislodged from surfaces – can be inhaled or otherwise get into the body; what these particles look like; what is known about how their size, shape, surface area, porosity and chemistry affect their ability to damage cells; whether they can act as “Trojan horses” and carry more toxic materials into the body; and what is known about what happens when they get out into the environment.

Risk and the roles that scientists play

Andrew makes his point and holds various groups to account (from his March 29, 2016 article; Note: Links have been removed),

… in the case of Vantablack S-VIS, there’s been a conspicuous absence of such nanotechnology safety experts in media coverage.

This lack of engagement isn’t too surprising – publicly commenting on emerging topics is something we rarely train, or even encourage, our scientists to do.

And yet, where technologies are being commercialized at the same time their safety is being researched, there’s a need for clear lines of communication between scientists, users, journalists and other influencers. Otherwise, how else are people to know what questions they should be asking, and where the answers might lie?

In 2008, initiatives existed such as those at the Center for Biological and Environmental Nanotechnology (CBEN) at Rice University and the Project on Emerging Nanotechnologies (PEN) at the Woodrow Wilson International Center for Scholars (where I served as science advisor) that took this role seriously. These and similar programs worked closely with journalists and others to ensure an informed public dialogue around the safe, responsible and beneficial uses of nanotechnology.

In 2016, there are no comparable programs, to my knowledge – both CBEN and PEN came to the end of their funding some years ago.

Some of the onus here lies with scientists themselves to make appropriate connections with developers, consumers and others. But to do this, they need the support of the institutions they work in, as well as the organizations who fund them. This is not a new idea – there is of course a long and ongoing debate about how to ensure academic research can benefit ordinary people.

Media and risk

As mainstream media such as newspapers and broadcast news continue to suffer losses in audience numbers, the situation vis-à-vis science journalism has changed considerably since 2008. Finding information is more of a challenge even for the interested.

As for those who might be interested, catching their attention is considerably more challenging. For example, some years ago scientists claimed to have achieved ‘cold fusion’ and there were television interviews (on the 60 Minutes TV programme, amongst others) and cover stories in Time and Newsweek, which you could find in the grocery checkout line. You didn’t have to look for it. In fact, it was difficult to avoid the story. Sadly, the scientists had oversold and misrepresented their findings and that too was extensively covered in mainstream media. The news cycle went on for months. Something similar happened in 2010 with ‘arsenic life’. There was much excitement and then it became clear that scientists had overstated and misrepresented their findings. That news cycle was completed within three or fewer weeks and most members of the public were unaware. Media saturation is no longer what it used to be.

Innovative outreach needs to be part of the discussion and perhaps the Vantablack S-VIS controversy amongst artists can be viewed through that lens.

Anish Kapoor and his exclusive rights to Vantablack

According to a Feb. 29, 2016 article by Henri Neuendorf for artnet news, there is some consternation regarding internationally known artist Anish Kapoor and a deal he has made with Surrey Nanosystems, the makers of Vantablack in all its iterations (Note: Links have been removed),

Anish Kapoor provoked the fury of fellow artists by acquiring the exclusive rights to the blackest black in the world.

The Indian-born British artist has been working and experimenting with the “super black” paint since 2014 and has recently acquired exclusive rights to the pigment according to reports by the Daily Mail.

The artist clearly knows the value of this innovation for his work. “I’ve been working in this area for the last 30 years or so with all kinds of materials but conventional materials, and here’s one that does something completely different,” he said, adding “I’ve always been drawn to rather exotic materials.”

This description from his Wikipedia entry gives some idea of Kapoor’s stature (Note: Links have been removed),

Sir Anish Kapoor, CBE RA (Hindi: अनीश कपूर, Punjabi: ਅਨੀਸ਼ ਕਪੂਰ), (born 12 March 1954) is a British-Indian sculptor. Born in Bombay,[1][2] Kapoor has lived and worked in London since the early 1970s when he moved to study art, first at the Hornsey College of Art and later at the Chelsea School of Art and Design.

He represented Britain in the XLIV Venice Biennale in 1990, when he was awarded the Premio Duemila Prize. In 1991 he received the Turner Prize and in 2002 received the Unilever Commission for the Turbine Hall at Tate Modern. Notable public sculptures include Cloud Gate (colloquially known as “the Bean”) in Chicago’s Millennium Park; Sky Mirror, exhibited at the Rockefeller Center in New York City in 2006 and Kensington Gardens in London in 2010;[3] Temenos, at Middlehaven, Middlesbrough; Leviathan,[4] at the Grand Palais in Paris in 2011; and ArcelorMittal Orbit, commissioned as a permanent artwork for London’s Olympic Park and completed in 2012.[5]

Kapoor received a Knighthood in the 2013 Birthday Honours for services to visual arts. He was awarded an honorary doctorate degree from the University of Oxford in 2014.[6] [7] In 2012 he was awarded Padma Bhushan by Congress led Indian government which is India’s 3rd highest civilian award.[8]

Artists can be cutthroat but they can also be prankish. Take a look at this image of Kapoor and note the blue background,

Artist Anish Kapoor is known for the rich pigments he uses in his work. (Image: Andrew Winning/Reuters)

I don’t know why or when this image (used to illustrate Andrew’s essay) was taken, so it may be coincidental, but the background brings to mind Yves Klein and his International Klein Blue (IKB) pigment. From the IKB Wikipedia entry,

L’accord bleu (RE 10), 1960, mixed media piece by Yves Klein featuring IKB pigment on canvas and sponges. Jaredzimmerman (WMF) – Foundation Stedelijk Museum Amsterdam Collection

Here’s more from the IKB Wikipedia entry (Note: Links have been removed),

International Klein Blue (IKB) was developed by Yves Klein in collaboration with Edouard Adam, a Parisian art paint supplier whose shop is still in business on the Boulevard Edgar-Quinet in Montparnasse.[1] The uniqueness of IKB does not derive from the ultramarine pigment, but rather from the matte, synthetic resin binder in which the color is suspended, and which allows the pigment to maintain as much of its original qualities and intensity of color as possible.[citation needed] The synthetic resin used in the binder is a polyvinyl acetate developed and marketed at the time under the name Rhodopas M or M60A by the French pharmaceutical company Rhône-Poulenc.[2] Adam still sells the binder under the name “Médium Adam 25.”[1]

In May 1960, Klein deposited a Soleau envelope, registering the paint formula under the name International Klein Blue (IKB) at the Institut national de la propriété industrielle (INPI),[3] but he never patented IKB. Only valid under French law, a soleau enveloppe registers the date of invention, according to the depositor, prior to any legal patent application. The copy held by the INPI was destroyed in 1965. Klein’s own copy, which the INPI returned to him duly stamped is still extant.[4]

In short, it’s not the first time an artist has ‘owned’ a colour. Kapoor is not a performance artist as was Klein but his sculptural work lends itself to spectacle and to stimulating public discourse. As to whether or not this is a prank, I cannot say, but it has stimulated a discussion which ranges from intellectual property and artists to the risks of carbon nanotubes and the role scientists could play in the discourse about the risks associated with emerging technologies.

Regardless of how it was intended, bravo to Kapoor.

More reading

Andrew’s March 29, 2016 article has also been reproduced on Nanowerk and Slate.

Jonathan Jones has written about Kapoor and the Vantablack controversy in a Feb. 29, 2016 article for The Guardian titled: Can an artist ever really own a colour?

When based on plastic materials, contemporary art can degrade quickly

There’s an intriguing April 1, 2016 article by Josh Fischman for Scientific American about a problem with artworks from the 20th century and later—plastic-based materials (Note: A link has been removed),

Conservators at museums and art galleries have a big worry. They believe there is a good chance the art they showcase now will not be fit to be seen in one hundred years, according to researchers in a project called Nanorestart. Why? After 1940, artists began using plastic-based material that was a far cry from the oil-based paints used by classical painters. Plastic is also far more fragile, it turns out. Its chemical bonds readily break. And they cannot be restored using techniques historically relied upon by conservators.

So art conservation scientists have turned to nanotechnology for help.

Sadly, there isn’t any detail in Fischman’s article (*ETA June 17, 2016 article [for Fast Company] by Charlie Sorrel, which features some good pictures, a succinct summary of Fischman’s article and a literary reference [Kurt Vonnegut’s Bluebeard]*) about how nanotechnology is playing or might play a role in this conservation effort. Further investigation into the two projects (NanoRestART and POPART) mentioned by Fischman didn’t provide much more detail about NanoRestART’s science aspect but POPART does provide some details.

NanoRestART

It’s probably too soon (this project isn’t even a year old) to be getting much in the way of the nanoscience details but NanoRestART has big plans according to its website homepage,

The conservation of this diverse cultural heritage requires advanced solutions at the cutting edge of modern chemistry and material science in an entirely new scientific framework that will be developed within the NANORESTART project.

The NANORESTART project will focus on the synthesis of novel poly-functional nanomaterials and on the development of highly innovative restoration techniques to address the conservation of a wide variety of materials mainly used by modern and contemporary artists.

In NANORESTART, enterprises and academic centers of excellence in the field of synthesis and characterization of nano- and advanced materials have joined forces with complementary conservation institutions and freelance restorers. This multidisciplinary approach will cover the development of different materials in response to real conservation needs, the testing of such materials, the assessment of their environmental impact, and their industrial scalability.

NanoRestART’s (NANOmaterials for the REStoration of works of ART) project page spells out their goals in the order in which they are being approached,

The ground-breaking nature of our research can be more easily outlined by focussing on specific issues. The main conservation challenges that will be addressed in the project are:

 

Conservation challenge 1: Cleaning of contemporary painted and plastic surfaces (CC1)

Conservation challenge 2: Stabilization of canvases and painted layers in contemporary art (CC2)

Conservation challenge 3: Removal of unwanted modern materials (CC3)

Conservation challenge 4: Enhanced protection of artworks in museums and outdoors (CC4)

The European Commission provides more information about the project on its CORDIS website’s NanoRestART webpage including the start and end dates for the project and the consortium members,

From 2015-06-01 to 2018-12-01, ongoing project

  • CHALMERS TEKNISKA HOEGSKOLA AB (Sweden)
  • MIRABILE ANTONIO (France)
  • NATIONALMUSEET (Denmark)
  • CONSIGLIO NAZIONALE DELLE RICERCHE (Italy)
  • UNIVERSITY COLLEGE CORK, NATIONAL UNIVERSITY OF IRELAND, CORK (Ireland)
  • MBN NANOMATERIALIA SPA (Italy)
  • KEMIJSKI INSTITUT (Slovenia)
  • CHEVALIER AURELIA (France)
  • UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL (Brazil)
  • UNIVERSITA CA’ FOSCARI VENEZIA (Italy)
  • AKZO NOBEL PULP AND PERFORMANCE CHEMICALS AB (Sweden)
  • COMMISSARIAT A L ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES (France)
  • ARKEMA FRANCE SA (France)
  • UNIVERSIDAD DE SANTIAGO DE COMPOSTELA (Spain)
  • UNIVERSITY COLLEGE LONDON (United Kingdom)
  • ZFB ZENTRUM FUR BUCHERHALTUNG GMBH (Germany)
  • UNIVERSITAT DE BARCELONA (Spain)
  • THE BOARD OF TRUSTEES OF THE TATE GALLERY (United Kingdom)
  • ASSOCIAZIONE ITALIANA PER LA RICERCA INDUSTRIALE – AIRI (Italy)
  • THE ART INSTITUTE OF CHICAGO (United States)
  • MINISTERIO DE EDUCACION, CULTURA Y DEPORTE (Spain)
  • STICHTING HET RIJKSMUSEUM (Netherlands)
  • UNIVERSITEIT VAN AMSTERDAM (Netherlands)
  • UNIVERSIDADE FEDERAL DO RIO DE JANEIRO (Brazil)
  • ACCADEMIA DI BELLE ARTI DI BRERA (Italy)

It was a bit surprising to see Brazil and the US as participants but The Art Institute of Chicago has done nanotechnology-enabled conservation in the past as per my March 24, 2014 posting about a Renoir painting. I’m not familiar with the Brazilian organization.

POPART

POPART (Preservation of Plastic Artefacts in museum collections), mentioned by Fischman, was a European Commission project which ran from 2008 to 2012. Reports can be found on the CORDIS Popart webpage. The final report has some interesting bits (Note: I have added subheads in square brackets),

To achieve a valid comparison of the various invasive and non-invasive techniques proposed for the identification and characterisation of plastics, a sample collection (SamCo) of plastics artefacts of about 100 standard and reference plastic objects was gathered. SamCo was made up of two kinds of reference materials: standards and objects. Each standard represents the reference material of a ‘pure’ plastic; while each object represents the reference of the same plastic as in the standards, but compounded with pigments, dyestuffs, fillers, anti oxidants, plasticizers etc.  Three partners ICN [Instituut Collectie Nederland], V&A [Victoria and Albert Museum] and Natmus [National Museet] collected different natural and synthetic plastics from the ICN reference collections of plastic objects, from flea markets, antique shops and from private collections and from their own collection to contribute to SamCo, the sample collection for identification by POPART partners. …

As a successive step, the collections of the following museums were surveyed:

-Victoria & Albert Museum (V&A), London, U.K.
-Stedelijk Museum, Amsterdam, The Netherlands
-Musée d’Art Moderne et d’Art Contemporaine (MAMAC) Nice, France
-Musée d’Art moderne, St. Etienne, France
-Musée Galliera, Paris, France

At the V&A approximately 200 objects were surveyed. Good or fair conservation conditions were found for about 85% of the objects, whereas the remaining 15% were in poor or even unacceptable (3%) condition. In particular, crazing and delamination of polyurethane faux leather and surface stickiness and darkening of plasticized PVC were observed. The situation at the Stedelijk Museum in Amsterdam was particularly favourable because a previous survey had been done in 1995 so that it was possible to make a comparison with the Popart survey in 2010. A total number of 40 objects, which comprised plastics dating from as early as the 1930s to the newer plastics of the 1980s, were considered and their actual conservation state compared with the 1995 records. Of the objects surveyed in 2010, it can be concluded that 21 remained in the same condition. 13 objects containing PA, PUR, PVC, PP or natural rubber changed into a lesser condition due to chemical and physical degradation, while works of art containing either PMMA or PS changed into a lesser condition due to mechanical damage and incorrect artist’s technique (inappropriate adhesive). 6 works of art (containing either PA or PMMA or both) changed into a better condition due to restoration or replacements. More than 230 objects have been examined in the 3 museums in France. A particular effort was devoted to the identification of the constituent plastic materials. Surveys were undertaken without any sophisticated equipment, in order to work in museums’ everyday conditions. Plastics hidden by other materials or by paint layers were not or only barely accessible, which is why the final count of some plastics may be underestimated in the final results. Another outcome is that plastic identification has been made at a general level only, by trying to identify the polymer family each plastic belongs to. Lastly, evidence of chemical degradation processes that do not cause visible or perceptible damage has not been detected and could not be taken into account in the final results.

… The most damaged artefacts were those made of cellulose acetate, cellulose nitrate and PVC.

[Polly (the doll)]

One of the main issues that is of interest for conservators and curators is to assess which kinds of plastics are most vulnerable to deterioration and to what extent they can deteriorate under the environmental conditions normally encountered in museums. Although one might expect that real-time deterioration could be ascertained by a careful investigation of museum objects on display or in storage, real objects or artworks may not be sampled due to ethical considerations. Therefore, reference objects were prepared by Natmus in the form of a doll (Polly) for simultaneous exposures in different environmental conditions. The doll comprised 11 different plastics representative of types typically found in modern museum collections. The 16 identical dolls produced were exposed in different places, not only in normal exhibit conditions, but also in some selected extreme conditions to ascertain possible acceleration of the deterioration process. In most cases the environmental parameters were also measured. The dolls were periodically evaluated by visual inspection and in selected cases by instrumental analyses.

In conclusion the experimental campaign carried out with Polly dolls can be viewed as a pilot study aimed at tackling the practical issues related to the monitoring of real three dimensional plastic artworks and the surrounding environment.

The overall exposure period (one year and half) was sufficient to observe initial changes in the more susceptible polymers, such as polyurethane ethers and esters, and polyamide, with detectable chromatic changes and surface effects. Conversely the other polymers were shown to be stable in the same conditions over this time period.

[Polly as an awareness raising tool]

Last but not least, the educational and communication benefits of an object like Polly facilitated the dissemination of the Popart Project to the public, and increased the awareness of issues associated with plastics in museum collections.

[Cleaning issues]

Mechanical cleaning has long been perceived as the least damaging technique to remove soiling from plastics. The results obtained from POPART suggest that the risks of introducing scratches or residues by mechanical cleaning are measurable. Some plastics were clearly more sensitive to mechanical damage than others. From the model plastics evaluated, HIPS was the most sensitive followed by HDPE, PVC, PMMA and CA. Scratches could not be measured on XPS due to its inhomogeneous surfaces. Plasticised PVC scratched easily, but appeared to repair itself because plasticiser migrated to surfaces and filled scratches.

Photo micrographs revealed that although all 22 cleaning materials evaluated in POPART scratched test plastics, some scratches were sufficiently shallow to be invisible to the naked eye. Duzzit and Scotch Brite sponges as well as all paper based products caused more scratching of surfaces than brushes and cloths. Some cleaning materials, notably Akapad yellow and white sponges, compressed air, latex and synthetic rubber sponges and goat hair brushes left residues on surfaces. These residues were only visible on glass-clear, transparent test plastics such as PMMA. HDPE and HIPS surfaces both had matte and roughened appearances after cleaning with dry-ice. XPS was completely destroyed by the treatment. No visible changes were present on PMMA and PVC.

Of the cleaning methods evaluated, only canned air, natural and synthetic feather duster left surfaces unchanged. Natural and synthetic feather duster, microfiber-, spectacle – and cotton cloths, cotton bud, sable hair brush and leather chamois showed good results when applied to clean model plastics.

Most mechanical cleaning materials induced static electricity after cleaning, causing immediate attraction of dust. It was also noticed that generally when adding an aqueous cleaning agent to a cleaning material, the area scratched was reduced. This implied that cleaning agents also functioned as lubricants. A similar effect was exhibited by white spirit and isopropanol.

Based on cleaning vectors, Judith Hofenk de Graaff detergent, distilled water and Dehypon LS45 were the least damaging cleaning agents for all model plastics evaluated. None of the aqueous cleaning agents caused visible changes when used in combination with the least damaging cleaning materials. Sable hair brush, synthetic feather duster and yellow Akapad sponge were unsuitable for applying aqueous cleaning agents. Polyvinyl acetate sponge swelled in contact with solvents and was only suitable for aqueous cleaning processes.

Based on cleaning vectors, white spirit was the least damaging solvent. Acetone and Surfynol 61 were the most damaging for all model plastics and cannot be recommended for cleaning plastics. Surfynol 61 dissolved polyvinyl acetate sponge and left a milky residue on surfaces, which was particularly apparent on clear PMMA surfaces. Surfynol 61 left residues on surfaces on evaporating and acetone evaporated too rapidly to lubricate cleaning materials thereby increasing scratching of surfaces.

Supercritical carbon dioxide induced discolouration and mechanical damage to the model plastics, particularly to XPS, CA and PMMA and should not be used for conservation cleaning of plastics.

Potential Impact:
Cultural heritage is recognised as an economical factor, the cost of decay of cultural heritage and the risk associated to some material in collection may be high. It is generally estimated that plastics, developed at great numbers since the 20th century’s interbellum, will not survive that long. This means that fewer generations will have access to lasting plastic art for study, contemplation and enjoyment. On the other hand will it normally be easier to reveal a contemporary object’s technological secrets because of better documentation and easier access to artists’ working methods, ideas and intentions. A first more or less world encompassing recognition of the problems involved with museum objects made wholly or in part of plastics was through the conference ‘Saving the twentieth century” held in Ottawa, Canada in 1991. This was followed later by ‘Modern Art, who cares’ in Amsterdam, The Netherlands in 1997, ‘Mortality Immortality? The Legacy of Modern Art’ in Los Angeles, USA in 1998 and, for example much more recent, ‘Plastics –Looking at the future and learning from the Past’ in London, UK in 2007. A growing professional interest in the care of plastics was clearly reflected in the creation of an ICOM-CC working group dedicated to modern materials in 1996, its name change to Modern Materials and Contemporary Art in 2002, and its growing membership from 60 at inception to over 200 at the 16th triennial conference in Lisbon, Portugal in 2011 and tentatively to over 300 as one of the aims put forward in the 2011-2014 programme of that ICOM-CC working group. …

[Intellectual property]

Another element pertaining to conservation of modern art is the copyright of artists that extends at least 50 years beyond their death. Damage, value and copyright may all influence the way by which damage is measured through scientific analysis, more specifically through the application of invasive or non-invasive techniques. Any selection of those will not only have an influence on the extent of observable damage, but also on the detail of information gathered and necessary to explain damage and to suggest conservation measures.

[How much is deteriorating?]

… it is obvious from surveys carried out in several museums in France, the UK and The Netherlands that from 15 to 35 % of what I would then call an average plastic material based collection is in a poor to unacceptable condition. However, some 75 % would require cleaning,

I hope to find out more about how nanotechnology is expected to be implemented in the conservation and preservation of plastic-based art. The NanoRestART project started in June 2015 and hopefully more information will be disseminated in the next year or so.

While it’s not directly related, there was some work with conservation of daguerreotypes (19th century photographic technique) and nanotechnology mentioned in my Nov. 17, 2015 posting which was a followup to my Jan. 10, 2015 posting about the project and the crisis precipitating it.

*ETA June 30, 2016: Here’s a clip from a BBC programme, Science in Action, broadcast on June 30, 2016, featuring a chat with some of the scientists involved in the NanoRestArt project (Note: This excerpt is from a longer programme and seemingly starts in the middle of a conversation,)

Watching paint dry at the nanoscale

When paint dries it separates into two layers, and according to scientists this may have implications for improving performance in products ranging from paints to cosmetics. From a March 18, 2016 news item on ScienceDaily,

New research published today in the journal Physical Review Letters has described a new physical mechanism that separates particles according to their size during the drying of wet coatings. The discovery could help improve the performance of a wide variety of everyday goods, from paint to sunscreen.

A March 18, 2016 University of Surrey (England) press release (also on EurekAlert), which originated the news item, provides more details,

Researchers from the University of Surrey [England, UK], in collaboration with the Université Claude Bernard, Lyon [France], used computer simulation and materials experiments to show how, when coatings with different-sized particles (such as paints) dry, the coating spontaneously forms two layers.

This mechanism can be used to control the properties at the top and bottom of coatings independently, which could help increase performance of coatings across industries as diverse as beauty and pharmaceuticals.

Dr Andrea Fortini, of the University of Surrey and lead author explained:

“When coatings such as paint, ink or even outer layers on tablets are made, they work by spreading a liquid containing solid particles onto a surface, and allowing the liquid to evaporate. This is nothing new, but what is exciting is that we’ve shown that during evaporation, the small particles push away the larger ones, remaining at the top surface whilst the larger are pushed to bottom. This happens naturally.”

Dr Fortini continued, “This type of ‘self-layering’ in a coating could be very useful. For example, in a sun screen, most of the sunlight-blocking particles could be designed to push their way to the top, leaving particles that can adhere to the skin near the bottom of the coating. Typically the particles used in coatings have sizes that are 1000 times smaller than the width of a human hair so engineering these coatings takes place at a microscopic level. ”

The team is continuing this research to understand how to control the width of the layer by changing the type and amount of small particles in the coating, and to explore their use in industrial products such as paints, inks, and adhesives.
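Out of curiosity, here’s a small Python sketch of the ‘small pushes large’ idea described in the press release. To be clear, this is my own toy illustration, not the researchers’ model or code: it just follows small and large Brownian particles in a one-dimensional film whose top surface descends as the solvent evaporates, with a made-up drift term that pushes the large particles away from regions where the small particles are concentrated. All parameter values are invented,

```python
# Toy 1-D sketch of "self-layering" in a drying film -- my own illustration,
# NOT the model or code used by Fortini et al.  Small and large Brownian
# particles sit in a film whose liquid-air interface descends as the solvent
# evaporates; an invented drift term pushes the large particles down the
# gradient of the small-particle concentration ("small pushes large").
# All parameter values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

H0 = 100.0            # initial film thickness (arbitrary units)
v_evap = 0.02         # speed of the descending liquid-air interface
dt = 0.05             # time step
steps = 60_000        # film shrinks to 40% of its initial thickness

D_small, D_large = 1.0, 0.2   # diffusion constants (Stokes-Einstein: D ~ 1/radius)
mu_push = 5.0                 # strength of the "small pushes large" drift (toy value)

n_small, n_large = 4000, 400
z_small = rng.uniform(0.0, H0, n_small)   # heights of the small particles
z_large = rng.uniform(0.0, H0, n_large)   # heights of the large particles

bins = np.linspace(0.0, H0, 101)          # grid for the small-particle density

for step in range(steps):
    H = H0 - v_evap * step * dt           # current interface height

    # local small-particle density and its gradient (finite differences)
    rho, edges = np.histogram(z_small, bins=bins, density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    grad_rho = np.gradient(rho, centres)

    # large particles drift down the small-particle density gradient
    drift = -mu_push * np.interp(z_large, centres, grad_rho)

    # Brownian steps
    z_small += np.sqrt(2.0 * D_small * dt) * rng.standard_normal(n_small)
    z_large += drift * dt + np.sqrt(2.0 * D_large * dt) * rng.standard_normal(n_large)

    # reflect off the substrate (z = 0) and the descending interface (z = H)
    for z in (z_small, z_large):
        z[:] = np.abs(z)
        above = z > H
        z[above] = 2.0 * H - z[above]
        np.clip(z, 0.0, H, out=z)

H_final = H0 - v_evap * steps * dt
print(f"final film thickness: {H_final:.1f}")
print(f"mean height (small particles): {z_small.mean() / H_final:.2f}")
print(f"mean height (large particles): {z_large.mean() / H_final:.2f}")
```

Comparing the two printed mean heights gives a rough sense of how the layering emerges in this cartoon; you can experiment with v_evap and mu_push to see how strongly the two species separate.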

Here’s a link to and a citation for the paper,

Dynamic Stratification in Drying Films of Colloidal Mixtures by Andrea Fortini, Ignacio Martín-Fabiani, Jennifer Lesage De La Haye, Pierre-Yves Dugas, Muriel Lansalot, Franck D’Agosto, Elodie Bourgeat-Lami, Joseph L. Keddie, and Richard P. Sear. Phys. Rev. Lett. 116, 118301 – Published 18 March 2016. DOI: http://dx.doi.org/10.1103/PhysRevLett.116.118301

© 2016 American Physical Society

This article is behind a paywall.

Observing silica microspheres leads to theories about schools of fish and human crowds

Researchers developing theories about the crowd behaviour of tiny particles believe the theories may have some relevance to macro world phenomena.

[downloaded from http://www.ucl.ac.uk/news/news-articles/0316/090316-crowd-control]

From a March 9, 2016 news item on Nanowerk,

Crowds formed from tiny particles disperse as their environment becomes more disordered, according to scientists from UCL [University College London, UK], Bilkent University [Turkey] and Université Pierre et Marie Curie [France].

The new mechanism is counterintuitive and might help describe crowd behaviour in natural, real-world systems where many factors impact on individuals’ responses to either gather or disperse.

“Bacterial colonies, schools of fish, flocking birds, swarming insects and pedestrian flow all show collective and dynamic behaviours which are sensitive to changes in the surrounding environment and their dispersal or gathering can be sometimes the difference between life and death,” said lead researcher, Dr Giorgio Volpe, UCL Chemistry.

A March 9, 2016 UCL press release (also on EurekAlert), which originated the news item, expands on the theme,

“The crowd often has different behaviours to the individuals within it and we don’t know what the simple rules of motion are for this. If we understood these and how they are adapted in complex environments, we could externally regulate active systems. Examples include controlling the delivery of biotherapeutics in nanoparticle carriers to the target in the body, or improving crowd security in a panic situation.”

The study, published today in Nature Communications, investigated the behaviour of active colloidal particles in a controllable system to find out the rules of motion for individuals gathering or dispersing in response to external factors.

Colloidal particles are free to diffuse through a solution and for this study suspended silica microspheres were used. The colloidal particles became active with the addition of E. coli bacteria to the solution. Active colloidal particles were chosen as a model system because they move of their own accord using the energy from their environment, which is similar to how animals move to get food.

Initially, the active colloidal particles gathered at the centre of the area illuminated by a smooth beam, which provided an attractive potential. Disorder was introduced using a speckle beam pattern, which disordered the attractive potential and caused the colloids to disperse from the area at a rate of 0.6 particles per minute over 30 minutes. The particles switched between gathering and dispersing in proportion to the level of external disorder imposed.

Erçağ Pinçe, who is first author of the study with Dr Sabareesh K. P. Velu, both Bilkent University, said: “We didn’t expect to see this mechanism as it’s counterintuitive but it might already be at play in natural systems. Our finding suggests there may be a way to control active matter through external factors. We could use it to control an existing system, or to design active agents that exploit the features of the environment to perform a given task, for example designing distinct depolluting agents for different types of polluted terrains and soils.”

Co-author, Dr Giovanni Volpe, Bilkent University, added: “Classical statistical physics allows us to understand what happens when a system is at equilibrium but unfortunately for researchers, life happens far from equilibrium. Behaviours are often unpredictable as they strongly depend on the characteristic of the environment. We hope that understanding these behaviours will help reveal the physics behind living organisms, but also help deliver innovative technologies in personalised healthcare, environmental sustainability and security.”

The team now plans to apply these findings to real-life situations to improve society. In particular, the researchers want to exploit the main conclusions from their work to develop intelligent nanorobots for applications in drug delivery and environmental sustainability that are capable of efficiently navigating through complex natural environments.
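For readers who want to play with the gathering-versus-dispersing idea, here’s a minimal Python sketch. Again, this is my own cartoon and not the researchers’ experiment or model (their silica spheres were driven by E. coli in real optical fields): self-propelled particles sit either in one smooth attractive well or in a disordered, speckle-like field of random bumps, and the script reports what fraction stays near the centre. All parameter values are invented,

```python
# Toy 2-D "active Brownian particle" sketch, loosely inspired by the
# experiment above -- my own cartoon, NOT the authors' model (their silica
# spheres were driven by E. coli in real optical fields).  Self-propelled
# point particles feel either one smooth attractive well or a disordered,
# speckle-like field of many random bumps, and we count how many remain
# near the centre.  All parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

N = 200          # number of particles
v0 = 1.0         # self-propulsion speed
Dr = 0.5         # rotational diffusion constant
dt = 0.02
steps = 10_000
box = 20.0       # particles live in [-box, box]^2

def drift_smooth(pos):
    """Drift toward a single broad attractive Gaussian well at the origin."""
    A, sigma = 15.0, 5.0
    r2 = np.sum(pos**2, axis=1, keepdims=True)
    return -(A / sigma**2) * pos * np.exp(-r2 / (2.0 * sigma**2))

# a crude "speckle": many narrow attractive bumps at random positions
spots = rng.uniform(-box, box, size=(150, 2))

def drift_speckle(pos):
    """Drift toward whichever random bumps happen to be nearby."""
    A, sigma = 2.0, 1.0
    d = pos[:, None, :] - spots[None, :, :]                    # (N, n_spots, 2)
    w = np.exp(-np.sum(d**2, axis=2) / (2.0 * sigma**2))       # (N, n_spots)
    return -(A / sigma**2) * np.sum(d * w[:, :, None], axis=1)

def run(drift_fn):
    pos = rng.uniform(-2.0, 2.0, size=(N, 2))     # start gathered near the centre
    theta = rng.uniform(0.0, 2.0 * np.pi, N)      # propulsion directions
    for _ in range(steps):
        heading = np.column_stack((np.cos(theta), np.sin(theta)))
        pos += (v0 * heading + drift_fn(pos)) * dt
        theta += np.sqrt(2.0 * Dr * dt) * rng.standard_normal(N)
        pos = np.clip(pos, -box, box)             # crude confining walls
    return np.mean(np.hypot(pos[:, 0], pos[:, 1]) < 5.0)

print("fraction near the centre, smooth well:   ", run(drift_smooth))
print("fraction near the centre, speckle field: ", run(drift_speckle))
```

In this cartoon the smooth well tends to keep the particles gathered while the disordered field lets them wander off, loosely echoing the contrast described in the press release.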

Here’s a link to and a citation for the paper,

Disorder-mediated crowd control in an active matter system by Erçağ Pinçe, Sabareesh K. P. Velu, Agnese Callegari, Parviz Elahi, Sylvain Gigan, Giovanni Volpe, & Giorgio Volpe. Nature Communications 7, Article number: 10907. doi:10.1038/ncomms10907 Published 09 March 2016

This is an open access paper.