Tag Archives: France

Graphene Canada and its second annual conference

An Aug. 31, 2016 news item on Nanotechnology Now announces Canada’s second graphene-themed conference,

The 2nd edition of Graphene & 2D Materials Canada 2016 International Conference & Exhibition (www.graphenecanadaconf.com) will take place in Montreal (Canada): 18-20 October, 2016.

– An industrial forum with focus on Graphene Commercialization (Abalonyx, Alcereco Inc, AMO GmbH, Avanzare, AzTrong Inc, Bosch GmbH, China Innovation Alliance of the Graphene Industry (CGIA), Durham University & Applied Graphene Materials, Fujitsu Laboratories Ltd., Hanwha Techwin, Haydale, IDTechEx, North Carolina Central University & Chaowei Power Ltd, NTNU&CrayoNano, Phantoms Foundation, Southeast University, The Graphene Council, University of Siegen, University of Sunderland and University of Waterloo)
– Extensive thematic workshops in parallel (Materials & Devices Characterization, Chemistry, Biosensors & Energy and Electronic Devices)
– A significant exhibition (Abalonyx, Go Foundation, Grafoid, Group NanoXplore Inc., Raymor | Nanointegris and Suragus GmbH)

As I noted in my 2015 post about Graphene Canada and its conference, the group is organized in a rather interesting fashion and I see the tradition continues, i.e., the lead organizers seem to be situated in countries other than Canada. From the Aug. 31, 2016 news item on Nanotechnology Now,

Organisers: Phantoms Foundation [located in Spain] www.phantomsnet.net
Catalan Institute of Nanoscience and Nanotechnology – ICN2 (Spain) | CEMES/CNRS (France) | GO Foundation (Canada) | Grafoid Inc (Canada) | Graphene Labs – IIT (Italy) | McGill University (Canada) | Texas Instruments (USA) | Université Catholique de Louvain (Belgium) | Université de Montreal (Canada)

You can find the conference website here.

Improving the quality of sight in artificial retinas

Researchers at France’s Centre national de la recherche scientifique (CNRS) and elsewhere have taken a step toward improving the sight provided by artificial retinas, according to an Aug. 25, 2016 news item on Nanowerk (Note: A link has been removed),

A major therapeutic challenge, the retinal prostheses that have been under development during the past ten years can enable some blind subjects to perceive light signals, but the image thus restored is still far from being clear. By comparing in rodents the activity of the visual cortex generated artificially by implants against that produced by “natural sight”, scientists from CNRS, CEA [Commissariat à l’énergie atomique et aux énergies alternatives is the French Alternative Energies and Atomic Energy Commission], INSERM [Institut national de la santé et de la recherche médicale is the French National Institute of Health and Medical Research], AP-HM [Assistance Publique – Hôpitaux de Marseille] and Aix-Marseille Université identified two factors that limit the resolution of prostheses.

Based on these findings, they were able to improve the precision of prosthetic activation. These multidisciplinary efforts, published on 23 August 2016 in eLife (“Probing the functional impact of sub-retinal prosthesis”), thus open the way towards further advances in retinal prostheses that will enhance the quality of life of implanted patients.

An Aug. 24, 2016 CNRS press release, which originated the news item, expands on the theme,

A retinal prosthesis comprises three elements: a camera (inserted in the patient’s spectacles), an electronic microcircuit (which transforms data from the camera into an electrical signal) and a matrix of microscopic electrodes (implanted in the eye in contact with the retina). This prosthesis replaces the photoreceptor cells of the retina: like them, it converts visual information into electrical signals which are then transmitted to the brain via the optic nerve. It can treat blindness caused by a degeneration of retinal photoreceptors, on condition that the optic nerve has remained functional. Equipped with these implants, patients who were totally blind can recover visual perceptions in the form of light spots, or phosphenes. Unfortunately, at present, the light signals perceived are not clear enough to recognize faces, read or move about independently.

To understand the resolution limits of the image generated by the prosthesis, and to find ways of optimizing the system, the scientists carried out a large-scale experiment on rodents.  By combining their skills in ophthalmology and the physiology of vision, they compared the response of the visual system of rodents to both natural visual stimuli and those generated by the prosthesis.

Their work showed that the prosthesis activated the visual cortex of the rodent in the correct position and at ranges comparable to those obtained under natural conditions.  However, the extent of the activation was much too great, and its shape was much too elongated.  This deformation was due to two separate phenomena observed at the level of the electrode matrix. Firstly, the scientists observed excessive electrical diffusion: the thin layer of liquid situated between the electrode and the retina passively diffused the electrical stimulus to neighboring nerve cells. And secondly, they detected the unwanted activation of retinal fibers situated close to the cells targeted for stimulation.

Armed with these findings, the scientists were able to improve the properties of the interface between the prosthesis and retina, with the help of specialists in interface physics.  Together, they were able to generate less diffuse currents and significantly improve artificial activation, and hence the performance of the prosthesis.

This study was lengthy because of the range of parameters covered (the different positions, types and intensities of signals) and the surgical challenges encountered (inserting the implant and recording the images generated in the animal’s brain), but it has nevertheless opened the way to promising improvements in retinal prostheses for humans.

This work was carried out by scientists from the Institut de Neurosciences de la Timone (CNRS/AMU) and AP-HM, in collaboration with CEA-Leti and the Institut de la Vision (CNRS/Inserm/UPMC).

Artificial retinas

© F. Chavane & S. Roux.

Activation (colored circles at the level of the visual cortex) of the visual system by prosthetic stimulation (in the middle, in red, the insert shows an image of an implanted prosthesis) is greater and more elongated than the activation achieved under natural stimulation (on the left, in yellow). Using a protocol to adapt stimulation (on the right, in green), the size and shape of the activation can be controlled and are more similar to natural visual activation (yellow).

Here’s a link to and a citation for the paper,

Probing the functional impact of sub-retinal prosthesis by Sébastien Roux, Frédéric Matonti, Florent Dupont, Louis Hoffart, Sylvain Takerkart, Serge Picaud, Pascale Pham, and Frédéric Chavane. eLife 2016;5:e12687 DOI: http://dx.doi.org/10.7554/eLife.12687 Published August 23, 2016

This paper appears to be open access.

New electrochromic material for ‘smart’ windows

Given that it’s summer, I seem to be increasingly obsessed with windows that help control the heat from the sun. So, this Aug. 22, 2016 news item on ScienceDaily hit my sweet spot,

Researchers in the Cockrell School of Engineering at The University of Texas at Austin have invented a new flexible smart window material that, when incorporated into windows, sunroofs, or even curved glass surfaces, will have the ability to control both heat and light from the sun. …

Delia Milliron, an associate professor in the McKetta Department of Chemical Engineering, and her team’s advancement is a new low-temperature process for coating the new smart material on plastic, which makes it easier and cheaper to apply than conventional coatings made directly on the glass itself. The team demonstrated a flexible electrochromic device, which means a small electric charge (about 4 volts) can lighten or darken the material and control the transmission of heat-producing, near-infrared radiation. Such smart windows are aimed at saving on cooling and heating bills for homes and businesses.

An Aug. 22, 2016 University of Texas at Austin news release (also on EurekAlert), which originated the news item, describes the international team behind this research and offers more details about the research itself,

The research team is an international collaboration, including scientists at the European Synchrotron Radiation Facility and CNRS in France, and Ikerbasque in Spain. Researchers at UT Austin’s College of Natural Sciences provided key theoretical work.

Milliron and her team’s low-temperature process generates a material with a unique nanostructure, which doubles the efficiency of the coloration process compared with a coating produced by a conventional high-temperature process. It can switch between clear and tinted more quickly, using less power.

The new electrochromic material, like its high-temperature processed counterpart, has an amorphous structure, meaning the atoms lack any long-range organization as would be found in a crystal. However, the new process yields a unique local arrangement of the atoms in a linear, chain-like structure. Whereas conventional amorphous materials produced at high temperature have a denser three-dimensionally bonded structure, the researchers’ new linearly structured material, made of chemically condensed niobium oxide, allows ions to flow in and out more freely. As a result, it is twice as energy efficient as the conventionally processed smart window material.

At the heart of the team’s study is their rare insight into the atomic-scale structure of the amorphous materials, whose disordered structures are difficult to characterize. Because few techniques can characterize atomic-scale structure well enough to link it to properties, it has been difficult to engineer amorphous materials to enhance their performance.

“There’s relatively little insight into amorphous materials and how their properties are impacted by local structure,” Milliron said. “But, we were able to characterize with enough specificity what the local arrangement of the atoms is, so that it sheds light on the differences in properties in a rational way.”

Graeme Henkelman, a co-author on the paper and chemistry professor in UT Austin’s College of Natural Sciences, explains that determining the atomic structure for amorphous materials is far more difficult than for crystalline materials, which have an ordered structure. In this case, the researchers were able to use a combination of techniques and measurements to determine an atomic structure that is consistent in both experiment and theory.

“Such collaborative efforts that combine complementary techniques are, in my view, the key to the rational design of new materials,” Henkelman said.

Milliron believes the knowledge gained here could inspire deliberate engineering of amorphous materials for other applications such as supercapacitors that store and release electrical energy rapidly and efficiently.

The Milliron lab’s next challenge is to develop a flexible material using their low-temperature process that meets or exceeds the best performance of electrochromic materials made by conventional high-temperature processing.

“We want to see if we can marry the best performance with this new low-temperature processing strategy,” she said.

Here’s a link to and a citation for the paper,

Linear topology in amorphous metal oxide electrochromic networks obtained via low-temperature solution processing by Anna Llordés, Yang Wang, Alejandro Fernandez-Martinez, Penghao Xiao, Tom Lee, Agnieszka Poulain, Omid Zandi, Camila A. Saez Cabezas, Graeme Henkelman, & Delia J. Milliron. Nature Materials (2016)  doi:10.1038/nmat4734 Published online 22 August 2016

This paper is behind a paywall.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, performs its calculations using artificial neural networks, a software architecture that mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1956, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory in Buffalo, New York, published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once implemented on a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.
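For readers curious about what such an early single-layer network actually did, Rosenblatt’s learning rule fits in a few lines of code. This is a toy sketch, not Rosenblatt’s hardware: the 3x3 “characters”, learning rate and labels are invented for illustration. It learns to tell a vertical bar from a horizontal one:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Rosenblatt's learning rule: whenever the thresholded output
    disagrees with the label, nudge the weights toward the input."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "rudimentary characters": 3x3 pixel grids flattened to 9 inputs;
# label 1 for a vertical bar, 0 for a horizontal bar.
vertical   = ([0, 1, 0,  0, 1, 0,  0, 1, 0], 1)
horizontal = ([0, 0, 0,  1, 1, 1,  0, 0, 0], 0)
w, b = train_perceptron([vertical, horizontal])
# After training, predict(w, b, vertical[0]) is 1 and
# predict(w, b, horizontal[0]) is 0.
```

A single layer like this can only separate patterns by a straight line through input space, which is exactly the limitation the multilayer networks of 1985 overcame.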

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
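Marchand-Maillet’s description translates almost directly into code. Here is a minimal sketch of that weighted-sum-and-threshold behaviour, with all weights and thresholds invented purely for illustration:

```python
def neuron(inputs, weights, threshold):
    # Weighted sum of inputs; the neuron "fires" (passes on its
    # activation) only if the sum exceeds the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return total if total > threshold else 0.0

def layer(inputs, weight_rows, threshold):
    # Each neuron in the layer sees the same inputs but applies its
    # own weights; the layer's outputs feed the next layer.
    return [neuron(inputs, w, threshold) for w in weight_rows]

# Two layers chained: the first layer's outputs become the second
# layer's inputs (all numbers are arbitrary illustrations).
x = [0.5, 0.8]
h = layer(x, [[1.0, -0.5], [0.3, 0.9]], threshold=0.1)
y = layer(h, [[0.7, 0.2]], threshold=0.1)
```

Here the first neuron of the hidden layer stays silent (its weighted sum of 0.1 does not exceed the threshold), while the second fires, and only the firing neuron contributes to the final layer; training a real network amounts to adjusting those weights automatically.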

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, limited computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning survived only thanks to the video games sector, which eventually began producing graphics chips, or GPUs, of unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
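The recurrence idea behind that ‘boat’/‘float’ example can be sketched drastically simplified. This is not an actual LSTM (which adds gating machinery to protect its memory over long sequences), and the numeric encoding of ‘b’, ‘fl’ and ‘oat’ below is invented for illustration; the point is only that a looped hidden state lets the final output depend on the first sound:

```python
def recurrent_pass(sequence, update):
    # A recurrent network in miniature: the hidden state h is fed
    # back in at every step, so later outputs depend on earlier inputs.
    h = 0.0
    for x in sequence:
        h = update(h, x)
    return h

# Invented toy encoding: 'b' = 1, 'fl' = -1, each sound of 'oat' = 0.
# The update simply carries the opening sound forward, so the state
# after "...oat" still remembers what preceded it.
update = lambda h, x: h + x
boat   = recurrent_pass([1, 0, 0, 0], update)    # 'b'  + 'o','a','t'
float_ = recurrent_pass([-1, 0, 0, 0], update)   # 'fl' + 'o','a','t'
```

After both sequences end on identical inputs, the two hidden states still differ (1 versus -1), which is all the network needs to tell the words apart.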

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

“Brute force” technique for biomolecular information processing

The research is being announced by the University of Tokyo but there is definitely a French flavour to this project. From a June 20, 2016 news item on ScienceDaily,

A Franco-Japanese research group at the University of Tokyo has developed a new “brute force” technique to test thousands of biochemical reactions at once and quickly home in on the range of conditions where they work best. Until now, optimizing such biomolecular systems, which can be applied for example to diagnostics, would have required months or years of trial and error experiments, but with this new technique that could be shortened to days.

A June 20, 2016 University of Tokyo news release on EurekAlert, which originated the news item, describes the project in more detail,

“We are interested in programming complex biochemical systems so that they can process information in a way that is analogous to electronic devices. If you could obtain a high-resolution map of all possible combinations of reaction conditions and their corresponding outcomes, the development of such reactions for specific purposes like diagnostic tests would be quicker than it is today,” explains Centre National de la Recherche Scientifique (CNRS) researcher Yannick Rondelez at the Institute of Industrial Science (IIS) [located at the University of Tokyo].

“Currently researchers use a combination of computer simulations and painstaking experiments. However, while simulations can test millions of conditions, they are based on assumptions about how molecules behave and may not reflect the full detail of reality. On the other hand, testing all possible conditions, even for a relatively simple design, is a daunting job.”

Rondelez and his colleagues at the Laboratory for Integrated Micro-Mechanical Systems (LIMMS), a 20-year collaboration between the IIS and the French CNRS, demonstrated a system that can test ten thousand different biochemical reaction conditions at once. Working with the IIS Applied Microfluidic Laboratory of Professor Teruo Fujii, they developed a platform to generate a myriad of micrometer-sized droplets containing random concentrations of reagents and then sandwich a single layer of them between glass slides. Fluorescent markers combined with the reagents are automatically read by a microscope to determine the precise concentrations in each droplet and also observe how the reaction proceeds.
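The mapping idea itself can be sketched in a few lines. This is an illustrative simulation, not the actual IIS/LIMMS pipeline, and `hypothetical_reaction` is a made-up stand-in for the biochemistry: droplets receive random reagent concentrations, each droplet’s outcome is “read”, and the droplets are binned by concentration to build the map:

```python
import random

def hypothetical_reaction(a, b):
    # Made-up stand-in for the real biochemistry: a sharp threshold
    # mimics the kind of bifurcation the researchers were mapping.
    return 1.0 if a * b > 0.25 else 0.0

def droplet_map(n_droplets=10_000, bins=20, seed=0):
    """Give each droplet random reagent concentrations, read one
    outcome per droplet, and bin droplets by concentration to build
    a 2D map of outcomes."""
    rng = random.Random(seed)
    grid = [[[] for _ in range(bins)] for _ in range(bins)]
    for _ in range(n_droplets):
        a, b = rng.random(), rng.random()  # concentrations in [0, 1)
        grid[int(a * bins)][int(b * bins)].append(hypothetical_reaction(a, b))
    # Average the outcomes in each bin to get one pixel of the map.
    return [[sum(c) / len(c) if c else None for c in row] for row in grid]

gridmap = droplet_map()
```

With ten thousand droplets and a 20x20 grid, each pixel averages roughly 25 droplets, which is why randomly generated concentrations can still yield a high-resolution, low-noise map.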

“It was difficult to fine-tune the device at first,” explains Dr Anthony Genot, a CNRS researcher at LIMMS. “We needed to generate thousands of droplets containing reagents within a precise range of concentrations to produce high resolution maps of the reactions we were studying. We expected that this would be challenging. But one unanticipated difficulty was immobilizing the droplets for the several days it took for some reactions to unfold. It took a lot of testing to create a glass chamber design that was airtight and firmly held the droplets in place.” Overall, it took nearly two years to fine-tune the device until the researchers could get their droplet experiment to run smoothly.

Seeing the new system producing results was revelatory. “You start with a screen full of randomly-colored dots, and then suddenly the computer rearranges them into a beautiful high-resolution map, revealing hidden information about the reaction dynamics. Seeing them all slide into place to produce something that had only ever been seen before through simulation was almost magical,” enthuses Rondelez.

“The map can tell us not only about the best conditions of biochemical reactions, it can also tell us about how the molecules behave in certain conditions. Using this map we’ve already found a molecular behavior that had been predicted theoretically, but had not been shown experimentally. With our technique we can explore how molecules talk to each other in test tube conditions. Ultimately, we hope to illuminate the intimate machinery of living molecular systems like ourselves,” says Rondelez.

Here’s a link to and a citation for the paper,

High-resolution mapping of bifurcations in nonlinear biochemical circuits by A. J. Genot, A. Baccouche, R. Sieskind, N. Aubert-Kato, N. Bredeche, J. F. Bartolo, V. Taly, T. Fujii, & Y. Rondelez. Nature Chemistry (2016)
doi:10.1038/nchem.2544 Published online 20 June 2016

This paper is behind a paywall.

Update on the International NanoCar race coming up in Autumn 2016

First off, the race seems to be adjusting its brand (it was billed as the International NanoCar Race in my Dec. 21, 2015 posting), from a May 20, 2016 news item on Nanowerk,

The first-ever international race of molecule-cars (Nanocar Race) will take place at the CEMES laboratory in Toulouse this fall [2016].

A May 9, 2016 notice on France’s Centre national de la recherche scientifique’s (CNRS) news website, which originated the news item, fills in a few more details,

Five teams are fine-tuning their cars—each made up of around a hundred atoms and measuring a few nanometers in length. They will be propelled by an electric current on a gold atom “race track.” We take you behind the scenes to see how these researcher-racers are preparing for the NanoCar Race.

About this video

Original title: The NanoCar Race

Production year: 2016

Length: 6 min 23

Director: Pierre de Parscau

Producer: CNRS Images

Speaker(s):

Christian Joachim
Centre d’Elaboration des Matériaux et d’Etudes Structurales

Gwénaël Rapenne

Corentin Durand

Pierre Abeilhou

Frank Eisenhut
Technical University of Dresden

You can find the video which is embedded in both the Nanowerk news item and here with the CNRS notice.

Spider webs inspire liquid wire

Courtesy University of Oxford

Usually, when science talk runs to spider webs the focus is on strength but this research from the UK and France is all about resilience. From a May 16, 2016 news item on phys.org,

Why doesn’t a spider’s web sag in the wind or catapult flies back out like a trampoline? The answer, according to new research by an international team of scientists, lies in the physics behind a ‘hybrid’ material produced by spiders for their webs.

Pulling on a sticky thread in a garden spider’s orb web and letting it snap back reveals that the thread never sags but always stays taut—even when stretched to many times its original length. This is because any loose thread is immediately spooled inside the tiny droplets of watery glue that coat and surround the core gossamer fibres of the web’s capture spiral.

This phenomenon is described in the journal PNAS by scientists from the University of Oxford, UK and the Université Pierre et Marie Curie, Paris, France.

The researchers studied the details of this ‘liquid wire’ technique in spiders’ webs and used it to create composite fibres in the laboratory which, just like the spider’s capture silk, extend like a solid and compress like a liquid. These novel insights may lead to new bio-inspired technology.

A May 16, 2016 University of Oxford press release (also on EurekAlert), which originated the news item, provides more detail,

Professor Fritz Vollrath of the Oxford Silk Group in the Department of Zoology at Oxford University said: ‘The thousands of tiny droplets of glue that cover the capture spiral of the spider’s orb web do much more than make the silk sticky and catch the fly. Surprisingly, each drop packs enough punch in its watery skins to reel in loose bits of thread. And this winching behaviour is used to excellent effect to keep the threads tight at all times, as we can all observe and test in the webs in our gardens.’

The novel properties observed and analysed by the scientists rely on a subtle balance between fibre elasticity and droplet surface tension. Importantly, the team was also able to recreate this technique in the laboratory using oil droplets on a plastic filament. And this artificial system behaved just like the spider’s natural winch silk, with spools of filament reeling and unreeling inside the oil droplets as the thread extended and contracted.
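As a rough order-of-magnitude check of that balance (all numbers below are illustrative assumptions, not values from the paper), one can compare the capillary force a droplet exerts at its meniscus with the Euler load needed to buckle the fibre inside the drop; spooling can occur when the first dwarfs the second:

```python
import math

# Illustrative parameters for a thin, soft fibre in an oily droplet
r     = 1e-6    # fibre radius, m
gamma = 0.03    # surface tension of the droplet liquid, N/m
E     = 1e7     # Young's modulus of a soft polymer fibre, Pa
L     = 1e-4    # free fibre length inside the droplet, m

# Capillary pulling force at the meniscus, roughly perimeter x tension
F_capillary = 2 * math.pi * r * gamma

# Euler buckling load of the fibre segment inside the drop
I = math.pi * r**4 / 4          # second moment of area of a round fibre
F_buckle = math.pi**2 * E * I / L**2

# For these values the capillary force exceeds the buckling threshold
# by more than an order of magnitude, so slack thread buckles and
# coils inside the drop, keeping the thread taut outside it.
spooling = F_capillary > F_buckle
```

Because the buckling load scales as the fourth power of the radius, only very fine fibres sit in this spooling regime, which is consistent with the droplets winching gossamer capture silk but not, say, a fishing line.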

Dr Hervé Elettro, the first author and a doctoral researcher at Institut Jean Le Rond D’Alembert, Université Pierre et Marie Curie, Paris, said: ‘Spider silk has been known to be an extraordinary material for around 40 years, but it continues to amaze us. While the web is simply a high-tech trap from the spider’s point of view, its properties have a huge amount to offer the worlds of materials, engineering and medicine.

‘Our bio-inspired hybrid threads could be manufactured from virtually any components. These new insights could lead to a wide range of applications, such as microfabrication of complex structures, reversible micro-motors, or self-tensioned stretchable systems.’

Here’s a link to and a citation for the paper,

In-drop capillary spooling of spider capture thread inspires hybrid fibers with mixed solid–liquid mechanical properties by Hervé Elettro, Sébastien Neukirch, Fritz Vollrath, and Arnaud Antkowiak. PNAS doi: 10.1073/pnas.1602451113

This paper appears to be open access.

The Leonardo Project and the master’s DNA (deoxyribonucleic acid)

I’ve never really understood the mania for digging up bodies of famous people in history and trying to ascertain how the person really died or what kind of diseases they may have had but the practice fascinates me. The latest famous person to be subjected to a forensic inquiry centuries after death is Leonardo da Vinci. A May 5, 2016 Human Evolution (journal) news release on EurekAlert provides details,

A team of eminent specialists from a variety of academic disciplines has coalesced around a goal of creating new insight into the life and genius of Leonardo da Vinci by means of authoritative new research and modern detective technologies, including DNA science.

The Leonardo Project is in pursuit of several possible physical connections to Leonardo, beaming radar, for example, at an ancient Italian church floor to help corroborate extensive research to pinpoint the likely location of the tomb of his father and other relatives. A collaborating scholar also recently announced the successful tracing of several likely DNA relatives of Leonardo living today in Italy (see endnotes).

If granted the necessary approvals, the Project will compare DNA from Leonardo’s relatives past and present with physical remnants — hair, bones, fingerprints and skin cells — associated with the Renaissance figure whose life marked the rebirth of Western civilization.

The Project’s objectives, motives, methods, and work to date are detailed in a special issue of the journal Human Evolution, published coincident with a meeting of the group hosted in Florence this week under the patronage of Eugenio Giani, President of the Tuscan Regional Council (Consiglio Regionale della Toscana).

The news release goes on to provide some context for the work,

Born in Vinci, Italy, Leonardo died in 1519, age 67, and was buried in Amboise, southwest of Paris. His creative imagination foresaw and described innovations hundreds of years before their invention, such as the helicopter and armored tank. His artistic legacy includes the iconic Mona Lisa and The Last Supper.

The idea behind the Project, founded in 2014, has inspired and united anthropologists, art historians, genealogists, microbiologists, and other experts from leading universities and institutes in France, Italy, Spain, Canada and the USA, including specialists from the J. Craig Venter Institute of California, which pioneered the sequencing of the human genome.

The work underway resembles, in its complexity, recent projects such as the successful search for the tomb of historic author Miguel de Cervantes and the identification of England’s King Richard III from remains exhumed from beneath a UK parking lot, fittingly re-interred in March 2015, five centuries after his death.

Like Richard, Leonardo was born in 1452, and was buried in a setting that underwent changes in subsequent years such that the exact location of the grave was lost.

If DNA and other analyses yield a definitive identification, conventional and computerized techniques might reconstruct the face of Leonardo from models of the skull.

In addition to Leonardo’s physical appearance, information potentially revealed from the work includes his ancestry and additional insight into his diet, state of health, personal habits, and places of residence.

According to the news release, the researchers have an agenda that goes beyond facial reconstruction and clues about  ancestry and diet,

Beyond those questions, and the verification of Leonardo’s “presumed remains” in the chapel of Saint-Hubert at the Château d’Amboise, the Project aims to develop a genetic profile extensive enough to understand better his abilities and visual acuity, which could provide insights into other individuals with remarkable qualities.

It may also make a lasting contribution to the art world, within which forgery is a multi-billion dollar industry, by advancing a technique for extracting and sequencing DNA from other centuries-old works of art, and associated methods of attribution.

Says Jesse Ausubel, Vice Chairman of the Richard Lounsbery Foundation, sponsor of the Project’s meetings in 2015 and 2016: “I think everyone in the group believes that Leonardo, who devoted himself to advancing art and science, who delighted in puzzles, and whose diverse talents and insights continue to enrich society five centuries after his passing, would welcome the initiative of this team — indeed would likely wish to lead it were he alive today.”

The researchers aim to have the work complete by 2019,

In the journal, group members underline the highly conservative, precautionary approach required at every phase of the Project, which they aim to conclude in 2019 to mark the 500th anniversary of Leonardo’s death.

For example, one objective is to verify whether fingerprints on Leonardo’s paintings, drawings, and notebooks can yield DNA consistent with that extracted from identified remains.

Early last year, Project collaborators from the International Institute for Humankind Studies in Florence opened discussions with the laboratory in that city where Leonardo’s Adoration of the Magi has been undergoing restoration for nearly two years, to explore the possibility of analyzing dust from the painting for possible DNA traces. A crucial question is whether traces of DNA remain or whether restoration measures and the passage of time have obliterated all evidence of Leonardo’s touch.

In preparation for such analysis, a team from the J. Craig Venter Institute and the University of Florence is examining privately owned paintings believed to be of comparable age to develop and calibrate techniques for DNA extraction and analysis. At this year’s meeting in Florence, the researchers also described a pioneering effort to analyze the microbiome of a painting thought to be about five centuries old.

If human DNA can one day be obtained from Leonardo’s work and sequenced, the genetic material could then be compared with genetic information from skeletal or other remains that may be exhumed in the future.

Here’s a list of the participating organizations (from the news release),

  • The Institut de Paléontologie Humaine, Paris
  • The International Institute for Humankind Studies, Florence
  • The Laboratory of Molecular Anthropology and Paleogenetics, Biology Department, University of Florence
  • Museo Ideale Leonardo da Vinci, in Vinci, Italy
  • J. Craig Venter Institute, La Jolla, California
  • Laboratory of Genetic Identification, University of Granada, Spain
  • The Rockefeller University, New York City

You can find the special issue of Human Evolution (HE Vol. 31, 2016 no. 3) here. The introductory essay is open access but the other articles are behind a paywall.

“One minus one equals zero” has been disproved

Two mirror-image molecules can be optically active according to an April 27, 2016 news item on ScienceDaily,

In 1848, Louis Pasteur showed that molecules that are mirror images of each other rotate light in exactly opposite directions. When mixed in equal amounts in solution, each cancels the effect of the other, and no rotation of light is observed. Now, a research team has demonstrated that a mixture of mirror-image molecules crystallized in the solid state can be optically active.

An April 26, 2016 Northwestern University news release (also on EurekAlert), which originated the news item, expands on the theme,

In the world of chemistry, one minus one almost always equals zero.

But new research from Northwestern University and the Centre National de la Recherche Scientifique (CNRS) in France shows that is not always the case. And the discovery will change scientists’ understanding of mirror-image molecules and their optical activity.

Now, Northwestern’s Kenneth R. Poeppelmeier and his research team are the first to demonstrate that a mixture of mirror-image molecules crystallized in the solid state can be optically active. The scientists first designed and made the materials and then measured their optical properties.

“In our case, one minus one does not always equal zero,” said first author Romain Gautier of CNRS. “This discovery will change scientists’ understanding of these molecules, and new applications could emerge from this observation.”

The property of rotating light, which has been known for more than two centuries to exist in many molecules, already has many applications in medicine, electronics, lasers and display devices.

“The phenomenon of optical activity can occur in a mixture of mirror-image molecules, and now we’ve measured it,” said Poeppelmeier, a Morrison Professor of Chemistry in the Weinberg College of Arts and Sciences. “This is an important experiment.”

Although this phenomenon has been predicted for a long time, no one — until now — had created such a racemic mixture (a combination of equal amounts of mirror-image molecules) and measured the optical activity.
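
The cancellation Pasteur observed in solution is simple arithmetic: equal concentrations of two enantiomers contribute equal and opposite rotations, so the observed rotation sums to zero. A minimal sketch in Python (the function and numbers are illustrative assumptions, not taken from the paper) shows why a solution-phase measurement of a racemate reads zero:

```python
# Toy model of observed optical rotation in solution.
# Equal and opposite contributions from the two enantiomers cancel exactly.

def net_rotation(specific_rotation, conc_R, conc_S, path_length_dm=1.0):
    """Observed rotation (degrees) for a mixture of R and S enantiomers.

    specific_rotation: [alpha] of the R enantiomer (the S enantiomer
    contributes with the opposite sign); concentrations in g/mL,
    path length in decimetres. All values here are hypothetical.
    """
    return specific_rotation * (conc_R - conc_S) * path_length_dm

# Pure R enantiomer: full rotation is observed
print(net_rotation(+66.4, conc_R=0.10, conc_S=0.0))

# Racemic (50:50) mixture: the rotations cancel exactly
print(net_rotation(+66.4, conc_R=0.05, conc_S=0.05))  # 0.0
```

The Northwestern/CNRS result is that this cancellation is not guaranteed in the solid state: when the mirror-image molecules are locked into particular crystal orientations, the mixture as a whole can still respond differently to left- and right-circularly polarized light.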

“How do you deliberately create these materials?” Poeppelmeier said. “That’s what excites me as a chemist.” He and Gautier painstakingly designed the material, using one of four possible solid-state arrangements known to exhibit circular dichroism (the ability to absorb differently the “rotated” light).

Next, Richard P. Van Duyne, a Morrison Professor of Chemistry at Northwestern, and graduate student Jordan M. Klingsporn measured the material’s optical activity, finding that mirror-image molecules are active when arranged in specific orientations in the solid state.

Here’s a link to and a citation for the paper,

Optical activity from racemates by Romain Gautier, Jordan M. Klingsporn, Richard P. Van Duyne, & Kenneth R. Poeppelmeier. Nature Materials (2016) doi:10.1038/nmat4628 Published online 18 April 2016

This paper is behind a paywall.

Not enough talk about nano risks?

It’s not often that a controversy amongst visual artists intersects with a story about carbon nanotubes, risk, and the roles that scientists play in public discourse.

Nano risks

Dr. Andrew Maynard, Director of the Risk Innovation Lab at Arizona State University, opens the discussion in a March 29, 2016 article for the appropriately named website, The Conversation (Note: Links have been removed),

Back in 2008, carbon nanotubes – exceptionally fine tubes made up of carbon atoms – were making headlines. A new study from the U.K. had just shown that, under some conditions, these long, slender fiber-like tubes could cause harm in mice in the same way that some asbestos fibers do.

As a collaborator in that study, I was at the time heavily involved in exploring the risks and benefits of novel nanoscale materials. Back then, there was intense interest in understanding how materials like this could be dangerous, and how they might be made safer.

Fast forward to a few weeks ago, when carbon nanotubes were in the news again, but for a very different reason. This time, there was outrage not over potential risks, but because the artist Anish Kapoor had been given exclusive rights to a carbon nanotube-based pigment – claimed to be one of the blackest pigments ever made.

The worries that even nanotech proponents had in the early 2000s about possible health and environmental risks – and their impact on investor and consumer confidence – seem to have evaporated.

I had covered the carbon nanotube-based coating in a March 14, 2016 posting here,

Surrey NanoSystems (UK) is billing their Vantablack as the world’s blackest coating and they now have a new product in that line according to a March 10, 2016 company press release (received via email),

A whole range of products can now take advantage of Vantablack’s astonishing characteristics, thanks to the development of a new spray version of the world’s blackest coating material. The new substance, Vantablack S-VIS, is easily applied at large scale to virtually any surface, whilst still delivering the proven performance of Vantablack.

Oddly, the company news release suggests Vantablack S-VIS could be used in consumer products while also recommending that it not be used in products where physical contact or abrasion is possible,

… Its ability to deceive the eye also opens up a range of design possibilities to enhance styling and appearance in luxury goods and jewellery [emphasis mine].

… “We are continuing to develop the technology, and the new sprayable version really does open up the possibility of applying super-black coatings in many more types of airborne or terrestrial applications. Possibilities include commercial products such as cameras, [emphasis mine] equipment requiring improved performance in a smaller form factor, as well as differentiating the look of products by means of the coating’s unique aesthetic appearance. It’s a major step forward compared with today’s commercial absorber coatings.”

The structured surface of Vantablack S-VIS means that it is not recommended for applications where it is subject to physical contact or abrasion. [emphasis mine] Ideally, it should be applied to surfaces that are protected, either within a packaged product, or behind a glass or other protective layer.

Presumably Surrey NanoSystems is looking at ways to make its Vantablack S-VIS capable of being used in products such as jewellery, cameras, and other consumer products where physical contact and abrasion are strong possibilities.

Andrew has pointed questions about using Vantablack S-VIS in new applications (from his March 29, 2016 article; Note: Links have been removed),

The original Vantablack was a specialty carbon nanotube coating designed for use in space, to reduce the amount of stray light entering space-based optical instruments. It was this far remove from any people that made Vantablack seem pretty safe. Whatever its toxicity, the chances of it getting into someone’s body were vanishingly small. It wasn’t nontoxic, but the risk of exposure was minuscule.

In contrast, Vantablack S-VIS is designed to be used where people might touch it, inhale it, or even (unintentionally) ingest it.

To be clear, Vantablack S-VIS is not comparable to asbestos – the carbon nanotubes it relies on are too short, and too tightly bound together to behave like needle-like asbestos fibers. Yet its combination of novelty, low density and high surface area, together with the possibility of human exposure, still raise serious risk questions.

For instance, as an expert in nanomaterial safety, I would want to know how readily the spray – or bits of material dislodged from surfaces – can be inhaled or otherwise get into the body; what these particles look like; what is known about how their size, shape, surface area, porosity and chemistry affect their ability to damage cells; whether they can act as “Trojan horses” and carry more toxic materials into the body; and what is known about what happens when they get out into the environment.

Risk and the roles that scientists play

Andrew makes his point and holds various groups to account (from his March 29, 2016 article; Note: Links have been removed),

… in the case of Vantablack S-VIS, there’s been a conspicuous absence of such nanotechnology safety experts in media coverage.

This lack of engagement isn’t too surprising – publicly commenting on emerging topics is something we rarely train, or even encourage, our scientists to do.

And yet, where technologies are being commercialized at the same time their safety is being researched, there’s a need for clear lines of communication between scientists, users, journalists and other influencers. Otherwise, how else are people to know what questions they should be asking, and where the answers might lie?

In 2008, initiatives existed such as those at the Center for Biological and Environmental Nanotechnology (CBEN) at Rice University and the Project on Emerging Nanotechnologies (PEN) at the Woodrow Wilson International Center for Scholars (where I served as science advisor) that took this role seriously. These and similar programs worked closely with journalists and others to ensure an informed public dialogue around the safe, responsible and beneficial uses of nanotechnology.

In 2016, there are no comparable programs, to my knowledge – both CBEN and PEN came to the end of their funding some years ago.

Some of the onus here lies with scientists themselves to make appropriate connections with developers, consumers and others. But to do this, they need the support of the institutions they work in, as well as the organizations who fund them. This is not a new idea – there is of course a long and ongoing debate about how to ensure academic research can benefit ordinary people.

Media and risk

As mainstream media such as newspapers and broadcast news continue to suffer losses in audience numbers, the situation vis-à-vis science journalism has changed considerably since 2008. Finding science information is more of a challenge, even for the interested.

As for those who might be interested but not actively looking, catching their attention is considerably more challenging. For example, in 1989 scientists claimed to have achieved ‘cold fusion’; there were television interviews (on the 60 Minutes TV programme, amongst others) and cover stories in Time and Newsweek, which you could find in the grocery checkout line. You didn’t have to look for the story; in fact, it was difficult to avoid. Sadly, the scientists had oversold and misrepresented their findings, and that too was extensively covered in mainstream media. The news cycle went on for months. Something similar happened in 2010 with ‘arsenic life’: there was much excitement, and then it became clear that the scientists had overstated and misrepresented their findings. That news cycle ran its course within three weeks or less, and most members of the public were unaware of it. Media saturation is no longer what it used to be.

Innovative outreach needs to be part of the discussion and perhaps the Vantablack S-VIS controversy amongst artists can be viewed through that lens.

Anish Kapoor and his exclusive rights to Vantablack

According to a Feb. 29, 2016 article by Henri Neuendorf for artnet news, there is some consternation regarding internationally known artist, Anish Kapoor and a deal he has made with Surrey Nanosystems, the makers of Vantablack in all its iterations (Note: Links have been removed),

Anish Kapoor provoked the fury of fellow artists by acquiring the exclusive rights to the blackest black in the world.

The Indian-born British artist has been working and experimenting with the “super black” paint since 2014 and has recently acquired exclusive rights to the pigment according to reports by the Daily Mail.

The artist clearly knows the value of this innovation for his work. “I’ve been working in this area for the last 30 years or so with all kinds of materials but conventional materials, and here’s one that does something completely different,” he said, adding “I’ve always been drawn to rather exotic materials.”

This description from his Wikipedia entry gives some idea of Kapoor’s stature (Note: Links have been removed),

Sir Anish Kapoor, CBE RA (Hindi: अनीश कपूर, Punjabi: ਅਨੀਸ਼ ਕਪੂਰ), (born 12 March 1954) is a British-Indian sculptor. Born in Bombay,[1][2] Kapoor has lived and worked in London since the early 1970s when he moved to study art, first at the Hornsey College of Art and later at the Chelsea School of Art and Design.

He represented Britain in the XLIV Venice Biennale in 1990, when he was awarded the Premio Duemila Prize. In 1991 he received the Turner Prize and in 2002 received the Unilever Commission for the Turbine Hall at Tate Modern. Notable public sculptures include Cloud Gate (colloquially known as “the Bean”) in Chicago’s Millennium Park; Sky Mirror, exhibited at the Rockefeller Center in New York City in 2006 and Kensington Gardens in London in 2010;[3] Temenos, at Middlehaven, Middlesbrough; Leviathan,[4] at the Grand Palais in Paris in 2011; and ArcelorMittal Orbit, commissioned as a permanent artwork for London’s Olympic Park and completed in 2012.[5]

Kapoor received a Knighthood in the 2013 Birthday Honours for services to visual arts. He was awarded an honorary doctorate degree from the University of Oxford in 2014.[6] [7] In 2012 he was awarded Padma Bhushan by Congress led Indian government which is India’s 3rd highest civilian award.[8]

Artists can be cutthroat but they can also be prankish. Take a look at this image of Kapoor and note the blue background,

Artist Anish Kapoor is known for the rich pigments he uses in his work. (Image: Andrew Winning/Reuters)

I don’t know why or when this image (used to illustrate Andrew’s essay) was taken so it may be coincidental but the background for the image brings to mind, Yves Klein and his International Klein Blue (IKB) pigment. From the IKB Wikipedia entry,

L’accord bleu (RE 10), 1960, mixed media piece by Yves Klein featuring IKB pigment on canvas and sponges. (Image: Jaredzimmerman (WMF), Stedelijk Museum Amsterdam Collection)

Here’s more from the IKB Wikipedia entry (Note: Links have been removed),

International Klein Blue (IKB) was developed by Yves Klein in collaboration with Edouard Adam, a Parisian art paint supplier whose shop is still in business on the Boulevard Edgar-Quinet in Montparnasse.[1] The uniqueness of IKB does not derive from the ultramarine pigment, but rather from the matte, synthetic resin binder in which the color is suspended, and which allows the pigment to maintain as much of its original qualities and intensity of color as possible.[citation needed] The synthetic resin used in the binder is a polyvinyl acetate developed and marketed at the time under the name Rhodopas M or M60A by the French pharmaceutical company Rhône-Poulenc.[2] Adam still sells the binder under the name “Médium Adam 25.”[1]

In May 1960, Klein deposited a Soleau envelope, registering the paint formula under the name International Klein Blue (IKB) at the Institut national de la propriété industrielle (INPI),[3] but he never patented IKB. Only valid under French law, a soleau enveloppe registers the date of invention, according to the depositor, prior to any legal patent application. The copy held by the INPI was destroyed in 1965. Klein’s own copy, which the INPI returned to him duly stamped is still extant.[4]

In short, it’s not the first time an artist has ‘owned’ a colour. Kapoor is not a performance artist as Klein was, but his sculptural work lends itself to spectacle and to stimulating public discourse. As to whether or not this is a prank, I cannot say, but it has stimulated a discourse ranging from artists and intellectual property to the risks of carbon nanotubes and the role scientists could play in discussions of emerging technologies.

Regardless of how it was intended, bravo to Kapoor.

More reading

Andrew’s March 29, 2016 article has also been reproduced on Nanowerk and Slate.

Jonathan Jones has written about Kapoor and the Vantablack controversy in a Feb. 29, 2016 article for The Guardian titled: Can an artist ever really own a colour?