Tag Archives: France

Phenomen: a future and emerging information technology project

A Sept. 19, 2016 news item on Nanowerk describes a new research project incorporating photonics, phononics, and radio frequency signal processing,

PHENOMEN is a groundbreaking project designed to harness the potential of combined phononics, photonics and radio-frequency (RF) electronic signals to lay the foundations of a new information technology. This new Project, funded through the highly competitive H2020 [the European Union’s Horizon 2020 science funding programme] FET [Future and Emerging Technologies]-Open call, joins the efforts of three leading research institutes, three internationally recognised universities and a high-tech SME. The Consortium members kicked off the project with a meeting on Friday September 16, 2016, at the Catalan Institute of Nanoscience and Nanotechnology (ICN2), coordinated by ICREA Research Prof Dr Clivia M. Sotomayor-Torres of the ICN2’s Phononic and Photonic Nanostructures (P2N) Group.

A Sept. 16, 2016 ICN2 press release, which originated the news item, provides more detail,

Most information is currently transported by electrical charge (electrons) and by light (photons). Phonons are the quanta of lattice vibrations with frequencies covering a wide range up to tens of THz and provide coupling to the surrounding environment. In PHENOMEN the core of the research will focus on phonon-based signal processing to enable on-chip synchronisation and the transfer of information between optical channels by phonons.

This ambitious prospect could serve as a future scalable platform for, e.g., hybrid information processing with phonons. To achieve it, PHENOMEN proposes to build the first practical optically-driven phonon sources and detectors, including the engineering of phonon lasers pumped by a continuous-wave optical source to deliver coherent phonons to the rest of the chip. It brings together interdisciplinary scientific and technology-oriented partners in early-stage research towards the development of a radically new technology.

The experimental implementation of phonons as information carriers in a chip is completely novel and of a clear foundational character. It deals with the interaction and manipulation of fundamental particles and their intrinsic dual wave-particle character. Thus, it is only possible with the participation of an interdisciplinary consortium, which will create knowledge in a synergetic fashion, adding value in the form of new theoretical tools, developing novel methods to manipulate coherent phonons with light, and building all-optical phononic circuits enabled by optomechanics.

The H2020 FET-Open call “Novel ideas for radically new technologies” aims to support the early stages of joint science and technology research for radically new future technological possibilities. The call is entirely non-prescriptive with regards to the nature or purpose of the technologies that are envisaged and thus targets mainly the unexpected. PHENOMEN is one of the 13 funded Research & Innovation Actions and went through a selection process with a success rate (1.4%) ten times smaller than that for an ERC grant. The retained proposals are expected to foster international collaboration in a multitude of disciplines such as robotics, nanotechnology, neuroscience, information science, biology, artificial intelligence or chemistry.

The Consortium

The PHENOMEN Consortium is made up of:

  • 3 leading research institutes:
  • 3 universities with an internationally recognised track-record in their respective areas of expertise:
  • 1 industrial partner:

2016 Nobel Chemistry Prize for molecular machines

Wednesday, Oct. 5, 2016 was the day three scientists received the Nobel Prize in Chemistry for their work on molecular machines, according to an Oct. 5, 2016 news item on phys.org,

Three scientists won the Nobel Prize in chemistry on Wednesday [Oct. 5, 2016] for developing the world’s smallest machines, 1,000 times thinner than a human hair but with the potential to revolutionize computer and energy systems.

Frenchman Jean-Pierre Sauvage, Scottish-born Fraser Stoddart and Dutch scientist Bernard “Ben” Feringa share the 8 million kronor ($930,000) prize for the “design and synthesis of molecular machines,” the Royal Swedish Academy of Sciences said.

Machines at the molecular level have taken chemistry to a new dimension and “will most likely be used in the development of things such as new materials, sensors and energy storage systems,” the academy said.

Practical applications are still far away—the academy said molecular motors are at the same stage that electrical motors were in the first half of the 19th century—but the potential is huge.

Dexter Johnson in an Oct. 5, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some insight into the matter (Note: A link has been removed),

In what seems to have come both as a shock to some of the recipients and a confirmation to all those who envision molecular nanotechnology as the true future of nanotechnology, Bernard Feringa, Jean-Pierre Sauvage, and Sir J. Fraser Stoddart have been awarded the 2016 Nobel Prize in Chemistry for their development of molecular machines.

The Nobel Prize was awarded to all three of the scientists based on their complementary work over nearly three decades. First, in 1983, Sauvage (currently at Strasbourg University in France) was able to link two ring-shaped molecules to form a chain. Then, eight years later, Stoddart, a professor at Northwestern University in Evanston, Ill., demonstrated that a molecular ring could turn on a thin molecular axle. Then, eight years after that, Feringa, a professor at the University of Groningen, in the Netherlands, built on Stoddart’s work and fabricated a molecular rotor blade that could spin continually in the same direction.

Speaking of the Nobel committee’s selection, Donna Nelson, a chemist and president of the American Chemical Society, told Scientific American: “I think this topic is going to be fabulous for science. When the Nobel Prize is given, it inspires a lot of interest in the topic by other researchers. It will also increase funding.” Nelson added that this line of research will be fascinating for kids. “They can visualize it, and imagine a nanocar. This comes at a great time, when we need to inspire the next generation of scientists.”

The Economist, which appears to be previewing an article about the 2016 Nobel prizes ahead of the print version, has this to say in its Oct. 8, 2016 article,

Bigger is not always better. Anyone who doubts that has only to look at the explosion of computing power which has marked the past half-century. This was made possible by continual shrinkage of the components computers are made from. That success has, in turn, inspired a search for other areas where shrinkage might also yield dividends.

One such, which has been poised delicately between hype and hope since the 1990s, is nanotechnology. What people mean by this term has varied over the years—to the extent that cynics might be forgiven for wondering if it is more than just a fancy rebranding of the word “chemistry”—but nanotechnology did originally have a fairly clear definition. It was the idea that machines with moving parts could be made on a molecular scale. And in recognition of this goal Sweden’s Royal Academy of Science this week decided to award this year’s Nobel prize for chemistry to three researchers, Jean-Pierre Sauvage, Sir Fraser Stoddart and Bernard Feringa, who have never lost sight of nanotechnology’s original objective.

Optimists talk of manufacturing molecule-sized machines ranging from drug-delivery devices to miniature computers. Pessimists recall that nanotechnology is a field that has been puffed up repeatedly by both researchers and investors, only to deflate in the face of practical difficulties.

There is, though, reason to hope it will work in the end. This is because, as is often the case with human inventions, Mother Nature has got there first. One way to think of living cells is as assemblies of nanotechnological machines. For example, the enzyme that produces adenosine triphosphate (ATP)—a molecule used in almost all living cells to fuel biochemical reactions—includes a spinning molecular machine rather like Dr Feringa’s invention. This works well. The ATP generators in a human body turn out so much of the stuff that over the course of a day they create almost a body-weight’s-worth of it. Do something equivalent commercially, and the hype around nanotechnology might prove itself justified.

Congratulations to the three winners!

Graphene Canada and its second annual conference

An Aug. 31, 2016 news item on Nanotechnology Now announces Canada’s second graphene-themed conference,

The 2nd edition of Graphene & 2D Materials Canada 2016 International Conference & Exhibition (www.graphenecanadaconf.com) will take place in Montreal (Canada): 18-20 October, 2016.

– An industrial forum with focus on Graphene Commercialization (Abalonyx, Alcereco Inc, AMO GmbH, Avanzare, AzTrong Inc, Bosch GmbH, China Innovation Alliance of the Graphene Industry (CGIA), Durham University & Applied Graphene Materials, Fujitsu Laboratories Ltd., Hanwha Techwin, Haydale, IDTechEx, North Carolina Central University & Chaowei Power Ltd, NTNU&CrayoNano, Phantoms Foundation, Southeast University, The Graphene Council, University of Siegen, University of Sunderland and University of Waterloo)
– Extensive thematic workshops in parallel (Materials & Devices Characterization, Chemistry, Biosensors & Energy and Electronic Devices)
– A significant exhibition (Abalonyx, Go Foundation, Grafoid, Group NanoXplore Inc., Raymor | Nanointegris and Suragus GmbH)

As I noted in my 2015 post about Graphene Canada and its conference, the group is organized in a rather interesting fashion and I see the tradition continues, i.e., the lead organizers seem to be situated in countries other than Canada. From the Aug. 31, 2016 news item on Nanotechnology Now,

Organisers: Phantoms Foundation [located in Spain] www.phantomsnet.net
Catalan Institute of Nanoscience and Nanotechnology – ICN2 (Spain) | CEMES/CNRS (France) | GO Foundation (Canada) | Grafoid Inc (Canada) | Graphene Labs – IIT (Italy) | McGill University (Canada) | Texas Instruments (USA) | Université Catholique de Louvain (Belgium) | Université de Montreal (Canada)

You can find the conference website here.

Improving the quality of sight in artificial retinas

Researchers at France’s Centre national de la recherche scientifique (CNRS) and elsewhere have taken a step toward improving the sight derived from artificial retinas, according to an Aug. 25, 2016 news item on Nanowerk (Note: A link has been removed),

A major therapeutic challenge, the retinal prostheses that have been under development during the past ten years can enable some blind subjects to perceive light signals, but the image thus restored is still far from being clear. By comparing in rodents the activity of the visual cortex generated artificially by implants against that produced by “natural sight”, scientists from CNRS, CEA [Commissariat à l’énergie atomique et aux énergies alternatives is the French Alternative Energies and Atomic Energy Commission], INSERM [Institut national de la santé et de la recherche médicale is the French National Institute of Health and Medical Research], AP-HM [Assistance Publique – Hôpitaux de Marseille] and Aix-Marseille Université identified two factors that limit the resolution of prostheses.

Based on these findings, they were able to improve the precision of prosthetic activation. These multidisciplinary efforts, published on 23 August 2016 in eLife (“Probing the functional impact of sub-retinal prosthesis”), thus open the way towards further advances in retinal prostheses that will enhance the quality of life of implanted patients.

An Aug. 24, 2016 CNRS press release, which originated the news item, expands on the theme,

A retinal prosthesis comprises three elements: a camera (inserted in the patient’s spectacles), an electronic microcircuit (which transforms data from the camera into an electrical signal) and a matrix of microscopic electrodes (implanted in the eye in contact with the retina). This prosthesis replaces the photoreceptor cells of the retina: like them, it converts visual information into electrical signals which are then transmitted to the brain via the optic nerve. It can treat blindness caused by a degeneration of retinal photoreceptors, on condition that the optic nerve has remained functional. Equipped with these implants, patients who were totally blind can recover visual perceptions in the form of light spots, or phosphenes. Unfortunately, at present, the light signals perceived are not clear enough to recognize faces, read or move about independently.

To understand the resolution limits of the image generated by the prosthesis, and to find ways of optimizing the system, the scientists carried out a large-scale experiment on rodents.  By combining their skills in ophthalmology and the physiology of vision, they compared the response of the visual system of rodents to both natural visual stimuli and those generated by the prosthesis.

Their work showed that the prosthesis activated the visual cortex of the rodent in the correct position and at ranges comparable to those obtained under natural conditions.  However, the extent of the activation was much too great, and its shape was much too elongated.  This deformation was due to two separate phenomena observed at the level of the electrode matrix. Firstly, the scientists observed excessive electrical diffusion: the thin layer of liquid situated between the electrode and the retina passively diffused the electrical stimulus to neighboring nerve cells. And secondly, they detected the unwanted activation of retinal fibers situated close to the cells targeted for stimulation.

Armed with these findings, the scientists were able to improve the properties of the interface between the prosthesis and retina, with the help of specialists in interface physics.  Together, they were able to generate less diffuse currents and significantly improve artificial activation, and hence the performance of the prosthesis.

This lengthy study, because of the range of parameters covered (to study the different positions, types and intensities of signals) and the surgical problems encountered (in inserting the implant and recording the images generated in the animal’s brain), has nevertheless opened the way towards making promising improvements to retinal prostheses for humans.

This work was carried out by scientists from the Institut de Neurosciences de la Timone (CNRS/AMU) and AP-HM, in collaboration with CEA-Leti and the Institut de la Vision (CNRS/Inserm/UPMC).

Artificial retinas (© F. Chavane & S. Roux): Activation (colored circles at the level of the visual cortex) of the visual system by prosthetic stimulation (in the middle, in red; the insert shows an image of an implanted prosthesis) is greater and more elongated than the activation achieved under natural stimulation (on the left, in yellow). Using a protocol to adapt stimulation (on the right, in green), the size and shape of the activation can be controlled and are more similar to natural visual activation (yellow).

Here’s a link to and a citation for the paper,

Probing the functional impact of sub-retinal prosthesis by Sébastien Roux, Frédéric Matonti, Florent Dupont, Louis Hoffart, Sylvain Takerkart, Serge Picaud, Pascale Pham, and Frédéric Chavane. eLife 2016;5:e12687 DOI: http://dx.doi.org/10.7554/eLife.12687 Published August 23, 2016

This paper appears to be open access.

New electrochromic material for ‘smart’ windows

Given that it’s summer, I seem to be increasingly obsessed with windows that help control the heat from the sun. So, this Aug. 22, 2016 news item on ScienceDaily hit my sweet spot,

Researchers in the Cockrell School of Engineering at The University of Texas at Austin have invented a new flexible smart window material that, when incorporated into windows, sunroofs, or even curved glass surfaces, will have the ability to control both heat and light from the sun. …

The advance by Delia Milliron, an associate professor in the McKetta Department of Chemical Engineering, and her team is a new low-temperature process for coating the new smart material onto plastic, which makes it easier and cheaper to apply than conventional coatings made directly on the glass itself. The team demonstrated a flexible electrochromic device, which means a small electric charge (about 4 volts) can lighten or darken the material and control the transmission of heat-producing, near-infrared radiation. Such smart windows are aimed at saving on cooling and heating bills for homes and businesses.

An Aug. 22, 2016 University of Texas at Austin news release (also on EurekAlert), which originated the news item, describes the international team behind this research and offers more details about the research itself,

The research team is an international collaboration, including scientists at the European Synchrotron Radiation Facility and CNRS in France, and Ikerbasque in Spain. Researchers at UT Austin’s College of Natural Sciences provided key theoretical work.

Milliron and her team’s low-temperature process generates a material with a unique nanostructure, which doubles the efficiency of the coloration process compared with a coating produced by a conventional high-temperature process. It can switch between clear and tinted more quickly, using less power.

The new electrochromic material, like its high-temperature processed counterpart, has an amorphous structure, meaning the atoms lack any long-range organization as would be found in a crystal. However, the new process yields a unique local arrangement of the atoms in a linear, chain-like structure. Whereas conventional amorphous materials produced at high temperature have a denser three-dimensionally bonded structure, the researchers’ new linearly structured material, made of chemically condensed niobium oxide, allows ions to flow in and out more freely. As a result, it is twice as energy efficient as the conventionally processed smart window material.

At the heart of the team’s study is their rare insight into the atomic-scale structure of the amorphous materials, whose disordered structures are difficult to characterize. Because there are few techniques for characterizing the atomic-scale structure well enough to understand properties, it has been difficult to engineer amorphous materials to enhance their performance.

“There’s relatively little insight into amorphous materials and how their properties are impacted by local structure,” Milliron said. “But, we were able to characterize with enough specificity what the local arrangement of the atoms is, so that it sheds light on the differences in properties in a rational way.”

Graeme Henkelman, a co-author on the paper and chemistry professor in UT Austin’s College of Natural Sciences, explains that determining the atomic structure for amorphous materials is far more difficult than for crystalline materials, which have an ordered structure. In this case, the researchers were able to use a combination of techniques and measurements to determine an atomic structure that is consistent in both experiment and theory.

“Such collaborative efforts that combine complementary techniques are, in my view, the key to the rational design of new materials,” Henkelman said.

Milliron believes the knowledge gained here could inspire deliberate engineering of amorphous materials for other applications such as supercapacitors that store and release electrical energy rapidly and efficiently.

The Milliron lab’s next challenge is to develop a flexible material using their low-temperature process that meets or exceeds the best performance of electrochromic materials made by conventional high-temperature processing.

“We want to see if we can marry the best performance with this new low-temperature processing strategy,” she said.

Here’s a link to and a citation for the paper,

Linear topology in amorphous metal oxide electrochromic networks obtained via low-temperature solution processing by Anna Llordés, Yang Wang, Alejandro Fernandez-Martinez, Penghao Xiao, Tom Lee, Agnieszka Poulain, Omid Zandi, Camila A. Saez Cabezas, Graeme Henkelman, & Delia J. Milliron. Nature Materials (2016)  doi:10.1038/nmat4734 Published online 22 August 2016

This paper is behind a paywall.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a calculator, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
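To make that mechanism concrete, here is a minimal sketch in Python with NumPy (illustrative only, not code from the article or the researchers): each artificial neurone weights its inputs, adds them up, and fires only if the result passes a threshold, and one layer’s firing pattern becomes the input to the next layer. All sizes, weights and thresholds below are arbitrary.

```python
import numpy as np

# One layer of threshold neurones: weighted sum per neurone, firing (1.0)
# only when the sum exceeds the threshold, otherwise staying silent (0.0).
def layer(inputs, weights, biases, threshold=0.0):
    summed = inputs @ weights + biases           # weighted and added up
    return (summed > threshold).astype(float)    # fire only above the threshold

rng = np.random.default_rng(1)
x = rng.random(4)                                      # toy input, e.g. 4 pixel values
w1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)   # first layer: 3 neurones
w2, b2 = rng.normal(size=(3, 2)), rng.normal(size=2)   # second layer: 2 neurones

h = layer(x, w1, b1)   # firing pattern of the first layer
y = layer(h, w2, b2)   # the next layer processes it with different weights
print(h, y)
```

In trained networks the hard threshold is replaced by a smooth activation function (a sigmoid or ReLU, for instance) so that the weights can be adjusted during learning, but the layer-to-layer flow is the same as in this toy version.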

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, the limits of computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short-Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
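To illustrate the looping idea, here is a minimal sketch in Python with NumPy (not the Lugano group’s code, and using a plain recurrent cell rather than a full LSTM to keep it short): a hidden state is carried from character to character, so by the time the network reaches ‘oat’ it still holds information about whether the word began with ‘b’ or ‘fl’. The weights are random and untrained; only the data flow is being shown.

```python
import numpy as np

chars = sorted(set("boatfl"))                                     # toy alphabet
to_vec = {c: np.eye(len(chars))[i] for i, c in enumerate(chars)}  # one-hot vector per character

rng = np.random.default_rng(2)
hidden = 8
W_in = rng.normal(scale=0.5, size=(len(chars), hidden))   # input -> hidden weights
W_rec = rng.normal(scale=0.5, size=(hidden, hidden))      # hidden -> hidden weights (the loop)

def encode(word):
    h = np.zeros(hidden)
    for c in word:                                   # feed the characters one at a time
        h = np.tanh(to_vec[c] @ W_in + h @ W_rec)    # new state depends on the old state
    return h

# Same 'oat' ending, different beginnings: the final states differ because the
# loop has retained information about the earlier characters.
print(np.allclose(encode("boat"), encode("float")))   # False
```

An LSTM adds gates that decide what the hidden state should keep or forget, which is what lets it retain information over much longer sequences than this plain cell can.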

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

“Brute force” technique for biomolecular information processing

The research is being announced by the University of Tokyo but there is definitely a French flavour to this project. From a June 20, 2016 news item on ScienceDaily,

A Franco-Japanese research group at the University of Tokyo has developed a new “brute force” technique to test thousands of biochemical reactions at once and quickly home in on the range of conditions where they work best. Until now, optimizing such biomolecular systems, which can be applied for example to diagnostics, would have required months or years of trial and error experiments, but with this new technique that could be shortened to days.

A June 20, 2016 University of Tokyo news release on EurekAlert, which originated the news item, describes the project in more detail,

“We are interested in programming complex biochemical systems so that they can process information in a way that is analogous to electronic devices. If you could obtain a high-resolution map of all possible combinations of reaction conditions and their corresponding outcomes, the development of such reactions for specific purposes like diagnostic tests would be quicker than it is today,” explains Centre National de la Recherche Scientifique (CNRS) researcher Yannick Rondelez at the Institute of Industrial Science (IIS) [located at the University of Tokyo].

“Currently researchers use a combination of computer simulations and painstaking experiments. However, while simulations can test millions of conditions, they are based on assumptions about how molecules behave and may not reflect the full detail of reality. On the other hand, testing all possible conditions, even for a relatively simple design, is a daunting job.”

Rondelez and his colleagues at the Laboratory for Integrated Micro-Mechanical Systems (LIMMS), a 20-year collaboration between the IIS and the French CNRS, demonstrated a system that can test ten thousand different biochemical reaction conditions at once. Working with the IIS Applied Microfluidic Laboratory of Professor Teruo Fujii, they developed a platform to generate a myriad of micrometer-sized droplets containing random concentrations of reagents and then sandwich a single layer of them between glass slides. Fluorescent markers combined with the reagents are automatically read by a microscope to determine the precise concentrations in each droplet and also observe how the reaction proceeds.
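To give a sense of what reading ten thousand droplets can yield, here is a small illustrative sketch in Python with NumPy (made-up data and variable names, not the team’s actual analysis pipeline): each droplet contributes two measured reagent concentrations plus an observed outcome, and binning the droplets onto a concentration grid turns those scattered measurements into the kind of high-resolution map of reaction conditions described above.

```python
import numpy as np

# Hypothetical per-droplet readouts: two reagent concentrations (inferred from
# fluorescent markers) and one measured outcome per droplet.
rng = np.random.default_rng(0)
n_droplets = 10_000
c1 = rng.uniform(0.0, 100.0, n_droplets)   # reagent 1 concentration (arbitrary units)
c2 = rng.uniform(0.0, 100.0, n_droplets)   # reagent 2 concentration (arbitrary units)
outcome = np.sin(c1 / 20.0) * np.exp(-c2 / 60.0) + 0.05 * rng.normal(size=n_droplets)  # stand-in signal

# Bin the droplets onto a concentration grid and average the outcome in each bin:
# a simple way to turn scattered droplet measurements into a reaction map.
counts, xedges, yedges = np.histogram2d(c1, c2, bins=50)
sums, _, _ = np.histogram2d(c1, c2, bins=[xedges, yedges], weights=outcome)
with np.errstate(invalid="ignore"):
    outcome_map = sums / counts   # NaN wherever a bin happened to receive no droplets

print(outcome_map.shape)          # a 50 x 50 grid of mean outcomes
```

The attraction of the random-droplet approach is that the experimenters do not have to prepare each condition deliberately; the concentrations are measured after the fact, and the grid fills itself in.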

“It was difficult to fine-tune the device at first,” explains Dr Anthony Genot, a CNRS researcher at LIMMS. “We needed to generate thousands of droplets containing reagents within a precise range of concentrations to produce high-resolution maps of the reactions we were studying. We expected that this would be challenging. But one unanticipated difficulty was immobilizing the droplets for the several days it took for some reactions to unfold. It took a lot of testing to create a glass chamber design that was airtight and firmly held the droplets in place.” Overall, it took nearly two years to fine-tune the device until the researchers could get their droplet experiment to run smoothly.

Seeing the new system producing results was revelatory. “You start with a screen full of randomly-colored dots, and then suddenly the computer rearranges them into a beautiful high-resolution map, revealing hidden information about the reaction dynamics. Seeing them all slide into place to produce something that had only ever been seen before through simulation was almost magical,” enthuses Rondelez.

“The map can tell us not only about the best conditions of biochemical reactions, it can also tell us about how the molecules behave in certain conditions. Using this map we’ve already found a molecular behavior that had been predicted theoretically, but had not been shown experimentally. With our technique we can explore how molecules talk to each other in test tube conditions. Ultimately, we hope to illuminate the intimate machinery of living molecular systems like ourselves,” says Rondelez.

Here’s a link to and a citation for the paper,

High-resolution mapping of bifurcations in nonlinear biochemical circuits by A. J. Genot, A. Baccouche, R. Sieskind, N. Aubert-Kato, N. Bredeche, J. F. Bartolo, V. Taly, T. Fujii, & Y. Rondelez. Nature Chemistry (2016) doi:10.1038/nchem.2544 Published online 20 June 2016

This paper is behind a paywall.

Update on the International NanoCar race coming up in Autumn 2016

First off, the race seems to be adjusting its brand (it was billed as the International NanoCar Race in my Dec. 21, 2015 posting). From a May 20, 2016 news item on Nanowerk,

The first-ever international race of molecule-cars (Nanocar Race) will take place at the CEMES laboratory in Toulouse this fall [2016].

A May 9, 2016 notice on the news website of France’s Centre national de la recherche scientifique (CNRS), which originated the news item, fills in a few more details,

Five teams are fine-tuning their cars—each made up of around a hundred atoms and measuring a few nanometers in length. They will be propelled by an electric current on a gold atom “race track.” We take you behind the scenes to see how these researcher-racers are preparing for the NanoCar Race.

About this video

Original title: The NanoCar Race

Production year: 2016

Length: 6 min 23

Director: Pierre de Parscau

Producer: CNRS Images

Speaker(s):

Christian Joachim
Centre d’Elaboration des Matériaux et d’Etudes Structurales

Gwénaël Rapenne

Corentin Durand

Pierre Abeilhou

Frank Eisenhut
Technical University of Dresden

You can find the video embedded in both the Nanowerk news item and the CNRS notice.

Spider webs inspire liquid wire

Courtesy University of Oxford

Usually, when science talk runs to spider webs the focus is on strength but this research from the UK and France is all about resilience. From a May 16, 2016 news item on phys.org,

Why doesn’t a spider’s web sag in the wind or catapult flies back out like a trampoline? The answer, according to new research by an international team of scientists, lies in the physics behind a ‘hybrid’ material produced by spiders for their webs.

Pulling on a sticky thread in a garden spider’s orb web and letting it snap back reveals that the thread never sags but always stays taut—even when stretched to many times its original length. This is because any loose thread is immediately spooled inside the tiny droplets of watery glue that coat and surround the core gossamer fibres of the web’s capture spiral.

This phenomenon is described in the journal PNAS by scientists from the University of Oxford, UK and the Université Pierre et Marie Curie, Paris, France.

The researchers studied the details of this ‘liquid wire’ technique in spiders’ webs and used it to create composite fibres in the laboratory which, just like the spider’s capture silk, extend like a solid and compress like a liquid. These novel insights may lead to new bio-inspired technology.

A May 16, 2016 University of Oxford press release (also on EurekAlert), which originated the news item, provides more detail,

Professor Fritz Vollrath of the Oxford Silk Group in the Department of Zoology at Oxford University said: ‘The thousands of tiny droplets of glue that cover the capture spiral of the spider’s orb web do much more than make the silk sticky and catch the fly. Surprisingly, each drop packs enough punch in its watery skins to reel in loose bits of thread. And this winching behaviour is used to excellent effect to keep the threads tight at all times, as we can all observe and test in the webs in our gardens.’

The novel properties observed and analysed by the scientists rely on a subtle balance between fibre elasticity and droplet surface tension. Importantly, the team was also able to recreate this technique in the laboratory using oil droplets on a plastic filament. And this artificial system behaved just like the spider’s natural winch silk, with spools of filament reeling and unreeling inside the oil droplets as the thread extended and contracted.
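As a very rough back-of-the-envelope illustration of that balance (not the criterion derived in the paper, and using made-up but plausible numbers for a fine, compliant fibre), one can compare the compressive capillary force a droplet exerts on a thread with the Euler buckling load of the in-drop segment; only when the capillary force wins can the thread buckle and be spooled inside the drop.

```python
import numpy as np

# Illustrative values only (assumptions, not measurements from the paper):
gamma = 0.02   # surface tension of the droplet liquid, N/m
d = 2e-6       # fibre diameter, m (micron-scale)
E = 2e7        # Young's modulus of a soft polymer fibre, Pa
L = 1e-4       # length of fibre inside the drop ~ droplet size, m

I = np.pi * d**4 / 64                    # second moment of area of a round fibre
F_capillary = np.pi * gamma * d          # wetting-force scale pulling the fibre into the drop
F_buckling = np.pi**2 * E * I / L**2     # Euler buckling load of the in-drop segment

print(f"capillary ~ {F_capillary*1e9:.0f} nN, buckling ~ {F_buckling*1e9:.0f} nN")
print("spooling plausible" if F_capillary > F_buckling else "fibre stays taut and straight")
```

Because the buckling load grows with the fourth power of the fibre diameter while the capillary force grows only linearly with it, this kind of spooling is confined to very fine, soft fibres, which is consistent with the droplets-on-a-filament system described above.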

Dr Hervé Elettro, the first author and a doctoral researcher at Institut Jean Le Rond D’Alembert, Université Pierre et Marie Curie, Paris, said: ‘Spider silk has been known to be an extraordinary material for around 40 years, but it continues to amaze us. While the web is simply a high-tech trap from the spider’s point of view, its properties have a huge amount to offer the worlds of materials, engineering and medicine.

‘Our bio-inspired hybrid threads could be manufactured from virtually any components. These new insights could lead to a wide range of applications, such as microfabrication of complex structures, reversible micro-motors, or self-tensioned stretchable systems.’

Here’s a link to and a citation for the paper,

In-drop capillary spooling of spider capture thread inspires hybrid fibers with mixed solid–liquid mechanical properties by Hervé Elettro, Sébastien Neukirch, Fritz Vollrath, and Arnaud Antkowiak. PNAS doi: 10.1073/pnas.1602451113

This paper appears to be open access.

The Leonardo Project and the master’s DNA (deoxyribonucleic acid)

I’ve never really understood the mania for digging up the bodies of famous people in history and trying to ascertain how they really died or what kind of diseases they may have had, but the practice fascinates me. The latest famous person to be subjected to a forensic inquiry centuries after death is Leonardo da Vinci. A May 5, 2016 Human Evolution (journal) news release on EurekAlert provides details,

A team of eminent specialists from a variety of academic disciplines has coalesced around a goal of creating new insight into the life and genius of Leonardo da Vinci by means of authoritative new research and modern detective technologies, including DNA science.

The Leonardo Project is in pursuit of several possible physical connections to Leonardo, beaming radar, for example, at an ancient Italian church floor to help corroborate extensive research to pinpoint the likely location of the tomb of his father and other relatives. A collaborating scholar also recently announced the successful tracing of several likely DNA relatives of Leonardo living today in Italy (see endnotes).

If granted the necessary approvals, the Project will compare DNA from Leonardo’s relatives past and present with physical remnants — hair, bones, fingerprints and skin cells — associated with the Renaissance figure whose life marked the rebirth of Western civilization.

The Project’s objectives, motives, methods, and work to date are detailed in a special issue of the journal Human Evolution, published coincident with a meeting of the group hosted in Florence this week under the patronage of Eugenio Giani, President of the Tuscan Regional Council (Consiglio Regionale della Toscana).

The news release goes on to provide some context for the work,

Born in Vinci, Italy, Leonardo died in 1519, age 67, and was buried in Amboise, southwest of Paris. His creative imagination foresaw and described innovations hundreds of years before their invention, such as the helicopter and armored tank. His artistic legacy includes the iconic Mona Lisa and The Last Supper.

The idea behind the Project, founded in 2014, has inspired and united anthropologists, art historians, genealogists, microbiologists, and other experts from leading universities and institutes in France, Italy, Spain, Canada and the USA, including specialists from the J. Craig Venter Institute of California, which pioneered the sequencing of the human genome.

The work underway resembles in complexity recent projects such as the successful search for the tomb of historic author Miguel de Cervantes and the identification of England’s King Richard III from remains exhumed from beneath a UK parking lot, fittingly re-interred in March 2015, 500 years after his death.

Like Richard, Leonardo was born in 1452, and was buried in a setting that underwent changes in subsequent years such that the exact location of the grave was lost.

If DNA and other analyses yield a definitive identification, conventional and computerized techniques might reconstruct the face of Leonardo from models of the skull.

In addition to Leonardo’s physical appearance, information potentially revealed from the work includes his ancestry and additional insight into his diet, state of health, personal habits, and places of residence.

According to the news release, the researchers have an agenda that goes beyond facial reconstruction and clues about ancestry and diet,

Beyond those questions, and the verification of Leonardo’s “presumed remains” in the chapel of Saint-Hubert at the Château d’Amboise, the Project aims to develop a genetic profile extensive enough to understand better his abilities and visual acuity, which could provide insights into other individuals with remarkable qualities.

It may also make a lasting contribution to the art world, within which forgery is a multi-billion dollar industry, by advancing a technique for extracting and sequencing DNA from other centuries-old works of art, and associated methods of attribution.

Says Jesse Ausubel, Vice Chairman of the Richard Lounsbery Foundation, sponsor of the Project’s meetings in 2015 and 2016: “I think everyone in the group believes that Leonardo, who devoted himself to advancing art and science, who delighted in puzzles, and whose diverse talents and insights continue to enrich society five centuries after his passing, would welcome the initiative of this team — indeed would likely wish to lead it were he alive today.”

The researchers aim to have the work complete by 2019,

In the journal, group members underline the highly conservative, precautionary approach required at every phase of the Project, which they aim to conclude in 2019 to mark the 500th anniversary of Leonardo’s death.

For example, one objective is to verify whether fingerprints on Leonardo’s paintings, drawings, and notebooks can yield DNA consistent with that extracted from identified remains.

Early last year, Project collaborators from the International Institute for Humankind Studies in Florence opened discussions with the laboratory in that city where Leonardo’s Adoration of the Magi has been undergoing restoration for nearly two years, to explore the possibility of analyzing dust from the painting for possible DNA traces. A crucial question is whether traces of DNA remain or whether restoration measures and the passage of time have obliterated all evidence of Leonardo’s touch.

In preparation for such analysis, a team from the J. Craig Venter Institute and the University of Florence is examining privately owned paintings believed to be of comparable age to develop and calibrate techniques for DNA extraction and analysis. At this year’s meeting in Florence, the researchers also described a pioneering effort to analyze the microbiome of a painting thought to be about five centuries old.

If human DNA can one day be obtained from Leonardo’s work and sequenced, the genetic material could then be compared with genetic information from skeletal or other remains that may be exhumed in the future.

Here’s a list of the participating organizations (from the news release),

  • The Institut de Paléontologie Humaine, Paris
  • The International Institute for Humankind Studies, Florence
  • The Laboratory of Molecular Anthropology and Paleogenetics, Biology Department, University of Florence
  • Museo Ideale Leonardo da Vinci, in Vinci, Italy
  • J. Craig Venter Institute, La Jolla, California
  • Laboratory of Genetic Identification, University of Granada, Spain
  • The Rockefeller University, New York City

You can find the special issue of Human Evolution (HE Vol. 31, 2016 no. 3) here. The introductory essay is open access but the other articles are behind a paywall.