Tag Archives: France

Testing ‘smart’ antibacterial surfaces and eating haute cuisine in space

Housekeeping in space, eh? This seems to be a French initiative. From a Nov. 15, 2016 news item on Nanowerk,

Leti [Laboratoire d’électronique des technologies de l’information (LETI)], an institute of CEA [French Alternative Energies and Atomic Energy Commission or Commissariat a l’Energie Atomique (CEA)] Tech, and three French partners are collaborating in a “house-cleaning” project aboard the International Space Station that will investigate antibacterial properties of new materials in a zero-gravity environment to see if they can improve and simplify cleaning inside spacecraft.

The Matiss experiment, as part of the Proxima Mission sponsored by France’s CNES space agency [Centre national d’études spatiales (CNES); National Centre for Space Studies (CNES)], is based on four identical plaques that European Space Agency (ESA) astronaut Thomas Pesquet, the 10th French citizen to go into space, will take with him and install when he joins the space station in November for a six-month mission. The plaques will be in the European Columbus laboratory in the space station for at least three months, and Pesquet will bring them back to earth for analysis at the conclusion of his mission.

A November 15, 2016 CEA-LETI press release on Business Wire (you may also download it from here), which originated the news item, describes the proposed experiments in more detail,

Leti, in collaboration with the ENS de Lyon, CNRS, the French company Saint-Gobain and CNES, selected five advanced materials that could stop bacteria from settling and growing on “smart” surfaces. A sixth material, made of glass, will be used as control material.

The experiment will test the new smart surfaces in a gravity-free, enclosed environment. These surfaces are called “smart” because of their ability to provide an appropriate response to a given stimulus. For example, they may repel bacteria, prevent them from growing on the surface, or create their own biofilms that protect them from the bacteria.

The materials are a mix of advanced technology – from self-assembly monolayers and green polymers to ceramic polymers and water-repellent hybrid silica. By responding protectively to air-borne bacteria they become easier to clean and more hygienic. The experiment will determine which one is most effective and could lead to antibacterial surfaces on elevator buttons and bars in mass-transit cars, for example.

“Leveraging its unique chemistry platform, Leti has been developing gas, liquid and supercritical-phase-collective processes of surface functionalization for more than 10 years,” said Guillaume Nonglaton, Leti’s project manager for surface chemistry for biology and health-care applications. “Three Leti-developed surfaces will be part of the space-station experiment: a fluorinated thin layer, an organic silica and a biocompatible polymer. They were chosen for their hydrophobicity, or lack of attraction properties, their level of reproducibility and their rapid integration within Pesquet’s six-month mission.”

Now, for Haute Cuisine

Pesquet is bringing meals from top French chefs Alain Ducasse and Thierry Marx for delectation. The menu includes beef tongue with truffled foie gras and duck breast confit. Here’s more from a Nov. 17, 2016 article by Thibault Marchand (Agence France Presse) on phys.org,

“We will have food prepared by a Michelin-starred chef at the station. We have food for the big feasts: for Christmas, New Year’s and birthdays. We’ll have two birthdays, mine and Peggy’s,” said the Frenchman, who is also taking a saxophone up with him.

French space rookie Thomas Pesquet, 38, will lift off from the Baikonur cosmodrome in Kazakhstan with veteran US and Russian colleagues Peggy Whitson and Oleg Novitsky, for a six-month mission to the ISS.

Bon appétit! By the way, this is not the first time astronauts have been treated to haute cuisine (see a Dec. 2, 2006 article on the BBC [British Broadcasting Corporation] website.)

The launch

Mark Garcia’s Nov. 17, 2016 posting on one of the NASA (US National Aeronautics and Space Administration) blogs describes this latest launch into space,

The Soyuz MS-03 launched from the Baikonur Cosmodrome in Kazakhstan to the International Space Station at 3:20 p.m. EST Thursday, Nov. 17 (2:20 a.m. Baikonur time, Nov. 18). At the time of launch, the space station was flying about 250 miles over the south Atlantic east of Argentina. NASA astronaut Peggy Whitson, Oleg Novitskiy of Roscosmos and Thomas Pesquet of ESA (European Space Agency) are now safely in orbit.

The trio will orbit the Earth for approximately two days before docking to the space station’s Rassvet module at 5:01 p.m. on Saturday, Nov. 19. NASA TV coverage of the docking will begin at 4:15 p.m. Saturday.

Garcia’s post gives you details about how to access more information about the mission. The European Space Agency also offers more information as does Thomas Pesquet on his website.

A computer that intuitively predicts a molecule’s chemical properties

First, we have emotional artificial intelligence from MIT (Massachusetts Institute of Technology) with their Kismet [emotive AI] project and now we have intuitive computers according to an Oct. 14, 2016 news item on Nanowerk,

Scientists from Moscow Institute of Physics and Technology (MIPT)’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases together with Inria research center, Grenoble, France have developed a software package called Knodle to determine an atom’s hybridization, bond orders and functional groups’ annotation in molecules. The program streamlines one of the stages of developing new drugs.

An Oct. 14, 2016 Moscow Institute of Physics and Technology press release (also on EurekAlert), which originated the news item, expands on the theme,

Imagine that you were to develop a new drug. Designing a drug with predetermined properties is called drug-design. Once a drug has entered the human body, it needs to take effect on the cause of a disease. On a molecular level this is a malfunction of some proteins and their encoding genes. In drug-design these are called targets. If a drug is antiviral, it must somehow prevent the incorporation of viral DNA into human DNA. In this case the target is viral protein. The structure of the incorporating protein is known, and we also even know which area is the most important – the active site. If we insert a molecular “plug” then the viral protein will not be able to incorporate itself into the human genome and the virus will die. It boils down to this: you find the “plug” – you have your drug.

But how can we find the molecules required? Researchers use an enormous database of substances for this. There are special programs capable of finding a needle in a haystack; they use quantum chemistry approximations to predict the place and force of attraction between a molecular “plug” and a protein. However, databases only store the shape of a substance; information about atom and bond states is also needed for an accurate prediction. Determining these states is what Knodle does. With the help of the new technology, the search area can be reduced from hundreds of thousands to just a hundred. These one hundred can then be tested to find drugs such as Raltegravir – which has been actively used in HIV treatment since 2011.
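The winnowing step described above can be sketched in a few lines. Everything below is illustrative: the molecule identifiers and random scores are stand-ins for the quantum-chemistry affinity estimates the press release mentions, not Knodle’s actual data or code.

```python
# Hedged sketch of virtual screening: rank a large library of candidate
# molecules by a predicted binding score and keep only the top ~100 for
# laboratory testing. Scores here are random placeholders for real
# quantum-chemistry affinity estimates.
import random

random.seed(0)

# Hypothetical library: molecule id -> predicted binding affinity (higher = better)
library = {f"mol_{i}": random.random() for i in range(100_000)}

# Keep only the hundred strongest predicted binders
top_candidates = sorted(library, key=library.get, reverse=True)[:100]

print(len(top_candidates))  # 100 molecules left to test in the lab
```

The point is simply the reduction in scale: a cheap computational filter turns an intractable experimental problem (test 100,000 compounds) into a feasible one (test 100).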

From science lessons at school everyone is used to seeing organic substances as letters with sticks (substance structure), knowing that in actual fact there are no sticks. Every stick is a bond between electrons which obeys the laws of quantum chemistry. In the case of one simple molecule, like the one in the diagram [diagram follows], the experienced chemist intuitively knows the hybridizations of every atom (the number of neighboring atoms which it is connected to) and after a few hours looking at reference books, he or she can reestablish all the bonds. They can do this because they have seen hundreds and hundreds of similar substances and know that if oxygen is “sticking out like this”, it almost certainly has a double bond. In their research, Maria Kadukova, a MIPT student, and Sergei Grudinin, a researcher from Inria research center located in Grenoble, France, decided to pass on this intuition to a computer by using machine learning.

Compare “A solid hollow object with a handle, opening at the top and an elongation at the side, at the end of which there is another opening” and “A vessel for the preparation of tea”. Both describe a teapot rather well, but the latter is simpler and more believable. The same is true for machine learning: the best algorithm for learning is the simplest. This is why the researchers chose to use a nonlinear support vector machine (SVM), a method which has proven itself in recognizing handwritten text and images. On the input it was given the positions of neighboring atoms, and on the output it returned the hybridization.

Good learning needs a lot of examples, and the scientists provided these using 7,605 substances with known structures and atom states. “This is the key advantage of the program we have developed, learning from a larger database gives better predictions. Knodle is now one step ahead of similar programs: it has a margin of error of 3.9%, while for the closest competitor this figure is 4.7%”, explains Maria Kadukova. And that is not the only benefit. The software package can easily be modified for a specific problem. For example, Knodle does not currently work with substances containing metals, because those kinds of substances are rather rare. But if it turns out that a drug for Alzheimer’s is much more effective if it contains a metal, the only thing needed to adapt the program is a database of metallic substances. We are now left to wonder what new drug will be found to treat a previously incurable disease.
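The idea of learning hybridization from neighbour geometry can be shown with a deliberately tiny example. To keep the sketch dependency-free, a one-nearest-neighbour rule stands in for Knodle’s nonlinear SVM, and the three training points (neighbour count plus an idealized bond angle) are textbook values, not Knodle’s training set.

```python
# Toy illustration (not Knodle's actual method or data): predict a carbon
# atom's hybridization from the number of bonded neighbours and the mean
# bond angle. A 1-nearest-neighbour rule stands in for the nonlinear SVM
# used by the real program.
import math

# Hypothetical training examples: (neighbour_count, mean_bond_angle_deg) -> label
TRAIN = [
    ((4, 109.5), "sp3"),  # tetrahedral carbon
    ((3, 120.0), "sp2"),  # trigonal planar carbon
    ((2, 180.0), "sp"),   # linear carbon
]

def predict(features):
    """Return the label of the closest training example (scaled Euclidean)."""
    def dist(a, b):
        # Divide the angle difference by 30 so one neighbour counts roughly
        # as much as a 30-degree angle change.
        return math.hypot(a[0] - b[0], (a[1] - b[1]) / 30.0)
    return min(TRAIN, key=lambda ex: dist(ex[0], features))[1]

print(predict((4, 110.2)))  # a slightly distorted tetrahedral carbon -> sp3
print(predict((3, 119.0)))  # a near-planar carbon -> sp2
```

A real perceiver must of course cope with noisy geometries, heteroatoms and ambiguous cases, which is exactly where training on 7,605 known structures with a nonlinear classifier pays off over a hand-written rule like this one.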

Scientists from MIPT’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases together with Inria research center, Grenoble, France have developed a software package called Knodle to determine an atom’s hybridization, bond orders and functional groups’ annotation in molecules. The program streamlines one of the stages of developing new drugs. Credit: MIPT Press Office

Here’s a link to and a citation for the paper,

Knodle: A Support Vector Machines-Based Automatic Perception of Organic Molecules from 3D Coordinates by Maria Kadukova and Sergei Grudinin. J. Chem. Inf. Model., 2016, 56 (8), pp 1410–1419 DOI: 10.1021/acs.jcim.5b00512 Publication Date (Web): July 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Phenomen: a future and emerging information technology project

A Sept. 19, 2016 news item on Nanowerk describes a new research project incorporating photonics, phononics, and radio frequency signal processing,

PHENOMEN is a groundbreaking project designed to harness the potential of combined phononics, photonics and radio-frequency (RF) electronic signals to lay the foundations of a new information technology. This new project, funded through the highly competitive H2020 [the European Union’s Horizon 2020 science funding programme] FET [Future and Emerging Technologies]-Open call, joins the efforts of three leading research institutes, three internationally recognised universities and a high-tech SME. The Consortium members kicked off the project with a meeting on Friday, September 16, 2016, at the Catalan Institute of Nanoscience and Nanotechnology (ICN2), coordinated by ICREA Research Prof Dr Clivia M. Sotomayor-Torres of the ICN2’s Phononic and Photonic Nanostructures (P2N) Group.

A Sept. 16, 2016 ICN2 press release, which originated the news item, provides more detail,

Most information is currently transported by electrical charge (electrons) and by light (photons). Phonons are the quanta of lattice vibrations with frequencies covering a wide range up to tens of THz and provide coupling to the surrounding environment. In PHENOMEN the core of the research will be focused on phonon-based signal processing to enable on-chip synchronisation and transfer information carried between optical channels by phonons.

This ambitious prospect could serve as a future scalable platform for, e.g., hybrid information processing with phonons. To achieve it, PHENOMEN proposes to build the first practical optically-driven phonon sources and detectors including the engineering of phonon lasers to deliver coherent phonons to the rest of the chip pumped by a continuous wave optical source. It brings together interdisciplinary scientific and technology oriented partners in an early-stage research towards the development of a radically new technology.

The experimental implementation of phonons as information carriers in a chip is completely novel and of a clear foundational character. It deals with the interaction and manipulation of fundamental particles and their intrinsic dual wave-particle character. It is thus only possible with the participation of an interdisciplinary consortium, which will create knowledge in a synergetic fashion and add value in the form of new theoretical tools, develop novel methods to manipulate coherent phonons with light, and build all-optical phononic circuits enabled by optomechanics.

The H2020 FET-Open call “Novel ideas for radically new technologies” aims to support the early stages of joint science and technology research for radically new future technological possibilities. The call is entirely non-prescriptive with regards to the nature or purpose of the technologies that are envisaged and thus targets mainly the unexpected. PHENOMEN is one of the 13 funded Research & Innovation Actions and went through a selection process with a success rate (1.4%) ten times smaller than that for an ERC grant. The retained proposals are expected to foster international collaboration in a multitude of disciplines such as robotics, nanotechnology, neuroscience, information science, biology, artificial intelligence or chemistry.

The Consortium

The PHENOMEN Consortium is made up of:

  • 3 leading research institutes:
  • 3 universities with an internationally recognised track-record in their respective areas of expertise:
  • 1 industrial partner:

2016 Nobel Chemistry Prize for molecular machines

Wednesday, Oct. 5, 2016 was the day three scientists received the Nobel Prize in Chemistry for their work on molecular machines, according to an Oct. 5, 2016 news item on phys.org,

Three scientists won the Nobel Prize in chemistry on Wednesday [Oct. 5, 2016] for developing the world’s smallest machines, 1,000 times thinner than a human hair but with the potential to revolutionize computer and energy systems.

Frenchman Jean-Pierre Sauvage, Scottish-born Fraser Stoddart and Dutch scientist Bernard “Ben” Feringa share the 8 million kronor ($930,000) prize for the “design and synthesis of molecular machines,” the Royal Swedish Academy of Sciences said.

Machines at the molecular level have taken chemistry to a new dimension and “will most likely be used in the development of things such as new materials, sensors and energy storage systems,” the academy said.

Practical applications are still far away—the academy said molecular motors are at the same stage that electrical motors were in the first half of the 19th century—but the potential is huge.

Dexter Johnson in an Oct. 5, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some insight into the matter (Note: A link has been removed),

In what seems to have come both as a shock to some of the recipients and a confirmation to all those who envision molecular nanotechnology as the true future of nanotechnology, Bernard Feringa, Jean-Pierre Sauvage, and Sir J. Fraser Stoddart have been awarded the 2016 Nobel Prize in Chemistry for their development of molecular machines.

The Nobel Prize was awarded to all three of the scientists based on their complementary work over nearly three decades. First, in 1983, Sauvage (currently at Strasbourg University in France) was able to link two ring-shaped molecules to form a chain. Then, eight years later, Stoddart, a professor at Northwestern University in Evanston, Ill., demonstrated that a molecular ring could turn on a thin molecular axle. Then, eight years after that, Feringa, a professor at the University of Groningen, in the Netherlands, built on Stoddart’s work and fabricated a molecular rotor blade that could spin continually in the same direction.

Speaking of the Nobel committee’s selection, Donna Nelson, a chemist and president of the American Chemical Society told Scientific American: “I think this topic is going to be fabulous for science. When the Nobel Prize is given, it inspires a lot of interest in the topic by other researchers. It will also increase funding.” Nelson added that this line of research will be fascinating for kids. “They can visualize it, and imagine a nanocar. This comes at a great time, when we need to inspire the next generation of scientists.”

The Economist, which appears to be previewing an article about the 2016 Nobel prizes ahead of the print version, has this to say in its Oct. 8, 2016 article,

BIGGER is not always better. Anyone who doubts that has only to look at the explosion of computing power which has marked the past half-century. This was made possible by continual shrinkage of the components computers are made from. That success has, in turn, inspired a search for other areas where shrinkage might also yield dividends.

One such, which has been poised delicately between hype and hope since the 1990s, is nanotechnology. What people mean by this term has varied over the years—to the extent that cynics might be forgiven for wondering if it is more than just a fancy rebranding of the word “chemistry”—but nanotechnology did originally have a fairly clear definition. It was the idea that machines with moving parts could be made on a molecular scale. And in recognition of this goal Sweden’s Royal Academy of Science this week decided to award this year’s Nobel prize for chemistry to three researchers, Jean-Pierre Sauvage, Sir Fraser Stoddart and Bernard Feringa, who have never lost sight of nanotechnology’s original objective.

Optimists talk of manufacturing molecule-sized machines ranging from drug-delivery devices to miniature computers. Pessimists recall that nanotechnology is a field that has been puffed up repeatedly by both researchers and investors, only to deflate in the face of practical difficulties.

There is, though, reason to hope it will work in the end. This is because, as is often the case with human inventions, Mother Nature has got there first. One way to think of living cells is as assemblies of nanotechnological machines. For example, the enzyme that produces adenosine triphosphate (ATP)—a molecule used in almost all living cells to fuel biochemical reactions—includes a spinning molecular machine rather like Dr Feringa’s invention. This works well. The ATP generators in a human body turn out so much of the stuff that over the course of a day they create almost a body-weight’s-worth of it. Do something equivalent commercially, and the hype around nanotechnology might prove itself justified.

Congratulations to the three winners!

Graphene Canada and its second annual conference

An Aug. 31, 2016 news item on Nanotechnology Now announces Canada’s second graphene-themed conference,

The 2nd edition of Graphene & 2D Materials Canada 2016 International Conference & Exhibition (www.graphenecanadaconf.com) will take place in Montreal (Canada): 18-20 October, 2016.

– An industrial forum with focus on Graphene Commercialization (Abalonyx, Alcereco Inc, AMO GmbH, Avanzare, AzTrong Inc, Bosch GmbH, China Innovation Alliance of the Graphene Industry (CGIA), Durham University & Applied Graphene Materials, Fujitsu Laboratories Ltd., Hanwha Techwin, Haydale, IDTechEx, North Carolina Central University & Chaowei Power Ltd, NTNU&CrayoNano, Phantoms Foundation, Southeast University, The Graphene Council, University of Siegen, University of Sunderland and University of Waterloo)
– Extensive thematic workshops in parallel (Materials & Devices Characterization, Chemistry, Biosensors & Energy and Electronic Devices)
– A significant exhibition (Abalonyx, Go Foundation, Grafoid, Group NanoXplore Inc., Raymor | Nanointegris and Suragus GmbH)

As I noted in my 2015 post about Graphene Canada and its conference, the group is organized in a rather interesting fashion and I see the tradition continues, i.e., the lead organizers seem to be situated in countries other than Canada. From the Aug. 31, 2016 news item on Nanotechnology Now,

Organisers: Phantoms Foundation [located in Spain] www.phantomsnet.net
Catalan Institute of Nanoscience and Nanotechnology – ICN2 (Spain) | CEMES/CNRS (France) | GO Foundation (Canada) | Grafoid Inc (Canada) | Graphene Labs – IIT (Italy) | McGill University (Canada) | Texas Instruments (USA) | Université Catholique de Louvain (Belgium) | Université de Montreal (Canada)

You can find the conference website here.

Improving the quality of sight in artificial retinas

Researchers at France’s Centre national de la recherche scientifique (CNRS) and elsewhere have taken a step toward improving the sight derived from artificial retinas, according to an Aug. 25, 2016 news item on Nanowerk (Note: A link has been removed),

A major therapeutic challenge, the retinal prostheses that have been under development during the past ten years can enable some blind subjects to perceive light signals, but the image thus restored is still far from being clear. By comparing in rodents the activity of the visual cortex generated artificially by implants against that produced by “natural sight”, scientists from CNRS, CEA [Commissariat à l’énergie atomique et aux énergies alternatives is the French Alternative Energies and Atomic Energy Commission], INSERM [Institut national de la santé et de la recherche médicale is the French National Institute of Health and Medical Research], AP-HM [Assistance Publique – Hôpitaux de Marseille] and Aix-Marseille Université identified two factors that limit the resolution of prostheses.

Based on these findings, they were able to improve the precision of prosthetic activation. These multidisciplinary efforts, published on 23 August 2016 in eLife (“Probing the functional impact of sub-retinal prosthesis”), thus open the way towards further advances in retinal prostheses that will enhance the quality of life of implanted patients.

An Aug. 24, 2016 CNRS press release, which originated the news item, expands on the theme,

A retinal prosthesis comprises three elements: a camera (inserted in the patient’s spectacles), an electronic microcircuit (which transforms data from the camera into an electrical signal) and a matrix of microscopic electrodes (implanted in the eye in contact with the retina). This prosthesis replaces the photoreceptor cells of the retina: like them, it converts visual information into electrical signals which are then transmitted to the brain via the optic nerve. It can treat blindness caused by a degeneration of retinal photoreceptors, on condition that the optic nerve has remained functional. Equipped with these implants, patients who were totally blind can recover visual perceptions in the form of light spots, or phosphenes. Unfortunately, at present, the light signals perceived are not clear enough to recognize faces, read or move about independently.

To understand the resolution limits of the image generated by the prosthesis, and to find ways of optimizing the system, the scientists carried out a large-scale experiment on rodents. By combining their skills in ophthalmology and the physiology of vision, they compared the response of the visual system of rodents to both natural visual stimuli and those generated by the prosthesis.

Their work showed that the prosthesis activated the visual cortex of the rodent in the correct position and at ranges comparable to those obtained under natural conditions. However, the extent of the activation was much too great, and its shape was much too elongated. This deformation was due to two separate phenomena observed at the level of the electrode matrix. Firstly, the scientists observed excessive electrical diffusion: the thin layer of liquid situated between the electrode and the retina passively diffused the electrical stimulus to neighboring nerve cells. And secondly, they detected the unwanted activation of retinal fibers situated close to the cells targeted for stimulation.

Armed with these findings, the scientists were able to improve the properties of the interface between the prosthesis and retina, with the help of specialists in interface physics.  Together, they were able to generate less diffuse currents and significantly improve artificial activation, and hence the performance of the prosthesis.

Despite the range of parameters covered (different positions, types and intensities of signals) and the surgical difficulties encountered (inserting the implant and recording the images generated in the animal’s brain), this lengthy study has opened the way towards promising improvements to retinal prostheses for humans.

This work was carried out by scientists from the Institut de Neurosciences de la Timone (CNRS/AMU) and AP-HM, in collaboration with CEA-Leti and the Institut de la Vision (CNRS/Inserm/UPMC).

Artificial retinas

© F. Chavane & S. Roux.

Activation (colored circles at the level of the visual cortex) of the visual system by prosthetic stimulation (in the middle, in red, the insert shows an image of an implanted prosthesis) is greater and more elongated than the activation achieved under natural stimulation (on the left, in yellow). Using a protocol to adapt stimulation (on the right, in green), the size and shape of the activation can be controlled and are more similar to natural visual activation (yellow).

Here’s a link to and a citation for the paper,

Probing the functional impact of sub-retinal prosthesis by Sébastien Roux, Frédéric Matonti, Florent Dupont, Louis Hoffart, Sylvain Takerkart, Serge Picaud, Pascale Pham, and Frédéric Chavane. eLife 2016;5:e12687 DOI: http://dx.doi.org/10.7554/eLife.12687 Published August 23, 2016

This paper appears to be open access.

New electrochromic material for ‘smart’ windows

Given that it’s summer, I seem to be increasingly obsessed with windows that help control the heat from the sun. So, this Aug. 22, 2016 news item on ScienceDaily hit my sweet spot,

Researchers in the Cockrell School of Engineering at The University of Texas at Austin have invented a new flexible smart window material that, when incorporated into windows, sunroofs, or even curved glass surfaces, will have the ability to control both heat and light from the sun. …

Delia Milliron, an associate professor in the McKetta Department of Chemical Engineering, and her team’s advancement is a new low-temperature process for coating the new smart material on plastic, which makes it easier and cheaper to apply than conventional coatings made directly on the glass itself. The team demonstrated a flexible electrochromic device, which means a small electric charge (about 4 volts) can lighten or darken the material and control the transmission of heat-producing, near-infrared radiation. Such smart windows are aimed at saving on cooling and heating bills for homes and businesses.

An Aug. 22, 2016 University of Texas at Austin news release (also on EurekAlert), which originated the news item, describes the international team behind this research and offers more details about the research itself,

The research team is an international collaboration, including scientists at the European Synchrotron Radiation Facility and CNRS in France, and Ikerbasque in Spain. Researchers at UT Austin’s College of Natural Sciences provided key theoretical work.

Milliron and her team’s low-temperature process generates a material with a unique nanostructure, which doubles the efficiency of the coloration process compared with a coating produced by a conventional high-temperature process. It can switch between clear and tinted more quickly, using less power.

The new electrochromic material, like its high-temperature processed counterpart, has an amorphous structure, meaning the atoms lack any long-range organization as would be found in a crystal. However, the new process yields a unique local arrangement of the atoms in a linear, chain-like structure. Whereas conventional amorphous materials produced at high temperature have a denser three-dimensionally bonded structure, the researchers’ new linearly structured material, made of chemically condensed niobium oxide, allows ions to flow in and out more freely. As a result, it is twice as energy efficient as the conventionally processed smart window material.

At the heart of the team’s study is their rare insight into the atomic-scale structure of the amorphous materials, whose disordered structures are difficult to characterize. Because there are few techniques for characterizing atomic-scale structure sufficiently to understand properties, it has been difficult to engineer amorphous materials to enhance their performance.

“There’s relatively little insight into amorphous materials and how their properties are impacted by local structure,” Milliron said. “But, we were able to characterize with enough specificity what the local arrangement of the atoms is, so that it sheds light on the differences in properties in a rational way.”

Graeme Henkelman, a co-author on the paper and chemistry professor in UT Austin’s College of Natural Sciences, explains that determining the atomic structure for amorphous materials is far more difficult than for crystalline materials, which have an ordered structure. In this case, the researchers were able to use a combination of techniques and measurements to determine an atomic structure that is consistent in both experiment and theory.

“Such collaborative efforts that combine complementary techniques are, in my view, the key to the rational design of new materials,” Henkelman said.

Milliron believes the knowledge gained here could inspire deliberate engineering of amorphous materials for other applications such as supercapacitors that store and release electrical energy rapidly and efficiently.

The Milliron lab’s next challenge is to develop a flexible material using their low-temperature process that meets or exceeds the best performance of electrochromic materials made by conventional high-temperature processing.

“We want to see if we can marry the best performance with this new low-temperature processing strategy,” she said.

Here’s a link to and a citation for the paper,

Linear topology in amorphous metal oxide electrochromic networks obtained via low-temperature solution processing by Anna Llordés, Yang Wang, Alejandro Fernandez-Martinez, Penghao Xiao, Tom Lee, Agnieszka Poulain, Omid Zandi, Camila A. Saez Cabezas, Graeme Henkelman, & Delia J. Milliron. Nature Materials (2016)  doi:10.1038/nmat4734 Published online 22 August 2016

This paper is behind a paywall.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, performs its computations with artificial neural networks, a software architecture that mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is pattern recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1958, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a calculator, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise particular visual characteristics of a shape; the deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colours, and the following layer recognises the general form of the cat. Networks can be stacked many such layers deep, and it is this depth that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the electrical potential across the neurone’s membrane) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
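The mechanism Marchand-Maillet describes can be sketched in a few lines of code: each neuron computes a weighted sum of its inputs and fires only if the sum exceeds a threshold, and the outputs of one layer become the inputs of the next. This is a minimal illustration, not the networks discussed in the article; all weights and thresholds below are arbitrary illustrative values.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1.0) only if the weighted sum of inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total > threshold else 0.0

def layer(inputs, weight_rows, threshold):
    """Apply every neuron in the layer to the same inputs; one weight row per neuron."""
    return [neuron(inputs, w, threshold) for w in weight_rows]

# A toy two-layer network: 3 inputs -> 2 hidden neurons -> 1 output neuron.
# The hidden layer's outputs are fed forward as the output layer's inputs.
hidden = layer([0.9, 0.2, 0.7],
               [[0.5, -0.3, 0.8], [-0.2, 0.9, 0.1]],
               threshold=0.4)
output = layer(hidden, [[0.6, 0.6]], threshold=0.5)
```

Real networks use smoother activation functions and learn their weights from data, but the layer-by-layer flow of weighted, thresholded signals is the same.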

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, limited computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with an unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful capability for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
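The feedback loop behind the ‘boat’ versus ‘float’ example can be sketched as follows: a recurrent network keeps a hidden state that is fed back at every step, so its state after processing a shared suffix still depends on the prefix that came before. This is a deliberately minimal sketch (a single recurrent unit, not an LSTM), with arbitrary illustrative weights and inputs standing in for the sounds.

```python
def rnn_step(x, h, w_in=0.7, w_rec=0.5):
    """One recurrent step: mix the new input with the looped-back hidden state."""
    return max(0.0, w_in * x + w_rec * h)  # ReLU-style activation

def run(sequence):
    """Feed a sequence through the unit, carrying the state from step to step."""
    h = 0.0
    for x in sequence:
        h = rnn_step(x, h)
    return h

# Same final input (0.3, standing in for 'oat'), different prefixes
# (0.9 ~ 'b' vs 0.1 ~ 'fl'): the final states differ because the
# loop remembers what came before.
state_b  = run([0.9, 0.3])
state_fl = run([0.1, 0.3])
```

LSTM units add gating machinery so this kind of memory can persist over much longer sequences, but the principle, state looped back into the computation, is the one shown here.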

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

“Brute force” technique for biomolecular information processing

The research is being announced by the University of Tokyo but there is definitely a French flavour to this project. From a June 20, 2016 news item on ScienceDaily,

A Franco-Japanese research group at the University of Tokyo has developed a new “brute force” technique to test thousands of biochemical reactions at once and quickly home in on the range of conditions where they work best. Until now, optimizing such biomolecular systems, which can be applied for example to diagnostics, would have required months or years of trial and error experiments, but with this new technique that could be shortened to days.

A June 20, 2016 University of Tokyo news release on EurekAlert, which originated the news item, describes the project in more detail,

“We are interested in programming complex biochemical systems so that they can process information in a way that is analogous to electronic devices. If you could obtain a high-resolution map of all possible combinations of reaction conditions and their corresponding outcomes, the development of such reactions for specific purposes like diagnostic tests would be quicker than it is today,” explains Centre National de la Recherche Scientifique (CNRS) researcher Yannick Rondelez at the Institute of Industrial Science (IIS) [located at the University of Tokyo].

“Currently researchers use a combination of computer simulations and painstaking experiments. However, while simulations can test millions of conditions, they are based on assumptions about how molecules behave and may not reflect the full detail of reality. On the other hand, testing all possible conditions, even for a relatively simple design, is a daunting job.”

Rondelez and his colleagues at the Laboratory for Integrated Micro-Mechanical Systems (LIMMS), a 20-year collaboration between the IIS and the French CNRS, demonstrated a system that can test ten thousand different biochemical reaction conditions at once. Working with the IIS Applied Microfluidic Laboratory of Professor Teruo Fujii, they developed a platform to generate a myriad of micrometer-sized droplets containing random concentrations of reagents and then sandwich a single layer of them between glass slides. Fluorescent markers combined with the reagents are automatically read by a microscope to determine the precise concentrations in each droplet and also observe how the reaction proceeds.

“It was difficult to fine-tune the device at first,” explains Dr Anthony Genot, a CNRS researcher at LIMMS. “We needed to generate thousands of droplets containing reagents within a precise range of concentrations to produce high resolution maps of the reactions we were studying. We expected that this would be challenging. But one unanticipated difficulty was immobilizing the droplets for the several days it took for some reactions to unfold. It took a lot of testing to create a glass chamber design that was airtight and firmly held the droplets in place.” Overall, it took nearly two years to fine-tune the device until the researchers could get their droplet experiment to run smoothly.

Seeing the new system producing results was revelatory. “You start with a screen full of randomly-colored dots, and then suddenly the computer rearranges them into a beautiful high-resolution map, revealing hidden information about the reaction dynamics. Seeing them all slide into place to produce something that had only ever been seen before through simulation was almost magical,” enthuses Rondelez.

“The map can tell us not only about the best conditions of biochemical reactions, it can also tell us about how the molecules behave in certain conditions. Using this map we’ve already found a molecular behavior that had been predicted theoretically, but had not been shown experimentally. With our technique we can explore how molecules talk to each other in test tube conditions. Ultimately, we hope to illuminate the intimate machinery of living molecular systems like ourselves,” says Rondelez.

Here’s a link to and a citation for the paper,

High-resolution mapping of bifurcations in nonlinear biochemical circuits by A. J. Genot, A. Baccouche, R. Sieskind, N. Aubert-Kato, N. Bredeche, J. F. Bartolo, V. Taly, T. Fujii, & Y. Rondelez. Nature Chemistry (2016) doi:10.1038/nchem.2544 Published online 20 June 2016

This paper is behind a paywall.

Update on the International NanoCar race coming up in Autumn 2016

First off, the race seems to be adjusting its brand (it was billed as the International NanoCar Race in my Dec. 21, 2015 posting), from a May 20, 2016 news item on Nanowerk,

The first-ever international race of molecule-cars (Nanocar Race) will take place at the CEMES laboratory in Toulouse this fall [2016].

A May 9, 2016 notice on France’s Centre national de la recherche scientifique’s (CNRS) news website, which originated the news item, fills in a few more details,

Five teams are fine-tuning their cars—each made up of around a hundred atoms and measuring a few nanometers in length. They will be propelled by an electric current on a gold atom “race track.” We take you behind the scenes to see how these researcher-racers are preparing for the NanoCar Race.

About this video

Original title: The NanoCar Race

Production year: 2016

Length: 6 min 23

Director: Pierre de Parscau

Producer: CNRS Images

Speaker(s):

Christian Joachim
Centre d’Elaboration des Matériaux et d’Etudes Structurales

Gwénaël Rapenne

Corentin Durand

Pierre Abeilhou

Frank Eisenhut
Technical University of Dresden

You can find the video which is embedded in both the Nanowerk news item and here with the CNRS notice.