Tag Archives: EPFL

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates in more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
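
The network construction described here matches a restricted Boltzmann machine (RBM), and “mathematically forgetting” the hidden neurons amounts to summing them out analytically. As a rough, illustrative sketch (this is not the authors’ code, and the couplings below are arbitrary), the amplitude of a spin configuration can be computed like this:

```python
import numpy as np

def rbm_amplitude(v, a, b, W):
    """Unnormalized amplitude psi(v) of an RBM-style quantum state.
    The hidden neurons are summed out ("forgotten") analytically,
    leaving a product of cosh factors, one per hidden neuron."""
    # v: +/-1 spin configuration of the visible neurons (real particles)
    # a: visible biases, b: hidden biases, W: couplings (n_hidden x n_visible)
    theta = b + W @ v                     # effective field on each hidden neuron
    return np.exp(a @ v) * np.prod(2 * np.cosh(theta))

# Toy example: 4 visible spins, 2 hidden neurons, small random couplings
rng = np.random.default_rng(0)
n_vis, n_hid = 4, 2
a = 0.1 * rng.normal(size=n_vis)
b = 0.1 * rng.normal(size=n_hid)
W = 0.1 * rng.normal(size=(n_hid, n_vis))

v = np.array([1, -1, 1, -1])
print(rbm_amplitude(v, a, b, W))
```

The point of the sketch is the economy of the description: the state of 4 spins nominally needs 2^4 numbers, while the RBM needs only the biases and couplings, and that gap widens rapidly with system size.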

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006, although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
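
To make the clique-to-dimension correspondence concrete: a clique of k neurons corresponds to a (k-1)-dimensional simplex. The toy sketch below is my own illustration in plain Python, and it enumerates undirected cliques; note that the Blue Brain study actually works with directed cliques, which are more restrictive.

```python
from itertools import combinations

def is_clique(nodes, edges):
    """True if every pair of nodes is connected (undirected)."""
    return all((u, w) in edges or (w, u) in edges
               for u, w in combinations(nodes, 2))

def cliques_by_dimension(n_nodes, edges):
    """Map simplex dimension -> list of cliques.
    A clique of k neurons corresponds to a (k-1)-dimensional simplex."""
    result = {}
    for size in range(1, n_nodes + 1):
        for nodes in combinations(range(n_nodes), size):
            if is_clique(nodes, edges):
                result.setdefault(size - 1, []).append(nodes)
    return result

# Toy network: 4 "neurons"; 0, 1, 2 fully connected, 3 attached only to 2
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
dims = cliques_by_dimension(4, edges)
print({d: len(c) for d, c in dims.items()})   # → {0: 4, 1: 4, 2: 1}
```

The single 2-dimensional simplex here is the triangle (0, 1, 2); in the study, structures of this kind were found up through seven dimensions and occasionally eleven.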

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of the Blue Brain Project and professor at the EPFL in Lausanne, Switzerland. “There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.


About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.

Imprinting fibres at the nanometric scale

Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) announces a discovery in a Jan. 24, 2017 press release (also on EurekAlert),

Researchers at EPFL have come up with a way of imprinting nanometric patterns on the inside and outside of polymer fibers. These fibers could prove useful in guiding nerve regeneration and producing optical effects, for example, as well as in eventually creating artificial tissue and smart bandages.

Researchers at EPFL’s Laboratory of Photonic Materials and Fibre Devices, which is run by Fabien Sorin, have come up with a simple and innovative technique for drawing or imprinting complex, nanometric patterns on hollow polymer fibers. Their work has been published in Advanced Functional Materials.

The potential applications of this breakthrough are numerous. The imprinted designs could be used to impart certain optical effects on a fiber or make it water-resistant. They could also guide stem-cell growth in textured fiber channels or be used to break down the fiber at a specific location and point in time in order to release drugs as part of a smart bandage.

Stretching the fiber like molten plastic

To make their nanometric imprints, the researchers began with a technique called thermal drawing, which is used to fabricate optical fibers. Thermal drawing involves engraving or imprinting millimeter-sized patterns on a preform, which is a macroscopic version of the target fiber. The imprinted preform is heated to change its viscosity, stretched like molten plastic into a long, thin fiber and then allowed to harden again. Stretching causes the pattern to shrink while maintaining its proportions and position. Yet this method has a major shortcoming: the pattern does not remain intact below the micrometer scale. “When the fiber is stretched, the surface tension of the structured polymer causes the pattern to deform and even disappear below a certain size, around several microns,” said Sorin.

To avoid this problem, the EPFL researchers came up with the idea of sandwiching the imprinted preform in a sacrificial polymer [emphasis mine]. This polymer protects the pattern during stretching by reducing the surface tension. It is discarded once the stretching is complete. Thanks to this trick, the researchers are able to apply tiny and highly complex patterns to various types of fibers. “We have achieved 300-nanometer patterns, but we could easily make them as small as several tens of nanometers,” said Sorin. This is the first time that such minute and highly complex patterns have been imprinted on flexible fiber on a very large scale. “This technique enables textures with feature sizes two orders of magnitude smaller than previously reported,” said Sorin. “It could be applied to kilometers of fibers at a highly reasonable cost.”
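
The proportional shrinkage at the heart of thermal drawing is easy to sketch: cross-sectional features scale with the fiber-to-preform diameter ratio. The numbers below are illustrative only, not taken from the paper:

```python
def drawn_feature_size(preform_feature, preform_diameter, fiber_diameter):
    """During thermal drawing, cross-sectional features shrink by the same
    ratio as the overall diameter (proportions are preserved)."""
    return preform_feature * (fiber_diameter / preform_diameter)

# Illustrative numbers: a 30-micrometer pattern on a 25 mm preform,
# drawn down to a 250-micrometer fiber, shrinks to 300 nm.
feature = drawn_feature_size(30e-6, 25e-3, 250e-6)
print(f"{feature * 1e9:.0f} nm")   # → 300 nm
```

This is why a macroscopic, easily machined preform can yield nanoscale surface textures, provided (as the article explains) that surface tension does not erase the pattern on the way down.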

To highlight potential applications of their achievement, the researchers teamed up with the Bertarelli Foundation Chair in Neuroprosthetic Technology, led by Stéphanie Lacour. Working in vitro, they were able to use their fibers to guide neurites from a spinal ganglion (on the spinal nerve). This was an encouraging step toward using these fibers to help nerves regenerate or to create artificial tissue.

This development could have implications in many other fields besides biology. “Fibers that are rendered water-resistant by the pattern could be used to make clothes. Or we could give the fibers special optical effects for design or detection purposes. There is also much to be done with the many new microfluidic systems out there,” said Sorin. The next step for the researchers will be to join forces with other EPFL labs on initiatives such as studying in vivo nerve regeneration. All this, thanks to the wonder of imprinted polymer fibers.

I like the term “sacrificial polymer.”

Here’s a link to and a citation for the paper,

Controlled Sub-Micrometer Hierarchical Textures Engineered in Polymeric Fibers and Microchannels via Thermal Drawing by Tung Nguyen-Dang, Alba C. de Luca, Wei Yan, Yunpeng Qu, Alexis G. Page, Marco Volpi, Tapajyoti Das Gupta, Stéphanie P. Lacour, and Fabien Sorin. Advanced Functional Materials DOI: 10.1002/adfm.201605935 Version of Record online: 24 JAN 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Better technique for growing organoids: taking them from the lab to the clinic

A Nov. 16, 2016 École Polytechnique Fédérale de Lausanne (EPFL) press release (also on EurekAlert) describes a new material for growing organoids,

Organoids are miniature organs that can be grown in the lab from a person’s stem cells. They can be used to model diseases, and in the future could be used to test drugs or even replace damaged tissue in patients. But currently organoids are very difficult to grow in a standardized and controlled way, which is key to designing and using them. EPFL scientists have now solved the problem by developing a patent-pending “hydrogel” that provides a fully controllable and tunable way to grow organoids. …

Organoids need a 3D scaffold

Growing organoids begins with stem cells — immature cells that can grow into any cell type of the human body and that play key roles in tissue function and regeneration. To form an organoid, the stem cells are grown inside three-dimensional gels that contain a mix of biomolecules that promote stem cell renewal and differentiation.

The role of these gels is to mimic the natural environment of the stem cells, which provides them with a protein- and sugar-rich scaffold called the “extracellular matrix”, upon which the stem cells build specific body tissues. The stem cells stick to the extracellular matrix gel, and then “self-organize” into miniature organs like retinas, kidneys, or the gut. These tiny organs retain key aspects of their real-life biology, and can be used to study diseases or test drugs before moving on to human trials.

But the current gels used for organoid growth are derived from mice, and have problems. First, it is impossible to control their makeup from batch to batch, which can cause stem cells to behave inconsistently. Second, their biochemical complexity makes them very difficult to fine-tune for studying the effect of different parameters (e.g. biological molecules, mechanical properties, etc.) on the growth of organoids. Finally, the gels can carry pathogens or immunogens, which means that they are not suitable for growing organoids to be used in the clinic.

A hydrogel solution

The lab of Matthias Lütolf at EPFL’s Institute of Bioengineering has developed a synthetic “hydrogel” that eschews the limitations of conventional, naturally derived gels. The patent-pending gel is made of water and polyethylene glycol, a substance used widely today in various forms, from skin creams and toothpastes to industrial applications and, as in this case, bioengineering.

Nikolce Gjorevski, the first author of the study, and his colleagues used the hydrogel to grow stem cells of the gut into a miniature intestine. The functional hydrogel was not only a goal in and of itself, but also a means to identify the factors that influence the stem cells’ ability to expand and form organoids. By carefully tweaking the hydrogel’s properties, they discovered that separate stages of the organoid formation process require different mechanical environments and biological components.

One such factor is a protein called fibronectin, which helps the stem cells attach to the hydrogel. Lütolf’s lab found that this attachment itself is immensely important for growing organoids, as it triggers a whole host of signals to the stem cell that tell it to grow and build an intestine-like structure. The researchers also discovered an essential role for the mechanical properties, i.e. the physical stiffness, of the gel in regulating intestinal stem cell behavior, shedding light on how cells are able to sense, process and respond to physical stimuli. This insight is particularly valuable – while the influence of biochemical signals on stem cells is well-understood, the effect of physical factors has been more mysterious.

Because the hydrogel is man-made, it is easy to control its chemical composition and key properties, and ensure consistency from batch to batch. And because it is artificial, it does not carry any risk of infection or triggering immune responses. As such, it provides a means of moving organoids from basic research to actual pharmaceutical and clinical applications in the future.

Lütolf’s lab is now researching other types of stem cells in order to extend the capacities of their hydrogel into other tissues.

Here’s a link to and a citation for the paper,

Designer matrices for intestinal stem cell and organoid culture by Nikolce Gjorevski, Norman Sachs, Andrea Manfrin, Sonja Giger, Maiia E. Bragina, Paloma Ordóñez-Morán, Hans Clevers, & Matthias P. Lutolf. Nature (2016) doi:10.1038/nature20168 Published online 16 November 2016

This paper is behind a paywall.

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first Oct. 31, 2016 news item on Nanowerk describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed that the project has advanced considerably. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016, where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three-and-a-half-year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e. nano-copper-oxide-based wood-preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the coming months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid to the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and to compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in their industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for further refinement and implemented in the final version of the software, which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events includes the New Tools and Approaches for Nanomaterial Safety Assessment conference, jointly organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper, to be held on 7-9 February 2017 in Malaga, Spain; the SUN-CaLIBRAte Stakeholders workshop to be held on 28 February – 1 March 2017 in Venice, Italy; and the SRA Policy Forum: Risk Governance for Key Enabling Technologies to be held on 1-3 March 2017 in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts put towards refining the risk governance of emerging technologies through the integration of traditional risk analytic tools alongside considerations of social and economic concerns. The parallel sessions will be organized in four tracks: Risk analysis of engineered nanomaterials along product lifecycle, Risks and benefits of emerging technologies used in medical applications, Challenges of governing SynBio and Biotech, and Methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims at presenting the main results achieved in the course of the organizing projects, fostering a discussion about their impact in the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and media. Accordingly, the conference topics include: Hazard assessment along the life cycle of nano-enabled products, Exposure assessment along the life cycle of nano-enabled products, Risk assessment & management, Systems biology approaches in nanosafety, Categorization & grouping of nanomaterials, Nanosafety infrastructure, Safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland,)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholder workshop, the final version of the SUN user-friendly, software-based Decision Support System (SUNDS) for managing the environmental, economic and social impacts of nanotechnologies will be presented and discussed with its end users: industries, regulators and insurance sector representatives. The results from the discussion will be used as a foundation for the development of caLIBRAte’s Risk Governance framework for assessment and management of human and environmental risks of manufactured nanomaterials (MN) and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted until 15 November 2016.
For further information go to:

There you have it.

Atomic force microscope with nanowire sensors

Measuring the size and direction of forces may become reality with a nanotechnology-enabled atomic force microscope designed by Swiss scientists, according to an Oct. 17, 2016 news item on phys.org,

A new type of atomic force microscope (AFM) uses nanowires as tiny sensors. Unlike standard AFM, the device with a nanowire sensor enables measurements of both the size and direction of forces. Physicists at the University of Basel and at the EPF Lausanne have described these results in the recent issue of Nature Nanotechnology.

A nanowire sensor measures size and direction of forces (Image: University of Basel, Department of Physics)

An Oct. 17, 2016 University of Basel press release (also on EurekAlert), which originated the news item, expands on the theme,

Nanowires are extremely tiny filamentary crystals which are built up molecule by molecule from various materials, and which are now being very actively studied by scientists all around the world because of their exceptional properties.

The wires normally have a diameter of around 100 nanometers and are therefore only about one-thousandth the thickness of a human hair. Because of this tiny dimension, they have a very large surface area in comparison to their volume. This, together with their small mass and flawless crystal lattice, makes them very attractive in a variety of nanometer-scale sensing applications, including as sensors of biological and chemical samples, and as pressure or charge sensors.
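
The surface-to-volume claim follows from simple cylinder geometry: the ratio scales inversely with diameter, so shrinking the diameter a thousandfold (from hair to nanowire) raises the ratio a thousandfold. A quick back-of-envelope sketch:

```python
def surface_to_volume_ratio(diameter):
    """Side-surface-to-volume ratio of a long cylinder of diameter d:
    (pi * d * L) / (pi * (d/2)**2 * L) = 4 / d   (ends neglected)."""
    return 4.0 / diameter

nanowire = surface_to_volume_ratio(100e-9)   # 100 nm nanowire
hair = surface_to_volume_ratio(100e-6)       # ~100 micrometer human hair
print(nanowire / hair)                       # → 1000x larger ratio
```

That thousandfold advantage in exposed surface per unit volume is what makes nanowires so responsive as chemical, pressure and charge sensors.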

Measurement of direction and size

The team of Argovia Professor Martino Poggio from the Swiss Nanoscience Institute (SNI) and the Department of Physics at the University of Basel has now demonstrated that nanowires can also be used as force sensors in atomic force microscopes. Based on their special mechanical properties, nanowires vibrate along two perpendicular axes at nearly the same frequency. When they are integrated into an AFM, the researchers can measure changes in the perpendicular vibrations caused by different forces. Essentially, they use the nanowires like tiny mechanical compasses that point out both the direction and size of the surrounding forces.
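
One way to picture the “compass”: in the standard small-shift approximation, each mode’s frequency shift is proportional to the force gradient along that mode’s axis (df/f0 ≈ -G/(2k)), so two orthogonal shifts yield a two-component vector with a magnitude and a direction. The sketch below is my own illustration of that idea, with made-up numbers rather than values from the paper:

```python
import math

def force_gradient_vector(df1, df2, f0, k):
    """Recover the in-plane force-gradient vector from the frequency shifts
    of two nearly degenerate, orthogonal nanowire modes.
    Small-shift approximation: df_i / f0 = -G_i / (2 k), so G_i = -2 k df_i / f0."""
    g1 = -2 * k * df1 / f0
    g2 = -2 * k * df2 / f0
    magnitude = math.hypot(g1, g2)               # size of the gradient (N/m)
    angle = math.degrees(math.atan2(g2, g1))     # direction in the plane
    return g1, g2, magnitude, angle

# Illustrative numbers: f0 = 500 kHz, spring constant k = 1e-3 N/m,
# shifts of -3 Hz and -4 Hz along the two perpendicular axes
g1, g2, mag, ang = force_gradient_vector(-3.0, -4.0, 5.0e5, 1e-3)
print(f"|G| = {mag:.2e} N/m at {ang:.1f} deg")
```

Measuring two components at once, rather than just the one a conventional cantilever sees, is what lets the nanowire report both the size and the direction of the surrounding forces.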

Image of the two-dimensional force field

The scientists from Basel describe how they imaged a patterned sample surface using a nanowire sensor. Together with colleagues from the EPF Lausanne, who grew the nanowires, they mapped the two-dimensional force field above the sample surface using their nanowire “compass”. As a proof-of-principle, they also mapped out test force fields produced by tiny electrodes.

The most challenging technical aspect of the experiments was the realization of an apparatus that could simultaneously scan a nanowire above a surface and monitor its vibration along two perpendicular directions. With their study, the scientists have demonstrated a new type of AFM that could extend the technique’s numerous applications even further.

AFM – today widely used

The development of AFM 30 years ago was honored with the conferment of the Kavli Prize [2016 Kavli Prize in Nanoscience] at the beginning of September this year. Professor Christoph Gerber of the SNI and Department of Physics at the University of Basel is one of the awardees; he has substantially contributed to the wide use of AFM in different fields, including solid-state physics, materials science, biology, and medicine.

The various types of AFM are most often carried out using cantilevers made from crystalline Si as the mechanical sensor. “Moving to much smaller nanowire sensors may now allow for even further improvements on an already amazingly successful technique,” Martino Poggio comments on his approach.

I featured an interview article with Christoph Gerber and Gerd Binnig about their shared Kavli prize and about inventing the AFM in a Sept. 20, 2016 posting.

As for the latest innovation, here’s a link to and a citation for the paper,

Vectorial scanning force microscopy using a nanowire sensor by Nicola Rossi, Floris R. Braakman, Davide Cadeddu, Denis Vasyukov, Gözde Tütüncüoglu, Anna Fontcuberta i Morral, & Martino Poggio. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.189 Published online 17 October 2016

This paper is behind a paywall.

Tiny sensors produced by nanoscale 3D printing could lead to new generation of atomic force microscopes

A Sept. 26, 2016 news item on Nanowerk features research into producing smaller sensors for atomic force microscopes (AFMs) to achieve greater sensitivity,

Tiny sensors made through nanoscale 3D printing may be the basis for the next generation of atomic force microscopes. These nanosensors can enhance the microscopes’ sensitivity and detection speed by miniaturizing their detection component up to 100 times. The sensors were used in a real-world application for the first time at EPFL, and the results are published in Nature Communications.

A Sept. 26, 2016 École Polytechnique Fédérale de Lausanne (EPFL; Switzerland) press release by Laure-Anne Pessina, which originated the news item, expands on the theme (Note: A link has been removed),

Atomic force microscopy is based on powerful technology that works a little like a miniature turntable. A tiny cantilever with a nanometric tip passes over a sample and traces its relief, atom by atom. The tip’s infinitesimal up-and-down movements are picked up by a sensor so that the sample’s topography can be determined. (…)

One way to improve atomic force microscopes is to miniaturize the cantilever, as this will reduce inertia, increase sensitivity, and speed up detection. Researchers at EPFL’s Laboratory for Bio- and Nano-Instrumentation achieved this by equipping the cantilever with a 5-nanometer thick sensor made with a nanoscale 3D-printing technique. “Using our method, the cantilever can be 100 times smaller,” says Georg Fantner, the lab’s director.

Electrons that jump over obstacles

The nanometric tip’s up-and-down movements can be measured through the deformation of the sensor placed at the fixed end of the cantilever. But because the researchers were dealing with minute movements – smaller than an atom – they had to pull a trick out of their hat.

Together with Michael Huth’s lab at Goethe Universität at Frankfurt am Main, they developed a sensor made up of highly conductive platinum nanoparticles surrounded by an insulating carbon matrix. Under normal conditions, the carbon isolates the electrons. But at the nano-scale, a quantum effect comes into play: some electrons jump through the insulating material and travel from one nanoparticle to the next. “It’s sort of like if people walking on a path came up against a wall and only the courageous few managed to climb over it,” said Fantner.

When the shape of the sensor changes, the nanoparticles move further away from each other and the electrons jump between them less frequently. Changes in the current thus reveal the deformation of the sensor and the composition of the sample.
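
That jump-frequency picture translates into a simple exponential model. As a hedged sketch (the decay length, gap, and current scale below are illustrative assumptions, not the authors' measured values), even a tiny strain widens the inter-particle gaps enough to suppress the tunnelling current measurably:

```python
import math

# Simple illustrative model (an assumption, not the authors' fit):
# tunnelling current between nanoparticles decays exponentially with
# their separation d:  I = I0 * exp(-d / lam)
I0 = 1e-6      # current at zero gap, A (illustrative)
lam = 0.5e-9   # tunnelling decay length, m (illustrative, ~0.5 nm)

def current(d):
    return I0 * math.exp(-d / lam)

d0 = 2e-9          # unstrained nanoparticle gap, 2 nm (illustrative)
strain = 0.001     # 0.1 % strain applied to the sensor
d1 = d0 * (1 + strain)

# The exponential makes the sensor sensitive: a 0.1 % change in gap
# produces a relative current change several times larger.
rel_change = (current(d1) - current(d0)) / current(d0)
print(f"relative current change: {rel_change:.4%}")
```

The exponential dependence is the reason tunnelling-based strain gauges can pick up deformations smaller than an atom.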

Tailor-made sensors

The researchers’ real feat was in finding a way to produce these sensors in nanoscale dimensions while carefully controlling their structure and, by extension, their properties. “In a vacuum, we distribute a precursor gas containing platinum and carbon atoms over a substrate. Then we apply an electron beam. The platinum atoms gather and form nanoparticles, and the carbon atoms naturally form a matrix around them,” said Maja Dukic, the article’s lead author. “By repeating this process, we can build sensors with any thickness and shape we want. We have proven that we could build these sensors and that they work on existing infrastructures. Our technique can now be used for broader applications, ranging from biosensors and ABS sensors for cars to touch sensors on flexible membranes in prosthetics and artificial skin.”

Here’s a link to and a citation for the paper,

Direct-write nanoscale printing of nanogranular tunnelling strain sensors for sub-micrometre cantilevers by Maja Dukic, Marcel Winhold, Christian H. Schwalb, Jonathan D. Adams, Vladimir Stavrov, Michael Huth, & Georg E. Fantner. Nature Communications 7, Article number: 12487 doi:10.1038/ncomms12487 Published 26 September 2016

This is an open access paper.

Windows in Swiss trains are about to combine mobile reception and thermal insulation

A Sept. 2, 2016 news item on Nanowerk announces a whole new kind of train window,

EPFL [École polytechnique fédérale de Lausanne; Switzerland] researchers have developed a type of glass that offers excellent energy efficiency and lets mobile telephone signals through. And by teaming up with Swiss manufacturers, they have produced innovative windows. Railway company BLS is about to install them on some of its trains in order to improve energy efficiency.

An Aug. 26, 2016 EPFL press release by Anne-Muriel Brouet, which originated the news item, provides more detail,

Train travel may be fast, but mobile connectivity onboard often lags behind. This is because the modern train car is a metal box that blocks out microwaves – in physics, this is called a Faraday cage. Even the windows contain an ultra-thin metal coating to improve thermal insulation. But EPFL researchers, working with manufacturing partners, have developed a new type of window that guarantees a comfortable temperature for passengers while at the same time letting mobile phone signals through.

In the rail industry, energy use is critical: around one third of the energy consumed by trains goes into providing heating and air conditioning in the train cars. And around 3% of this escapes through the windows. Double-glazed windows with an ultra-thin metal coating increase energy efficiency by a factor of four compared with untreated windows.
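
A quick back-of-the-envelope check of those figures:

```python
# Back-of-the-envelope check of the figures quoted above.
hvac_share = 1 / 3      # fraction of a train's energy spent on heating/cooling
window_loss = 0.03      # fraction of that heat escaping through the windows

# Share of the train's total energy that escapes through the windows:
total_window_share = hvac_share * window_loss
print(f"{total_window_share:.1%} of total train energy")   # -> 1.0%

# Coated double glazing is quoted as four times more energy-efficient,
# so untreated windows would lose roughly four times as much:
print(f"{4 * total_window_share:.1%} with untreated glass")   # -> 4.0%
```

A one-percent saving sounds modest, but across an entire fleet running year-round it is the kind of margin rail operators chase.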

But the problem is that the metal sharply weakens the telecommunication signals. The solution that mobile phone operators and railway companies have used until now consists of placing signal boosters – or repeaters – in the trains. But they are expensive to install and maintain and have to be replaced regularly to keep pace with rapidly changing technologies. And each repeater consumes electricity.

A laser-scribed coating

Andreas Schüler, from EPFL’s Nanotechnology for Solar Energy Conversion Group, had another idea: “A metal coating that reflects heat waves (which are micrometric in size) but lets through both visible light (which is nanometric in size) and the electromagnetic waves of mobile phones (microwaves, which are centimetric in size).” But how is this done? “We breach the Faraday cage by modifying the metal coating with a special laser treatment. The windows then let the signals through,” said Schüler, a specialist in the optical and electronic properties of ultra-thin coatings.

To do this, a special structure is scribed into the metal coating with the aid of a high-precision laser. No more than 2.5% of the surface area of the metal coating is ablated by laser scribing. The resulting pattern is nearly invisible to the naked eye and does not affect the window’s insulating properties.

A manufacturing partnership pays off

Initial laboratory tests were extremely convincing. Several manufacturing partners were brought into the team in order to apply the method on a large scale. Thanks to the skills of glassmaker AGC Verres Industriels and the expertise of Class4Laser, prototype glass samples were produced and tested. “Measurements taken by experts from the University of Applied Sciences and Arts of Southern Switzerland (SUPSI) have demonstrated that this works,” said Schüler.

Energy savings for BLS

But the innovative glass needed to prove its mettle under real-life conditions. BLS was enthusiastic about testing the new windows as part of ongoing studies aimed at improving the energy efficiency of its trains. The first full-size windows were produced in the AGC Verres Industriels workshop and installed throughout a NINA-type self-propelled regional train.

The field tests met the partners’ expectations. Swisscom and SUPSI tested the efficacy of the new windows, both in BLS’s workshops and on the Bern-Thun train line. “Mobile reception is just as good in the train through laser-treated insulating glass as it is through ordinary glass,” said Schüler.

As a result, BLS has decided to install the new windows in most of its 36 NINA regional trains, replacing the old, non-insulating windows. Installation will begin in September 2016 as part of the company’s train modernization program. “Our commitment will help bring to market an innovative product designed to improve the energy efficiency of trains without compromising mobile reception for passengers,” said Quentin Sauvagnat, NINA fleet manager at BLS. Thanks to this product, those expensive signal repeaters will no longer be needed.

Are frequency-selective buildings next?

This proven and developed technology could be applied to buildings next. This is because, according to Schüler, “some glass buildings also act like Faraday cages. And as the internet of things continues to grow, there is a real interest in improving the properties of building materials that allow mobile signals through. More broadly, by making materials more frequency-selective, we could, for example, imagine a building that lets electromagnetic waves through but blocks Wi-Fi waves, thus enhancing corporate security.”

I have a friend who may find this train window innovation quite handy. As for frequency selective buildings, I imagine that would open up many possibilities for hackers.

Could your photo be a solar cell?

Scientists at Aalto University (Finland) have found a way to print photographs that produce energy (like a solar cell does) according to a July 25, 2016 news item on Nanowerk,

Solar cells have long been manufactured from inexpensive materials with different printing techniques. Organic solar cells and dye-sensitized solar cells in particular are suitable for printing.

“We wanted to take the idea of printed solar cells even further, and see if their materials could be inkjet-printed as pictures and text like traditional printing inks,” says University Lecturer Janne Halme.

A semi-transparent dye-sensitized solar cell with inkjet-printed photovoltaic portraits of the Aalto researchers (Ghufran Hashmi, Merve Özkan, Janne Halme) and a QR code that links to the original research paper. Courtesy: Aalto University

A July 26, 2016 Aalto University press release, which originated the news item, describes the innovation in more detail,

When light is absorbed in an ordinary ink, it generates heat. A photovoltaic ink, however, converts part of that energy to electricity. The darker the color, the more electricity is produced, because the human eye is most sensitive to the part of the solar radiation spectrum which has the highest energy density. The most efficient solar cell is therefore pitch-black.

The idea of a colorful, patterned solar cell is to combine other uses of light, such as visual information and graphics, on the same surface.

“For example, installed on a sufficiently low-power electrical device, this kind of solar cell could be part of its visual design and at the same time produce energy for its needs,” ponders Halme.

With inkjet printing, the photovoltaic dye could be printed to a shape determined by a selected image file, and the darkness and transparency of the different parts of the image could be adjusted accurately.

“The inkjet-dyed solar cells were as efficient and durable as the corresponding solar cells prepared in a traditional way. They endured more than one thousand hours of continuous light and heat stress without any signs of performance degradation,” says Postdoctoral Researcher Ghufran Hashmi.

The dye and electrolyte that turned out to be best were obtained from the research group in the Swiss École Polytechnique Fédérale de Lausanne, where Dr. Hashmi worked as a visiting researcher.

“The most challenging thing was to find a suitable solvent for the dye and the right jetting parameters that gave precise and uniform print quality,” says Doctoral Candidate Merve Özkan.

This puts solar cells (pun alert) in a whole new light.

Here’s a link to and a citation for the paper,

Dye-sensitized solar cells with inkjet-printed dyes by Syed Ghufran Hashmi, Merve Özkan, Janne Halme, Shaik Mohammed Zakeeruddin, Jouni Paltakari, Michael Grätzel, and Peter D. Lund. Energy Environ. Sci., 2016,9, 2453-2462 DOI: 10.1039/C6EE00826G First published online 09 Jun 2016

This paper is behind a paywall.

Osmotic power: electricity generated with water, salt and a 3-atoms-thick membrane

EPFL researchers have developed a system that generates electricity from osmosis with unparalleled efficiency. Their work, featured in “Nature”, uses seawater, fresh water, and a new type of membrane just three atoms thick.

A July 13, 2016 news item on Nanowerk highlights  research on osmotic power at École polytechnique fédérale de Lausanne (EPFL; Switzerland),

Proponents of clean energy will soon have a new source to add to their existing array of solar, wind, and hydropower: osmotic power. Or more specifically, energy generated by a natural phenomenon occurring when fresh water comes into contact with seawater through a membrane.

Researchers at EPFL’s Laboratory of Nanoscale Biology have developed an osmotic power generation system that delivers never-before-seen yields. Their innovation lies in a membrane, just three atoms thick, that separates the two fluids. …

A July 14, 2016 EPFL press release (also on EurekAlert but published July 13, 2016), which originated the news item, describes the research,

The concept is fairly simple. A semipermeable membrane separates two fluids with different salt concentrations. Salt ions travel through the membrane until the salt concentrations in the two fluids reach equilibrium. That phenomenon is precisely osmosis.

If the system is used with seawater and fresh water, salt ions in the seawater pass through the membrane into the fresh water until both fluids have the same salt concentration. And since an ion is simply an atom with an electrical charge, the movement of the salt ions can be harnessed to generate electricity.

A selective membrane, just 3 atoms thick, that does the job

EPFL’s system consists of two liquid-filled compartments separated by a thin membrane made of molybdenum disulfide. The membrane has a tiny hole, or nanopore, through which seawater ions pass into the fresh water until the two fluids’ salt concentrations are equal. As the ions pass through the nanopore, their electrons are transferred to an electrode – which is what is used to generate an electric current.

Thanks to its properties, the membrane allows positively charged ions to pass through while pushing away most of the negatively charged ones. That creates a voltage between the two liquids as one builds up a positive charge and the other a negative charge. This voltage is what drives the current generated by the transfer of ions.
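
The voltage such a charge-selective membrane can develop is bounded by the Nernst relation, E = (RT/zF)·ln(c_high/c_low). A quick estimate with typical textbook salt concentrations (the concentrations here are assumptions, not the paper's values):

```python
import math

# Nernst potential across an ideally cation-selective membrane:
#   E = (R * T / (z * F)) * ln(c_high / c_low)
R = 8.314     # gas constant, J/(mol*K)
T = 298.15    # temperature, K (room temperature)
F = 96485.0   # Faraday constant, C/mol
z = 1         # charge of the permeating ion (e.g. Na+ or K+)

def nernst(c_high, c_low):
    return (R * T) / (z * F) * math.log(c_high / c_low)

# Illustrative concentrations (assumed): seawater vs. fresh water
c_sea, c_fresh = 0.6, 0.01   # mol/L
E = nernst(c_sea, c_fresh)
print(f"membrane potential ~ {E * 1000:.0f} mV")
```

A ratio of 60:1 in concentration yields on the order of 100 mV, which is the scale of driving voltage an osmotic generator has to work with per membrane.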

“We had to first fabricate and then investigate the optimal size of the nanopore. If it’s too big, negative ions can pass through and the resulting voltage would be too low. If it’s too small, not enough ions can pass through and the current would be too weak,” said Jiandong Feng, lead author of the research.

What sets EPFL’s system apart is its membrane. In these types of systems, the current increases as the membrane gets thinner. And EPFL’s membrane is just a few atoms thick. The material it is made of – molybdenum disulfide – is ideal for generating an osmotic current. “This is the first time a two-dimensional material has been used for this type of application,” said Aleksandra Radenovic, head of the Laboratory of Nanoscale Biology.

Powering 50,000 energy-saving light bulbs with a 1 m² membrane

The potential of the new system is huge. According to their calculations, a 1 m² membrane with 30% of its surface covered by nanopores should be able to produce 1 MW of electricity – or enough to power 50,000 standard energy-saving light bulbs. And since molybdenum disulfide (MoS2) is easily found in nature or can be grown by chemical vapor deposition, the system could feasibly be ramped up for large-scale power generation. The major challenge in scaling up this process is finding out how to make relatively uniform pores.
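
The quoted figure is easy to sanity-check:

```python
# Sanity check of the press release's figures.
membrane_power = 1e6   # 1 MW claimed for a 1 m^2 membrane
bulbs = 50_000         # number of energy-saving bulbs to be powered

watts_per_bulb = membrane_power / bulbs
print(watts_per_bulb)  # -> 20.0, i.e. one 20 W energy-saving bulb each
```

Twenty watts is a plausible rating for a compact fluorescent bulb, so the two numbers in the press release are at least internally consistent.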

Until now, researchers have worked on a membrane with a single nanopore, in order to understand precisely what was going on. “From an engineering perspective, the single-nanopore system is ideal to further our fundamental understanding of membrane-based processes and provide useful information for industry-level commercialization,” said Jiandong Feng.

The researchers were able to run a nanotransistor from the current generated by a single nanopore and thus demonstrated a self-powered nanosystem. Low-power single-layer MoS2 transistors were fabricated in collaboration with Andras Kis’ team at EPFL, while molecular dynamics simulations were performed by collaborators at the University of Illinois at Urbana–Champaign.

Harnessing the potential of estuaries

EPFL’s research is part of a growing trend. For the past several years, scientists around the world have been developing systems that leverage osmotic power to create electricity. Pilot projects have sprung up in places such as Norway, the Netherlands, Japan, and the United States to generate energy at estuaries, where rivers flow into the sea. For now, the membranes used in most systems are organic and fragile, and deliver low yields. Some systems use the movement of water, rather than ions, to power turbines that in turn produce electricity.

Once the systems become more robust, osmotic power could play a major role in the generation of renewable energy. While solar panels require adequate sunlight and wind turbines adequate wind, osmotic energy can be produced just about any time of day or night – provided there’s an estuary nearby.

Here’s a link to and a citation for the paper,

Single-layer MoS2 nanopores as nanopower generators by Jiandong Feng, Michael Graf, Ke Liu, Dmitry Ovchinnikov, Dumitru Dumcenco, Mohammad Heiranian, Vishal Nandigana, Narayana R. Aluru, Andras Kis, & Aleksandra Radenovic. Nature (2016)  doi:10.1038/nature18593 Published online 13 July 2016

This paper is behind a paywall.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the workings of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once implemented on a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.
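
Rosenblatt's single-layer machine is small enough to reproduce in a few lines. A minimal perceptron with the classic error-correction rule; the training data (logical AND) is an illustrative stand-in for his "simple characters":

```python
# Minimal Rosenblatt-style perceptron: one layer, hard threshold,
# trained with the classic error-correction rule (illustrative data).
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)       # 0 if correct, +/-1 if not
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable toy problem:
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])   # -> [0, 0, 0, 1]
```

As the 1969 Minsky-Papert critique famously showed, a single layer like this can only learn linearly separable patterns, which is exactly the limitation the multilayer networks of the 1980s overcame.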

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
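
Marchand-Maillet's description maps almost line for line onto code. A minimal sketch of two layers (the weights are arbitrary illustrative values), in which each neurone computes a weighted sum of its inputs and "fires" through a smooth threshold:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs, passed through a smooth threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid: near 0 = silent, near 1 = firing

def layer(inputs, weight_rows, biases):
    """Each neurone in the layer sees all outputs of the previous layer."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two-layer forward pass with arbitrary illustrative weights:
x = [0.2, 0.8, 0.5]   # e.g. three pixel intensities
h = layer(x, [[1.0, -0.5, 0.3], [0.7, 0.7, -1.0]], [0.0, 0.1])
y = layer(h, [[1.5, -1.5]], [0.0])
print(y)   # a single output between 0 and 1
```

Training a real network consists of adjusting those weights and biases from examples; the forward pass itself is exactly this chain of weighted sums and thresholds.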

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, the limits of computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning,” says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short-Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as hidden Markov models,” says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite,” says Faltings [?].
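
The looping idea can be illustrated in miniature. The toy cell below (a deliberate simplification, not an actual LSTM) keeps a hidden state, so the same final input yields different outputs depending on what preceded it, just as 'oat' is read differently after 'b' or 'fl':

```python
import math

# Toy recurrent cell (illustrative only, far simpler than a real LSTM):
# the hidden state h summarises everything seen so far, so identical
# inputs can produce different outputs in different contexts.
def rnn_step(h, x, w_h=0.5, w_x=1.0):
    return math.tanh(w_h * h + w_x * x)

def encode(sequence):
    h = 0.0                    # start with an empty memory
    for x in sequence:
        h = rnn_step(h, x)     # loop the processed state back in
    return h

# Same final token (0.3, standing in for 'oat'), different prefixes:
h_boat = encode([0.9, 0.3])          # 'b'  then 'oat'
h_float = encode([-0.9, 0.5, 0.3])   # 'fl' then 'oat'
print(h_boat, h_float)   # the two states differ: the loop remembered the prefix
```

Real LSTMs add gates that decide what to keep and what to forget, which lets the memory span much longer sequences than this bare loop can.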

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.