Tag Archives: École Polytechnique Fédérale de Lausanne

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it's called, carries out its computations using artificial neural networks, a software architecture that mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. Fed thousands of accurately tagged images of cats, for example, a machine first learns to recognise those particular images and eventually any image of a cat, including ones it has never been shown.

Deep learning isn't new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain's cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once implemented on a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise specific visual characteristics of a shape; the deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This layered design can support calculations across thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone receives an input value, which it transforms using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the membrane potential) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
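
The weighted-sum-and-threshold behaviour described above is easy to sketch in code. Here is a minimal, hypothetical NumPy example (not taken from any of the groups mentioned in this post) showing two layers of artificial neurones, where each layer's weighted, thresholded outputs become the inputs of the next layer:

```python
# Minimal sketch of a two-layer feedforward pass: weight, sum and threshold.
import numpy as np

def layer_output(inputs, weights, biases):
    """One layer of artificial neurones: weighted sum, then fire only above threshold."""
    pre_activation = weights @ inputs + biases                 # weighted sum of the previous layer
    return np.where(pre_activation > 0, pre_activation, 0.0)   # threshold-like activation (ReLU)

rng = np.random.default_rng(0)
x = rng.random(4)                                   # toy input, e.g. four pixel values
w1, b1 = rng.standard_normal((8, 4)), np.zeros(8)   # first layer: 8 neurones
w2, b2 = rng.standard_normal((2, 8)), np.zeros(2)   # deeper layer: 2 neurones

hidden = layer_output(x, w1, b1)        # first layer extracts simple features
output = layer_output(hidden, w2, b2)   # next layer combines them into more abstract ones
print(output)
```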

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, the limits of computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful capability for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short-Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
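
To make the 'memory' idea concrete, here is a toy, self-contained NumPy sketch of a single LSTM step (an illustration of the general mechanism, not IDSIA's or Google's implementation): gated recurrence lets the network carry information such as "the previous sound was 'b' or 'fl'" forward in time, so earlier inputs can still influence the interpretation of '...oat'.

```python
# Toy LSTM cell: the cell state c acts as a memory carried across time steps.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: update the memory c and the output h from the new input x."""
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)                        # input, forget, output gates + candidate
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # keep part of the old memory, write new
    h = sigmoid(o) * np.tanh(c)                        # expose part of the memory as output
    return h, c

rng = np.random.default_rng(1)
hidden, inputs = 16, 8
W = rng.standard_normal((4 * hidden, inputs + hidden)) * 0.1
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for sound in rng.random((5, inputs)):   # feed a short sequence of "sounds", one step at a time
    h, c = lstm_step(sound, h, c, W, b)
print(h[:4])  # the final state still carries information about the earliest inputs
```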

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Cleaning up nuclear waste gases with nanotechnology-enabled materials

Swiss and US scientists have developed a nanoporous crystal that could be used to clean up nuclear waste gases according to a June 13, 2016 news item on Nanowerk (Note: A link has been removed),

An international team of scientists at EPFL [École polytechnique fédérale de Lausanne in Switzerland] and in the US has discovered a material that can clear out radioactive waste from nuclear plants more efficiently, cheaply, and safely than current methods.

Nuclear energy is one of the cheapest alternatives to carbon-based fossil fuels. But nuclear-fuel reprocessing plants generate waste gas that is currently too expensive and dangerous to deal with. Scanning hundreds of thousands of materials, scientists led by EPFL and their US colleagues have now discovered a material that can absorb nuclear waste gases much more efficiently, cheaply and safely. The work is published in Nature Communications (“Metal–organic framework with optimally selective xenon adsorption and separation”).

A June 14, 2016 EPFL press release (also on EurekAlert), which originated the news item, explains further,

Nuclear-fuel reprocessing plants generate volatile radionuclides such as xenon and krypton, which escape in the so-called “off-gas” of these facilities – the gases emitted as byproducts of the chemical process. Current ways of capturing and clearing out these gases involve distillation at very low temperatures, which is expensive in terms of both energy and capital costs, and poses a risk of explosion.

Scientists led by Berend Smit’s lab at EPFL (Sion) and colleagues in the US have now identified a material that can be used as an efficient, cheaper, and safer alternative to separate xenon and krypton – and at room temperature. The material, abbreviated as SBMOF-1, is a nanoporous crystal and belongs to a class of materials that are currently used to clear out CO2 emissions and other dangerous pollutants. These materials are also very versatile, and scientists can tweak them to self-assemble into ordered, pre-determined crystal structures. In this way, they can synthesize millions of tailor-made materials that can be optimized for gas storage and separation, catalysis, chemical sensing and optics.

The scientists carried out high-throughput screening of large material databases of over 125,000 candidates. To do this, they used molecular simulations to find structures that can separate xenon and krypton, and under conditions that match those involved in reprocessing nuclear waste.

Because xenon has a much shorter half-life than krypton – a month versus a decade – the scientists had to find a material that would be selective for both but would capture them separately. As xenon is used in commercial lighting, propulsion, imaging, anesthesia and insulation, it can also be sold back into the chemical market to offset costs.

The scientists identified and confirmed that SBMOF-1 shows remarkable xenon capturing capacity and xenon/krypton selectivity under nuclear-plant conditions and at room temperature.

The US partners have also made an announcement with this June 13, 2016 Pacific Northwest National Laboratory (PNNL) news release (also on EurekAlert). Note: It is a little repetitive but there’s good additional information,

Researchers are investigating a new material that might help in nuclear fuel recycling and waste reduction by capturing certain gases released during reprocessing. Conventional technologies to remove these radioactive gases operate at extremely low, energy-intensive temperatures. By working at ambient temperature, the new material has the potential to save energy, make reprocessing cleaner and less expensive. The reclaimed materials can also be reused commercially.

Appearing in Nature Communications, the work is a collaboration between experimentalists and computer modelers exploring the characteristics of materials known as metal-organic frameworks.

“This is a great example of computer-inspired material discovery,” said materials scientist Praveen Thallapally of the Department of Energy’s Pacific Northwest National Laboratory. “Usually the experimental results are more realistic than computational ones. This time, the computer modeling showed us something the experiments weren’t telling us.”

Waste avoidance

Recycling nuclear fuel can reuse uranium and plutonium — the majority of the used fuel — that would otherwise be destined for waste. Researchers are exploring technologies that enable safe, efficient, and reliable recycling of nuclear fuel for use in the future.

A multi-institutional, international collaboration is studying materials to replace costly, inefficient recycling steps. One important step is collecting radioactive gases xenon and krypton, which arise during reprocessing. To capture xenon and krypton, conventional technologies use cryogenic methods in which entire gas streams are brought to a temperature far below where water freezes — such methods are energy intensive and expensive.

Thallapally, working with Maciej Haranczyk and Berend Smit of Lawrence Berkeley National Laboratory [LBNL] and others, has been studying materials called metal-organic frameworks, also known as MOFs, that could potentially trap xenon and krypton without having to use cryogenics.

These materials have tiny pores inside, so small that often only a single molecule can fit inside each pore. When one gas species has a higher affinity for the pore walls than other gas species, metal-organic frameworks can be used to separate gaseous mixtures by selectively adsorbing the more strongly bound species.
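
For readers who want a feel for the numbers involved, adsorption selectivity is commonly defined as the ratio of adsorbed amounts normalised by the partial pressures in the gas phase. The sketch below uses made-up values purely for illustration; they are not the figures reported in the paper:

```python
# Illustrative xenon/krypton adsorption selectivity: S = (q_Xe / q_Kr) / (p_Xe / p_Kr).

def adsorption_selectivity(q_xe, q_kr, p_xe, p_kr):
    """Selectivity of xenon over krypton from adsorbed amounts and partial pressures."""
    return (q_xe / q_kr) / (p_xe / p_kr)

# Reprocessing off-gas contains far more krypton than xenon; all numbers are hypothetical.
p_xe, p_kr = 0.002, 0.018   # assumed partial pressures (bar)
q_xe, q_kr = 1.2, 0.15      # assumed equilibrium uptakes on the MOF (mmol/g)
print(adsorption_selectivity(q_xe, q_kr, p_xe, p_kr))  # ≈ 72, i.e. strongly xenon-selective
```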

To find the best MOF for xenon and krypton separation, computational chemists led by Haranczyk and Smit screened 125,000 possible MOFs for their ability to trap the gases. Although these gases can come in radioactive varieties, they are part of a group of chemically inert elements called “noble gases.” The team used computing resources at NERSC, the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at LBNL.

“Identifying the optimal material for a given process, out of thousands of possible structures, is a challenge due to the sheer number of materials. Given that the characterization of each material can take up to a few hours of simulations, the entire screening process may fill a supercomputer for weeks,” said Haranczyk. “Instead, we developed an approach to assess the performance of materials based on their easily computable characteristics. In this case, seven different characteristics were necessary for predicting how the materials behaved, and our team’s grad student Cory Simon’s application of machine learning techniques greatly sped up the material discovery process by eliminating those that didn’t meet the criteria.”
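
The pre-filtering idea Haranczyk describes (compute a handful of cheap descriptors per structure, then use machine learning to skip candidates unlikely to perform) can be illustrated with a small, entirely hypothetical sketch. This is not the team's NERSC workflow; the descriptors, labels and model below are stand-ins:

```python
# Hypothetical screening pre-filter: cheap descriptors plus a classifier to prune
# candidates before running expensive molecular simulations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_materials, n_descriptors = 125_000, 7            # e.g. pore diameter, surface area, density...
descriptors = rng.random((n_materials, n_descriptors))

# Pretend full simulations were run on a small subset, labelling the good performers.
train_idx = rng.choice(n_materials, size=2_000, replace=False)
train_labels = (descriptors[train_idx, 0] < 0.3) & (descriptors[train_idx, 1] > 0.5)  # stand-in rule

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(descriptors[train_idx], train_labels)

# Only candidates the model flags as promising go on to full simulation.
promising = np.flatnonzero(model.predict(descriptors))
print(f"{promising.size} of {n_materials} candidates kept for detailed simulation")
```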

The team’s models identified the MOF that trapped xenon most selectively and had a pore size close to the size of a xenon atom — SBMOF-1, which they then tested in the lab at PNNL.

After optimizing the preparation of SBMOF-1, Thallapally and his team at PNNL tested the material by running a mixture of gases through it — including a non-radioactive form of xenon and krypton — and measuring what came out the other end. Oxygen, helium, nitrogen, krypton, and carbon dioxide all beat xenon out. This indicated that xenon becomes trapped within SBMOF-1’s pores until the gas saturates the material.

Other tests also showed that in the absence of xenon, SBMOF-1 captures krypton. During actual separations, then, operators would pass the gas streams through SBMOF-1 twice to capture both gases.

The team also tested SBMOF-1’s ability to hang onto xenon in conditions of high humidity. Humidity interferes with cryogenics, and gases must be dehydrated before putting them through the ultra-cold method, another time-consuming expense. SBMOF-1, however, performed quite admirably, retaining in high humidity more than 85 percent of the xenon it captured in dry conditions.

The final step in collecting xenon or krypton gas would be to put the MOF material under a vacuum, which sucks the gas out of the molecular cages for safe storage. A last laboratory test examined how stable the material was by repeatedly filling it up with xenon gas and then vacuuming out the xenon. After 10 cycles of this, SBMOF-1 collected just as much xenon as the first cycle, indicating a high degree of stability for long-term use.

Thallapally attributes this stability to the manner in which SBMOF-1 interacts with xenon. Rather than chemical reactions between the molecular cages and the gases, the relationship is purely physical. The material can last a lot longer without constantly going through chemical reactions, he said.

A model finding

Although the researchers showed that SBMOF-1 is a good candidate for nuclear fuel reprocessing, getting these results wasn’t smooth sailing. In the lab, the researchers had followed a previously worked out protocol from Stony Brook University to prepare SBMOF-1. Part of that protocol requires them to “activate” SBMOF-1 by heating it up to 300 degrees Celsius, three times the temperature of boiling water.

Activation cleans out material left in the pores from MOF synthesis. Laboratory tests of the activated SBMOF-1, however, showed the material didn’t behave as well as it should, based on the computer modeling results.

The researchers at PNNL repeated the lab experiments. This time, however, they activated SBMOF-1 at a lower temperature, 100 degrees Celsius, or the actual temperature of boiling water. Subjecting the material to the same lab tests, the researchers found SBMOF-1 behaving as expected, and better than at the higher activation temperature.

But why? To figure out where the discrepancy came from, the researchers modeled what happened to SBMOF-1 at 300 degrees Celsius. Unexpectedly, the pores squeezed in on themselves.

“When we heated the crystal that high, atoms within the pore tilted and partially blocked the pores,” said Thallapally. “The xenon doesn’t fit.”

Armed with these new computational and experimental insights, the researchers can explore SBMOF-1 and other MOFs further for nuclear fuel recycling. These MOFs might also be able to capture other noble gases such as radon, a gas known to pool in some basements.

Researchers hailed from several other institutions as well as those listed earlier, including University of California, Berkeley, Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, Brookhaven National Laboratory, and IMDEA Materials Institute in Spain. This work was supported by the [US] Department of Energy Offices of Nuclear Energy and Science.

Here’s an image the researchers have provided to illustrate their work,

Caption: The crystal structure of SBMOF-1 (green = Ca, yellow = S, red = O, gray = C, white = H). The light blue surface is a visualization of the one-dimensional channel that SBMOF-1 creates for the gas molecules to move through. The darker blue surface illustrates where a Xe atom sits in the pores of SBMOF-1 when it adsorbs. Credit: Berend Smit/EPFL/University of California Berkeley

Here’s a link to and a citation for the paper,

Metal–organic framework with optimally selective xenon adsorption and separation by Debasis Banerjee, Cory M. Simon, Anna M. Plonka, Radha K. Motkuri, Jian Liu, Xianyin Chen, Berend Smit, John B. Parise, Maciej Haranczyk, & Praveen K. Thallapally. Nature Communications 7, Article number: 11831 doi:10.1038/ncomms11831 Published 13 June 2016

This paper is open access.

Final comment: this is the second time in the last month I’ve stumbled across more positive approaches to nuclear energy. The first time was a talk (Why Nuclear Power is Necessary) held in Vancouver, Canada in May 2016 (details here). I’m not trying to suggest anything unduly sinister, but it is interesting since, for most of my adult life, nuclear power has been viewed with fear and suspicion.

The Weyl fermion and new electronics

This story concerns a quasiparticle (Weyl fermion) which is a different kind of particle than the nanoparticles usually mentioned here. A March 17, 2016 news item on Nanowerk profiles research that suggests the Weyl fermion may find applications in the field of electronics,

The Weyl fermion, just discovered in the past year, moves through materials practically without resistance. Now researchers are showing how it could be put to use in electronic components.

Today electronic devices consume a lot of energy and require elaborate cooling mechanisms. One approach for the development of future energy-saving electronics is to use special particles that exist only in the interior of materials but can move there practically undisturbed. Electronic components based on these so-called Weyl fermions would consume considerably less energy than present-day chips. That’s because up to now devices have relied on the movement of electrons, which is inhibited by resistance and thus wastes energy.

Evidence for Weyl fermions was discovered only in the past year, by several research teams including scientists from the Paul Scherrer Institute (PSI). Now PSI researchers have shown — within the framework of an international collaboration with two research institutions in China and the two Swiss technical universities, ETH Zurich and EPF Lausanne — that there are materials in which only one kind of Weyl fermion exists. That could prove decisive for applications in electronic components, because it makes it possible to guide the particles’ flow in the material.

A March 17, 2016 Paul Scherrer Institute (PSI) press release by Paul Piwnicki, which originated the news item, describes the work in more detail (Note: There is some redundancy),

In the past year, researchers of the Paul Scherrer Institute PSI were among those who found experimental evidence for a particle whose existence had been predicted in the 1920s — the Weyl fermion. One of the particle’s peculiarities is that it can only exist in the interior of materials. Now the PSI researchers, together with colleagues at two Chinese research institutions as well as at ETH Zurich and EPF Lausanne, have made a subsequent discovery that opens the possibility of using the movement of Weyl fermions in future electronic devices. …

Today’s computer chips use the flow of electrons that move through the device’s conductive channels. Because, along the way, electrons are always colliding with each other or with other particles in the material, a relatively high amount of energy is needed to maintain the flow. That means not only that the device wastes a lot of energy, but also that it heats itself up enough to necessitate an elaborate cooling mechanism, which in turn requires additional space and energy.

In contrast, Weyl fermions move virtually undisturbed through the material and thus encounter practically no resistance. “You can compare it to driving on a highway where all of the cars are moving freely in the same direction,” explains Ming Shi, a senior scientist at the PSI. “The electron flow in present-day chips is more comparable to driving in congested city traffic, with cars coming from all directions and getting in each other’s way.”

Important for electronics: only one kind of particle

While in the materials examined last year there were always several kinds of Weyl fermions, all moving in different ways, the PSI researchers and their colleagues have now produced a material in which only one kind of Weyl fermion occurs. “This is important for applications in electronics, because here you must be able to precisely steer the particle flow,” explains Nan Xu, a postdoctoral researcher at the PSI.

Weyl fermions are named for the German mathematician Hermann Weyl, who predicted their existence in 1929. These particles have some striking characteristics, such as having no mass and moving at the speed of light. Weyl fermions were observed as quasiparticles in so-called Weyl semimetals. In contrast to “real” particles, quasiparticles can only exist inside materials. Weyl fermions are generated through the collective motion of electrons in suitable materials. In general, quasiparticles can be compared to waves on the surface of a body of water — without the water, the waves would not exist. At the same time, their movement is independent of the water’s motion.

The material that the researchers have now investigated is a compound of the chemical elements tantalum and phosphorus, with the chemical formula TaP. The crucial experiments were carried out with X-rays at the Swiss Light Source (SLS) of the Paul Scherrer Institute.

Studying novel materials with properties that could make them useful in future electronic devices is a central research area of the Paul Scherrer Institute. In the process, the researchers pursue a variety of approaches and use many different experimental methods.

Here’s a link to and a citation for the paper,

Observation of Weyl nodes and Fermi arcs in tantalum phosphide by N. Xu, H. M. Weng, B. Q. Lv, C. E. Matt, J. Park, F. Bisti, V. N. Strocov, D. Gawryluk, E. Pomjakushina, K. Conder, N. C. Plumb, M. Radovic, G. Autès, O. V. Yazyev, Z. Fang, X. Dai, T. Qian, J. Mesot, H. Ding & M. Shi. Nature Communications 7, Article number: 11006  doi:10.1038/ncomms11006 Published 17 March 2016

This paper is open access.

Identifying performance problems in nanoresonators

Use of nanoelectromechanical systems (NEMS) can now be maximised due to a technique developed by researchers at the Commissariat a l’Energie Atomique (CEA) and the University of Grenoble-Alpes (France). From a March 7, 2016 news item on ScienceDaily,

A joint CEA / University of Grenoble-Alpes research team, together with their international partners, have developed a diagnostic technique capable of identifying performance problems in nanoresonators, a type of nanodetector used in research and industry. These nanoelectromechanical systems, or NEMS, have never been used to their maximum capabilities. The detection limits observed in practice have always been well below the theoretical limit and, until now, this difference has remained unexplained. Using a totally new approach, the researchers have now succeeded in evaluating and explaining this phenomenon. Their results, described in the February 29 [2016] issue of Nature Nanotechnology, should now make it possible to find ways of overcoming this performance shortfall.

A Feb. 29, 2016 CEA press release, which originated the news item, provides more detail about NEMS and about the new technique,

NEMS have many applications, including the measurement of mass or force. Like a tiny violin string, a nanoresonator vibrates at a precise resonant frequency. This frequency changes if gas molecules or biological particles settle on the nanoresonator surface. This change in frequency can then be used to detect or identify the substance, enabling a medical diagnosis, for example. The extremely small dimensions of these devices (less than one millionth of a meter) make the detectors highly sensitive.
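
The underlying mass-sensing principle can be written down in one line: for a resonator with effective mass m_eff, a small added mass Δm shifts the resonant frequency by roughly Δf/f0 ≈ -Δm/(2·m_eff). The values in the sketch below are illustrative assumptions, not measurements from the CEA-Leti devices:

```python
# Estimate adsorbed mass from a resonant-frequency shift: dm ≈ -2 * m_eff * (df / f0).

def added_mass_from_shift(f0_hz, df_hz, m_eff_kg):
    """Adsorbed mass implied by a measured shift of the resonant frequency."""
    return -2.0 * m_eff_kg * (df_hz / f0_hz)

f0 = 20e6       # assumed resonant frequency: 20 MHz
m_eff = 1e-16   # assumed effective mass: ~100 femtograms
df = -100.0     # assumed downward shift of 100 Hz after molecules settle on the surface
dm = added_mass_from_shift(f0, df, m_eff)
print(f"estimated added mass: {dm:.2e} kg (~{dm * 1e21:.1f} attograms)")
```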

However, this resolution is constrained by a detection limit. Background noise is present in addition to the wanted measurement signal. Researchers have always considered this background noise to be an intrinsic characteristic of these systems (see Figure 2 [not reproduced here]). Although the noise levels are significantly greater than theory predicts, the impossibility of explaining the underlying phenomena has, until now, led the research community to ignore them.

The CEA-Leti research team and their partners reviewed all the frequency stability measurements in the literature, and identified a difference of several orders of magnitude between the accepted theoretical limits and experimental measurements.

In addition to evaluating this shortfall, the researchers also developed a diagnostic technique that could be applied to each individual nanoresonator, using their own high-purity monocrystalline silicon resonators to investigate the problem.

The resonant frequency of a nanoresonator is determined by the geometry of the resonator and the type of material used in its manufacture. It is therefore theoretically fixed. By forcing the resonator to vibrate at defined frequencies close to the resonant frequency, the CEA-Leti researchers have been able to demonstrate a secondary effect that interferes with the resolution of the system and its detection limit in addition to the background noise. This effect causes slight variations in the resonant frequency. These fluctuations in the resonant frequency result from the extreme sensitivity of these systems. While capable of detecting tiny changes in mass and force, they are also very sensitive to minute variations in temperature and the movements of molecules on their surface. At the nano scale, these parameters cannot be ignored as they impose a significant limit on the performance of nanoresonators. For example, a tiny change in temperature can change the parameters of the device material, and hence its frequency. These variations can be rapid and random.
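
In the literature, this kind of frequency stability is usually quantified with the Allan deviation of the measured resonance frequency. The following short sketch is a generic illustration (not the authors' analysis code), using simulated readings:

```python
# Allan deviation at a single averaging time: sigma_y = sqrt(0.5 * <(y_{k+1} - y_k)^2>).
import numpy as np

def allan_deviation(fractional_freq):
    """Allan deviation of a series of fractional-frequency samples."""
    diffs = np.diff(fractional_freq)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(0)
f0 = 20e6                                        # assumed nominal resonant frequency (Hz)
freq_samples = f0 + rng.normal(0.0, 2.0, 1000)   # simulated noisy frequency readings
y = (freq_samples - f0) / f0                     # fractional frequency deviations
print(f"Allan deviation: {allan_deviation(y):.2e}")
```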

The experimental technique developed by the team makes it possible to evaluate the loss of resolution and to determine whether it is caused by the intrinsic limits of the system or by a secondary fluctuation that can therefore be corrected. A patent application has been filed covering this technique. The research team has also shown that none of the theoretical hypotheses so far advanced to explain these fluctuations in the resonant frequency can currently explain the observed level of variation.

The research team will therefore continue experimental work to explore the physical origin of these fluctuations, with the aim of achieving a significant improvement in the performance of nanoresonators.

The Swiss Federal Institute of Technology in Lausanne, the Indian Institute of Science in Bangalore, and the California Institute of Technology (USA) have also participated in this study. The authors have received funding from the Leti Carnot Institute (NEMS-MS project) and the European Union (ERC Consolidator Grant – Enlightened project).

Here’s a link to and a citation for the paper,

Frequency fluctuations in silicon nanoresonators by Marc Sansa, Eric Sage, Elizabeth C. Bullard, Marc Gély, Thomas Alava, Eric Colinet, Akshay K. Naik, Luis Guillermo Villanueva, Laurent Duraffourg, Michael L. Roukes, Guillaume Jourdan & Sébastien Hentz. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.19 Published online 29 February 2016

This paper is behind a paywall.

Feeling with a bionic finger

From what I understand one of the most difficult aspects of an amputation is the loss of touch, so, bravo to the engineers. From a March 8, 2016 news item on ScienceDaily,

An amputee was able to feel smoothness and roughness in real-time with an artificial fingertip that was surgically connected to nerves in his upper arm. Moreover, the nerves of non-amputees can also be stimulated to feel roughness, without the need of surgery, meaning that prosthetic touch for amputees can now be developed and safely tested on intact individuals.

The technology to deliver this sophisticated tactile information was developed by Silvestro Micera and his team at EPFL (Ecole polytechnique fédérale de Lausanne) and SSSA (Scuola Superiore Sant’Anna) together with Calogero Oddo and his team at SSSA. The results, published today in eLife, provide new and accelerated avenues for developing bionic prostheses, enhanced with sensory feedback.

A March 8, 2016 EPFL press release (also on EurekAlert), which originated the news item, provides more information about Sørensen’s experience and about the other tests the research team performed,

“The stimulation felt almost like what I would feel with my hand,” says amputee Dennis Aabo Sørensen about the artificial fingertip connected to his stump. He continues, “I still feel my missing hand, it is always clenched in a fist. I felt the texture sensations at the tip of the index finger of my phantom hand.”

Sørensen is the first person in the world to recognize texture using a bionic fingertip connected to electrodes that were surgically implanted above his stump.

Nerves in Sørensen’s arm were wired to an artificial fingertip equipped with sensors. A machine controlled the movement of the fingertip over different pieces of plastic engraved with different patterns, smooth or rough. As the fingertip moved across the textured plastic, the sensors generated an electrical signal. This signal was translated into a series of electrical spikes, imitating the language of the nervous system, then delivered to the nerves.
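
One simple way to picture the "translation into spikes" step is rate coding: the stronger the sensor signal, the more spikes per second are delivered. The sketch below is a generic, hypothetical illustration of that idea, not the actual EPFL/SSSA stimulation protocol:

```python
# Hypothetical rate-coding of a texture signal into spike times.
import numpy as np

def signal_to_spike_times(signal, dt, max_rate_hz=300.0):
    """Encode a normalised sensor signal (0..1) as spike times via Poisson-like rate coding."""
    rng = np.random.default_rng(0)
    rates = np.clip(signal, 0.0, 1.0) * max_rate_hz   # instantaneous firing rate
    fire = rng.random(signal.size) < rates * dt       # spike with probability rate * dt per step
    return np.flatnonzero(fire) * dt

dt = 1e-3                                             # 1 ms time steps
t = np.arange(0.0, 2.0, dt)
rough_texture = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 8 * t))  # coarse ridges -> on/off signal
spikes = signal_to_spike_times(rough_texture, dt)
print(f"{spikes.size} spikes in 2 s; first few at {spikes[:5]} s")
```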

Sørensen could distinguish between rough and smooth surfaces 96% of the time.

In a previous study, Sørensen’s implants were connected to a sensory-enhanced prosthetic hand that allowed him to recognize shape and softness. In this new publication about texture in the journal eLife, the bionic fingertip attains a superior level of touch resolution.

Simulating touch in non-amputees

This same experiment testing coarseness was performed on non-amputees, without the need of surgery. The tactile information was delivered through fine needles that were temporarily attached to the arm’s median nerve through the skin. The non-amputees were able to distinguish roughness in textures 77% of the time.

But does this information about touch from the bionic fingertip really resemble the feeling of touch from a real finger? The scientists tested this by comparing brain-wave activity of the non-amputees, once with the artificial fingertip and then with their own finger. The brain scans collected by an EEG cap on the subject’s head revealed that activated regions in the brain were analogous.

The research demonstrates that the needles relay the information about texture in much the same way as the implanted electrodes, giving scientists new protocols for accelerating improvements in touch resolution for prosthetics.

“This study merges fundamental sciences and applied engineering: it provides additional evidence that research in neuroprosthetics can contribute to the neuroscience debate, specifically about the neuronal mechanisms of the human sense of touch,” says Calogero Oddo of the BioRobotics Institute of SSSA. “It will also be translated to other applications such as artificial touch in robotics for surgery, rescue, and manufacturing.”

Here’s a link to and a citation for the paper,

Intraneural stimulation elicits discrimination of textural features by artificial fingertip in intact and amputee humans by Calogero Maria Oddo, Stanisa Raspopovic, Fiorenzo Artoni, Alberto Mazzoni, Giacomo Spigler, Francesco Petrini, Federica Giambattistelli, Fabrizio Vecchio, Francesca Miraglia, Loredana Zollo, Giovanni Di Pino, Domenico Camboni, Maria Chiara Carrozza, Eugenio Guglielmelli, Paolo Maria Rossini, Ugo Faraguna, Silvestro Micera. eLife, 2016; 5 DOI: 10.7554/eLife.09148 Published March 8, 2016

This paper appears to be open access.

Blue Brain Project builds a digital piece of brain

Caption: This is a photo of a virtual brain slice. Credit: Markram et al./Cell 2015

Here’s more *about this virtual brain slice* from an Oct. 8, 2015 Cell (magazine) news release on EurekAlert,

If you want to learn how something works, one strategy is to take it apart and put it back together again [also known as reverse engineering]. For 10 years, a global initiative called the Blue Brain Project–hosted at the Ecole Polytechnique Federale de Lausanne (EPFL)–has been attempting to do this digitally with a section of juvenile rat brain. The project presents a first draft of this reconstruction, which contains over 31,000 neurons, 55 layers of cells, and 207 different neuron subtypes, on October 8 [2015] in Cell.

Heroic efforts are currently being made to define all the different types of neurons in the brain, to measure their electrical firing properties, and to map out the circuits that connect them to one another. These painstaking efforts are giving us a glimpse into the building blocks and logic of brain wiring. However, getting a full, high-resolution picture of all the features and activity of the neurons within a brain region and the circuit-level behaviors of these neurons is a major challenge.

Henry Markram and colleagues have taken an engineering approach to this question by digitally reconstructing a slice of the neocortex, an area of the brain that has benefitted from extensive characterization. Using this wealth of data, they built a virtual brain slice representing the different neuron types present in this region and the key features controlling their firing and, most notably, modeling their connectivity, including nearly 40 million synapses and 2,000 connections between each brain cell type.

“The reconstruction required an enormous number of experiments,” says Markram, of the EPFL. “It paves the way for predicting the location, numbers, and even the amount of ion currents flowing through all 40 million synapses.”

Once the reconstruction was complete, the investigators used powerful supercomputers to simulate the behavior of neurons under different conditions. Remarkably, the researchers found that, by slightly adjusting just one parameter, the level of calcium ions, they could produce broader patterns of circuit-level activity that could not be predicted based on features of the individual neurons. For instance, slow synchronous waves of neuronal activity, which have been observed in the brain during sleep, were triggered in their simulations, suggesting that neural circuits may be able to switch into different “states” that could underlie important behaviors.

“An analogy would be a computer processor that can reconfigure to focus on certain tasks,” Markram says. “The experiments suggest the existence of a spectrum of states, so this raises new types of questions, such as ‘what if you’re stuck in the wrong state?'” For instance, Markram suggests that the findings may open up new avenues for explaining how initiating the fight-or-flight response through the adrenocorticotropic hormone yields tunnel vision and aggression.

The Blue Brain Project researchers plan to continue exploring the state-dependent computational theory while improving the model they’ve built. All of the results to date are now freely available to the scientific community at https://bbp.epfl.ch/nmc-portal.

An Oct. 8, 2015 Hebrew University of Jerusalem press release on the Canadian Friends of the Hebrew University of Jerusalem website provides more detail,

Published by the renowned journal Cell, the paper is the result of a massive effort by 82 scientists and engineers at EPFL and at institutions in Israel, Spain, Hungary, USA, China, Sweden, and the UK. It represents the culmination of 20 years of biological experimentation that generated the core dataset, and 10 years of computational science work that developed the algorithms and built the software ecosystem required to digitally reconstruct and simulate the tissue.

The Hebrew University of Jerusalem’s Prof. Idan Segev, a senior author of the research paper, said: “With the Blue Brain Project, we are creating a digital reconstruction of the brain and using supercomputer simulations of its electrical behavior to reveal a variety of brain states. This allows us to examine brain phenomena within a purely digital environment and conduct experiments previously only possible using biological tissue. The insights we gather from these experiments will help us to understand normal and abnormal brain states, and in the future may have the potential to help us develop new avenues for treating brain disorders.”

Segev, a member of the Hebrew University’s Edmond and Lily Safra Center for Brain Sciences and director of the university’s Department of Neurobiology, sees the paper as building on the pioneering work of the Spanish anatomist Ramon y Cajal from more than 100 years ago: “Ramon y Cajal began drawing every type of neuron in the brain by hand. He even drew in arrows to describe how he thought the information was flowing from one neuron to the next. Today, we are doing what Cajal would be doing with the tools of the day: building a digital representation of the neurons and synapses, and simulating the flow of information between neurons on supercomputers. Furthermore, the digitization of the tissue is open to the community and allows the data and the models to be preserved and reused for future generations.”

While a long way from digitizing the whole brain, the study demonstrates that it is feasible to digitally reconstruct and simulate brain tissue, and most importantly, to reveal novel insights into the brain’s functioning. Simulating the emergent electrical behavior of this virtual tissue on supercomputers reproduced a range of previous observations made in experiments on the brain, validating its biological accuracy and providing new insights into the functioning of the neocortex. This is a first step and a significant contribution to Europe’s Human Brain Project, which Henry Markram founded, and where EPFL is the coordinating partner.

Cell has made a video abstract available (it can be found with the Hebrew University of Jerusalem press release)

Here’s a link to and a citation for the paper,

Reconstruction and Simulation of Neocortical Microcircuitry by Henry Markram, Eilif Muller, Srikanth Ramaswamy, Michael W. Reimann, Marwan Abdellah, Carlos Aguado Sanchez, Anastasia Ailamaki, Lidia Alonso-Nanclares, Nicolas Antille, Selim Arsever, Guy Antoine Atenekeng Kahou, Thomas K. Berger, Ahmet Bilgili, Nenad Buncic, Athanassia Chalimourda, Giuseppe Chindemi, Jean-Denis Courcol, Fabien Delalondre, Vincent Delattre, Shaul Druckmann, Raphael Dumusc, James Dynes, Stefan Eilemann, Eyal Gal, Michael Emiel Gevaert, Jean-Pierre Ghobril, Albert Gidon, Joe W. Graham, Anirudh Gupta, Valentin Haenel, Etay Hay, Thomas Heinis, Juan B. Hernando, Michael Hines, Lida Kanari, Daniel Keller, John Kenyon, Georges Khazen, Yihwa Kim, James G. King, Zoltan Kisvarday, Pramod Kumbhar, Sébastien Lasserre, Jean-Vincent Le Bé, Bruno R.C. Magalhães, Angel Merchán-Pérez, Julie Meystre, Benjamin Roy Morrice, Jeffrey Muller, Alberto Muñoz-Céspedes, Shruti Muralidhar, Keerthan Muthurasa, Daniel Nachbaur, Taylor H. Newton, Max Nolte, Aleksandr Ovcharenko, Juan Palacios, Luis Pastor, Rodrigo Perin, Rajnish Ranjan, Imad Riachi, José-Rodrigo Rodríguez, Juan Luis Riquelme, Christian Rössert, Konstantinos Sfyrakis, Ying Shi, Julian C. Shillcock, Gilad Silberberg, Ricardo Silva, Farhan Tauheed, Martin Telefont, Maria Toledo-Rodriguez, Thomas Tränkler, Werner Van Geit, Jafet Villafranca Díaz, Richard Walker, Yun Wang, Stefano M. Zaninetta, Javier DeFelipe, Sean L. Hill, Idan Segev, Felix Schürmann. Cell, Volume 163, Issue 2, p456–492, 8 October 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.09.029

This paper appears to be open access.

My most substantive description of the Blue Brain Project, previous to this, was in a Jan. 29, 2013 posting featuring the European Union’s (EU) Human Brain project and involvement from countries that are not members.

* I edited a redundant lede (That’s a virtual slice of a rat brain.), moved the second sentence to the lede while adding this:  *about this virtual brain slice* on Oct. 16, 2015 at 0955 hours PST.

Putting the speed on spin, spintronics that is

This is for physics fans, if you plan on looking at the published paper. Otherwise, the July 20, 2015 news item on ScienceDaily is more accessible to the rest of us,

In a tremendous boost for spintronic technologies, EPFL scientists have shown that electrons can jump through spins much faster than previously thought.

Electrons spin around atoms, but also spin around themselves, and can cross over from one spin state to another, a property that can be exploited for next-generation hard drives. However, “spin cross-over” has been considered too slow to be efficient. Using ultrafast measurements, EPFL scientists have now shown for the first time that electrons can cross spins at least 100,000 times faster than previously thought. Aside from its enormous implications for fundamental physics, the finding can also propel the field of spintronics forward. …

A July 20, 2015 EPFL press release on EurekAlert, which originated the news item, provides context for the research,

The rules of spin

Although difficult to describe in everyday terms, electron spin can be loosely compared to the rotation of a planet or a spinning top around its axis. Electrons can spin in different manners referred to as “spin states” and designated by the numbers 0, 1/2, 1, 3/2, 2 etc. During chemical reactions, electrons can cross from one spin state to another, e.g. from 0 to 1 or 1/2 to 3/2.

Spin cross-over is already used in many technologies, e.g. organic light-emitting diodes (OLEDs), energy conversion systems, and cancer phototherapy. Most prominently, spin cross-over is the basis of the fledgling field of spintronics. The problem is that spin cross-over has been thought to be too slow to be efficient enough in circuits.

Spin cross-over is extremely fast

The lab of Majed Chergui at EPFL has now demonstrated that spin cross-over is considerably faster than previously thought. Using the highest time-resolution technology in the world, the lab was able to “see” electrons crossing through four spin states within 50 quadrillionths of a second — or 50 femtoseconds.

“Time resolution has always been a limitation,” says Chergui. “Over the years, labs have used techniques that could only measure spin changes to a billionth to a millionth of a second. So they thought that spin cross-over happened in this timeframe.”

Chergui’s lab focused on materials that show much promise in spintronics applications. In these materials, electrons jump through four spin-states: from 0 to 1 to 2. In 2009, Chergui’s lab pushed the boundaries of time resolution to show that this 0-2 “jump” can happen within 150 femtoseconds — suggesting that it was a direct event. Despite this, the community still maintained that such spin cross-overs go through intermediate steps.

But Chergui had his doubts. Working with his postdoc Gerald Auböck, he used the lab’s world-recognized expertise in ultrafast spectroscopy to “crank up” the time resolution. Briefly, a laser shines on the material sample under investigation, causing its electrons to move. Another laser measures their spin changes over time in the ultraviolet light range.

The finding essentially demolishes the notion of intermediate steps between spin jumps, as it does not allow enough time for them: only 50 quadrillionths of a second to go from the “0” to the “2” spin state. This is the first study to ever push time resolution to this limit in the ultraviolet domain. “This probably means that it’s even faster,” says Chergui. “But, more importantly, that it is a direct process.”

From observation to explanation

With profound implications for both technology and fundamental physics and chemistry, the study is an observation without an explanation. Chergui believes that the key is electrons shuttling back-and-forth between the iron atom at the center of the material’s molecules and its surrounding elements. “When the laser light shines on the atom, it changes the electron’s spin angle, affecting the entire spin dynamics in the molecule.”

It is now up to theoreticians to develop a new model for ultrafast spin changes. On the experimental side of things, Chergui’s lab is now focusing on actually observing electrons shuttling inside the molecules. This will require even more sophisticated approaches, such as core-level spectroscopy. Nonetheless, the study challenges ideas about spin cross-over, and might offer long-awaited solutions to the limitations of spintronics.

Here’s a link to and citation for the paper,

Sub-50-fs photoinduced spin crossover in [Fe(bpy)3]2+ by Gerald Auböck & Majed Chergui. Nature Chemistry (2015) doi:10.1038/nchem.2305 Published online 20 July 2015

This paper is behind a paywall.

A ‘sweat’mometer—sensing your health through your sweat

At this point, it’s more fitness monitor than diagnostic tool, so, you’ll still need to submit blood, stool, and urine samples when the doctor requests it but the device does offer some tantalizing possibilities according to a May 15, 2015 news item on phys.org,

Made from state-of-the-art silicon transistors, an ultra-low power sensor enables real-time scanning of the contents of liquids such as perspiration. Compatible with advanced electronics, this technology boasts exceptional accuracy – enough to manufacture mobile sensors that monitor health.

Imagine that it is possible, through a tiny adhesive electronic stamp attached to the arm, to know in real time one’s level of hydration, stress or fatigue while jogging. A new sensor developed at the Nanoelectronic Devices Laboratory (Nanolab) at EPFL [École Polytechnique Fédérale de Lausanne in Switzerland] is the first step toward this application. “The ionic equilibrium in a person’s sweat could provide significant information on the state of his health,” says Adrian Ionescu, director of Nanolab. “Our technology detects the presence of elementary charged particles in ultra-small concentrations, such as ions and protons, which reflects not only the pH balance of sweat but also more complex hydration or fatigue states. With an adapted functionalization, I can also track different kinds of proteins.”

A May 15, 2015 EPFL press release by Laure-Anne Pessina, which originated the news item, includes a good technical explanation of the device for non-experts in the field,

Published in the journal ACS Nano, the device is based on transistors that are comparable to those used by the company Intel in advanced microprocessors. On the state-of-the-art “FinFET” transistor, researchers fixed a microfluidic channel through which the fluid to be analyzed flows. When the molecules pass, their electrical charge disturbs the sensor, which makes it possible to deduce the fluid’s composition.

The new device doesn’t host only sensors, but also transistors and circuits enabling the amplification of the signals – a significant innovation. The feat relies on a layered design that isolates the electronic part from the liquid substance. “Usually it is necessary to use separately a sensor for detection and a circuit for computing and signal amplification,” says Sara Rigante, lead author of the publication. “In our chip, sensors and circuits are in the same device – making it a ‘Sensing integrated circuit’. This proximity ensures that the signal is not disturbed or altered. We can thereby obtain extremely stable and accurate measurements.”

But that’s not all. Due to the size of the transistors – 20 nanometers, which is one hundred to one thousand times smaller than the thickness of a hair – it is possible to place a whole network of sensors on one chip, with each sensor locating a different particle. “We could also detect calcium, sodium or potassium in sweat,” the researcher elaborates.
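
As a rough illustration of how readings from an ion-sensitive transistor are often turned into a concentration value, the sketch below applies the textbook Nernstian limit of about 59 mV of threshold-voltage shift per pH unit at room temperature. This is a generic back-of-the-envelope example; it is not the Nanolab device's actual calibration, and real sensors must be calibrated against reference solutions:

```python
# Generic ISFET-style conversion: threshold-voltage shift -> estimated pH.

NERNST_SENSITIVITY_V_PER_PH = 0.0592   # ideal (Nernstian) sensitivity at ~25 °C

def ph_from_threshold_shift(delta_vth_v, ph_reference=7.0,
                            sensitivity=NERNST_SENSITIVITY_V_PER_PH):
    """Estimate pH from the threshold-voltage shift relative to a pH 7 calibration point."""
    return ph_reference + delta_vth_v / sensitivity

# Hypothetical reading: threshold voltage 30 mV below the pH 7 calibration value.
print(f"estimated pH: {ph_from_threshold_shift(-0.030):.2f}")  # ≈ 6.49
```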

As to what makes the device special (from the press release),

The technology developed at EPFL stands out from its competitors because it is extremely stable, compatible with existing electronics (CMOS), ultra-low power and easy to reproduce in large arrays of sensors. “In the field of biosensors, research around nanotechnology is intense, particularly regarding silicon nanowires and nanotubes. But these technologies are frequently unstable and therefore unusable for now in industrial applications,” says Ionescu. “In the case of our sensor, we started from extremely powerful, advanced technology and adapted it for sensing needs in a liquid-gated FinFET configuration. The precision of the electronics is such that it is easy to clone our device in millions with identical characteristics.”

In addition, the technology is not energy intensive. “We could feed 10,000 sensors with a single solar cell,” Professor Ionescu asserts.

Of course, there does seem to be one shortcoming (from the press release),

Thus far, the tests have been carried out by circulating the liquid with a tiny pump. Researchers are currently working on a means of sucking the sweat into the microfluidic tube via wicking. This would rid the small analyzing “band-aid” of the need for an attached pump.

While they work on eliminating the pump part of the device, here’s a link to and a citation for the paper,

Sensing with Advanced Computing Technology: Fin Field-Effect Transistors with High-k Gate Stack on Bulk Silicon by Sara Rigante, Paolo Scarbolo, Mathias Wipf, Ralph L. Stoop, Kristine Bedner, Elizabeth Buitrago, Antonios Bazigos, Didier Bouvet, Michel Calame, Christian Schönenberger, and Adrian M. Ionescu. ACS Nano, Article ASAP DOI: 10.1021/nn5064216 Publication Date (Web): March 27, 2015

This paper is behind a paywall.

As for the ‘sweat’mometer in the headline, I was combining sweat with thermometer.

Capturing the particle and the wave: photographing light

On returning to school to get a bachelor’s degree, I registered in a communications course and my first paper was about science, light, and communication. The particle/wave situation still fascinates me (and I imagine many others).

A March 2, 2015 news item on phys.org describes the first successful photography of light as both particle and wave,

Light behaves both as a particle and as a wave. Since the days of Einstein, scientists have been trying to directly observe both of these aspects of light at the same time. Now, scientists at EPFL [École polytechnique fédérale de Lausanne in Switzerland] have succeeded in capturing the first-ever snapshot of this dual behavior.

Quantum mechanics tells us that light can behave both as a particle and as a wave. However, there has never been an experiment able to capture both natures of light at the same time; the closest we have come is seeing either wave or particle, but always at different times. Taking a radically different experimental approach, EPFL scientists have now been able to take the first ever snapshot of light behaving both as a wave and as a particle. The breakthrough work is published in Nature Communications.

A March 2, 2015 EPFL press release (also on EurekAlert), which originated the news item, describes the science and the research,

When UV light hits a metal surface, it causes an emission of electrons. Albert Einstein explained this “photoelectric” effect by proposing that light – thought to only be a wave – is also a stream of particles. Even though a variety of experiments have successfully observed both the particle- and wave-like behaviors of light, they have never been able to observe both at the same time.

A research team led by Fabrizio Carbone at EPFL has now carried out an experiment with a clever twist: using electrons to image light. The researchers have captured, for the first time ever, a single snapshot of light behaving simultaneously as both a wave and a stream of particles.

The experiment is set up like this: A pulse of laser light is fired at a tiny metallic nanowire. The laser adds energy to the charged particles in the nanowire, causing them to vibrate. Light travels along this tiny wire in two possible directions, like cars on a highway. When waves traveling in opposite directions meet each other they form a new wave that looks like it is standing in place. Here, this standing wave becomes the source of light for the experiment, radiating around the nanowire.

This is where the experiment’s trick comes in: The scientists shot a stream of electrons close to the nanowire, using them to image the standing wave of light. As the electrons interacted with the confined light on the nanowire, they either sped up or slowed down. Using the ultrafast microscope to image the position where this change in speed occurred, Carbone’s team could now visualize the standing wave, which acts as a fingerprint of the wave-nature of light.

While this phenomenon shows the wave-like nature of light, it simultaneously demonstrated its particle aspect as well. As the electrons pass close to the standing wave of light, they “hit” the light’s particles, the photons. As mentioned above, this affects their speed, making them move faster or slower. This change in speed appears as an exchange of energy “packets” (quanta) between electrons and photons. The very occurrence of these energy packets shows that the light on the nanowire behaves as a particle.

“This experiment demonstrates that, for the first time ever, we can film quantum mechanics – and its paradoxical nature – directly,” says Fabrizio Carbone. In addition, the importance of this pioneering work can extend beyond fundamental science and to future technologies. As Carbone explains: “Being able to image and control quantum phenomena at the nanometer scale like this opens up a new route towards quantum computing.”

This work represents a collaboration between the Laboratory for Ultrafast Microscopy and Electron Scattering of EPFL, the Department of Physics of Trinity College (US) and the Physical and Life Sciences Directorate of the Lawrence Livermore National Laboratory. The imaging was carried out with EPFL’s ultrafast energy-filtered transmission electron microscope – one of only two in the world.

For anyone who prefers videos, the EPFL researchers have prepared a brief description (loaded with some amusing images) of their work,


Here’s a link to and a citation for the research paper,

Simultaneous observation of the quantization and the interference pattern of a plasmonic near-field by L. Piazza, T.T.A. Lummen, E. Quiñonez, Y. Murooka, B.W. Reed, B. Barwick & F. Carbone. Nature Communications 6, Article number: 6407 doi:10.1038/ncomms7407 Published 02 March 2015

This is an open access paper.

Solar cells and ‘tinkertoys’

A Nov. 3, 2014 news item on Nanowerk features a project researchers hope will improve photovoltaic efficiency and make solar cells competitive with other sources of energy,

 Researchers at Sandia National Laboratories have received a $1.2 million award from the U.S. Department of Energy’s SunShot Initiative to develop a technique that they believe will significantly improve the efficiencies of photovoltaic materials and help make solar electricity cost-competitive with other sources of energy.

The work builds on Sandia’s recent successes with metal-organic framework (MOF) materials by combining them with dye-sensitized solar cells (DSSC).

“A lot of people are working with DSSCs, but we think our expertise with MOFs gives us a tool that others don’t have,” said Sandia’s Erik Spoerke, a materials scientist with a long history of solar cell exploration at the labs.

A Nov. 3, 2014 Sandia National Laboratories news release, which originated the news item, describes the project and the technology in more detail,

Sandia’s project is funded through SunShot’s Next Generation Photovoltaic Technologies III program, which sponsors projects that apply promising basic materials science, already proven at the materials-properties level, to demonstrate photovoltaic conversion improvements that meet or exceed SunShot goals.

The SunShot Initiative is a collaborative national effort that aggressively drives innovation with the aim of making solar energy fully cost-competitive with traditional energy sources before the end of the decade. Through SunShot, the Energy Department supports efforts by private companies, universities and national laboratories to drive down the cost of solar electricity to 6 cents per kilowatt-hour.

DSSCs provide basis for future advancements in solar electricity production

Dye-sensitized solar cells, invented in the 1980s, use dyes designed to efficiently absorb light in the solar spectrum. The dye is mated with a semiconductor, typically titanium dioxide, that facilitates conversion of the energy in the optically excited dye into usable electrical current.

DSSCs are considered a significant advancement in photovoltaic technology since they separate the various processes of generating current from a solar cell. Michael Grätzel, a professor at the École Polytechnique Fédérale de Lausanne in Switzerland, was awarded the 2010 Millennium Technology Prize for inventing the first high-efficiency DSSC.

“If you don’t have everything in the DSSC dependent on everything else, it’s a lot easier to optimize your photovoltaic device in the most flexible and effective way,” explained Sandia senior scientist Mark Allendorf. DSSCs, for example, can capture more of the sun’s energy than silicon-based solar cells by using varied or multiple dyes and also can use different molecular systems, Allendorf said.

“It becomes almost modular in terms of the cell’s components, all of which contribute to making electricity out of sunlight more efficiently,” said Spoerke.

MOFs’ structure, versatility and porosity help overcome DSSC limitations

Though a source of optimism for the solar research community, DSSCs possess certain challenges that the Sandia research team thinks can be overcome by combining them with MOFs.

Allendorf said researchers hope to use the ordered structure and versatile chemistry of MOFs to help the dyes in DSSCs absorb more solar light, which he says is a fundamental limit on their efficiency.

“Our hypothesis is that we can put a thin layer of MOF on top of the titanium dioxide, thus enabling us to order the dye in exactly the way we want it,” Allendorf explained. That, he said, should avoid the efficiency-decreasing problem of dye aggregation, since the dye would then be locked into the MOF’s crystalline structure.

MOFs are highly ordered materials that also offer high levels of porosity, said Allendorf, a MOF expert and 29-year veteran of Sandia. He calls the materials “Tinkertoys for chemists” because of the ease with which new structures can be envisioned and assembled. [emphasis mine]

Allendorf said the unique porosity of MOFs will allow researchers to add a second dye, placed into the pores of the MOF, that will cover additional parts of the solar spectrum that weren’t covered with the initial dye. Finally, he and Spoerke are convinced that MOFs can help improve the overall electron charge and flow of the solar cell, which currently faces instability issues.

“Essentially, we believe MOFs can help to more effectively organize the electronic and nano-structure of the molecules in the solar cell,” said Spoerke. “This can go a long way toward improving the efficiency and stability of these assembled devices.”

In addition to the Sandia team, the project includes researchers at the University of Colorado-Boulder, particularly Steve George, an expert in a thin film technology known as atomic layer deposition.

The technique, said Spoerke, is important in that it offers a pathway for highly controlled materials chemistry with potentially low-cost manufacturing of the DSSC/MOF process.

“With the combination of MOFs, dye-sensitized solar cells and atomic layer deposition, we think we can figure out how to control all of the key cell interfaces and material elements in a way that’s never been done before,” said Spoerke. “That’s what makes this project exciting.”

Here’s a picture showing an early Tinkertoy set,

Original Tinkertoy, Giant Engineer #155. Questor Education Products Co., c.1950 [downloaded from http://en.wikipedia.org/wiki/Tinkertoy#mediaviewer/File:Tinkertoy_300126232168.JPG]

The Tinkertoy entry on Wikipedia has this,

The Tinkertoy Construction Set is a toy construction set for children. It was created in 1914, six years after Frank Hornby’s Meccano sets, by Charles H. Pajeau, Robert Pettit and Gordon Tinker in Evanston, Illinois. Pajeau, a stonemason, designed the toy after seeing children play with sticks and empty spools of thread. He and Pettit set out to market a toy that would allow and inspire children to use their imaginations. At first, this did not go well, but after a year or two, over a million had been sold.

Shrinky Dinks, Tinkertoys and Lego have all been mentioned here in conjunction with lab work. I’m always delighted to see scientists working with or using children’s toys as inspiration of one type or another.