Tag Archives: École Polytechnique Fédérale de Lausanne

Atomic force microscope with nanowire sensors

Measuring both the size and the direction of forces may become reality with a nanotechnology-enabled atomic force microscope designed by Swiss scientists, according to an Oct. 17, 2016 news item on phys.org,

A new type of atomic force microscope (AFM) uses nanowires as tiny sensors. Unlike standard AFM, the device with a nanowire sensor enables measurements of both the size and direction of forces. Physicists at the University of Basel and at the EPF Lausanne have described these results in the recent issue of Nature Nanotechnology.

A nanowire sensor measures size and direction of forces (Image: University of Basel, Department of Physics)

An Oct. 17, 2016 University of Basel press release (also on EurekAlert), which originated the news item, expands on the theme,

Nanowires are extremely tiny filamentary crystals which are built up, molecule by molecule, from various materials and which are now being actively studied by scientists all around the world because of their exceptional properties.

The wires normally have a diameter of around 100 nanometers, only about one-thousandth the thickness of a human hair. Because of this tiny dimension, they have a very large surface area in comparison to their volume. This, together with their small mass and flawless crystal lattice, makes them very attractive in a variety of nanometer-scale sensing applications, including as sensors of biological and chemical samples and as pressure or charge sensors.

Measurement of direction and size

The team of Argovia Professor Martino Poggio from the Swiss Nanoscience Institute (SNI) and the Department of Physics at the University of Basel has now demonstrated that nanowires can also be used as force sensors in atomic force microscopes. Based on their special mechanical properties, nanowires vibrate along two perpendicular axes at nearly the same frequency. When they are integrated into an AFM, the researchers can measure changes in the perpendicular vibrations caused by different forces. Essentially, they use the nanowires like tiny mechanical compasses that point out both the direction and size of the surrounding forces.
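The “compass” analogy can be made concrete with a toy model (my own illustrative sketch, not the authors’ analysis): a small force-gradient component F′ along one oscillation axis shifts that mode’s frequency by roughly Δf ≈ −f₀F′/(2k), so reading out the shifts of the two orthogonal modes recovers both components of the in-plane gradient, and hence its direction.

```python
import math

def mode_shift(f0_hz, k_n_per_m, grad_n_per_m):
    """Frequency shift of one vibration mode caused by the force-gradient
    component along its axis (standard small-shift approximation)."""
    return -f0_hz * grad_n_per_m / (2 * k_n_per_m)

def recover_gradient(df1_hz, df2_hz, f0_hz, k_n_per_m):
    """Invert the two measured mode shifts to get the in-plane
    force-gradient vector: magnitude (N/m) and direction (degrees)."""
    g1 = -2 * k_n_per_m * df1_hz / f0_hz
    g2 = -2 * k_n_per_m * df2_hz / f0_hz
    return math.hypot(g1, g2), math.degrees(math.atan2(g2, g1))

# Invented example values: a ~1 MHz nanowire with stiffness 1e-3 N/m
# in a tilted force field with gradients along the two mode axes.
f0, k = 1.0e6, 1.0e-3
df1 = mode_shift(f0, k, 3.0e-9)   # shift of mode 1: -1.5 Hz
df2 = mode_shift(f0, k, 4.0e-9)   # shift of mode 2: -2.0 Hz
mag, ang = recover_gradient(df1, df2, f0, k)
print(mag, ang)   # ~5e-9 N/m pointing ~53 degrees from the mode-1 axis
```

All numbers here are invented for illustration; the published experiment additionally has to track the two nearly degenerate modes simultaneously while scanning, which the article notes was the hardest part.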

Image of the two-dimensional force field

The scientists from Basel describe how they imaged a patterned sample surface using a nanowire sensor. Together with colleagues from the EPF Lausanne, who grew the nanowires, they mapped the two-dimensional force field above the sample surface using their nanowire “compass”. As a proof-of-principle, they also mapped out test force fields produced by tiny electrodes.

The most challenging technical aspect of the experiments was the realization of an apparatus that could simultaneously scan a nanowire above a surface and monitor its vibration along two perpendicular directions. With their study, the scientists have demonstrated a new type of AFM that could extend the technique’s numerous applications even further.

AFM: widely used today

The development of the AFM 30 years ago was honored with the conferment of the Kavli Prize [2016 Kavli Prize in Nanoscience] at the beginning of September this year. Professor Christoph Gerber of the SNI and the Department of Physics at the University of Basel is one of the awardees; he has contributed substantially to the wide use of AFM in different fields, including solid-state physics, materials science, biology, and medicine.

The various types of AFM are most often carried out using cantilevers made from crystalline Si as the mechanical sensor. “Moving to much smaller nanowire sensors may now allow for even further improvements on an already amazingly successful technique”, says Martino Poggio of his team’s approach.

I featured an interview article with Christoph Gerber and Gerd Binnig about their shared Kavli prize and about inventing the AFM in a Sept. 20, 2016 posting.

As for the latest innovation, here’s a link to and a citation for the paper,

Vectorial scanning force microscopy using a nanowire sensor by Nicola Rossi, Floris R. Braakman, Davide Cadeddu, Denis Vasyukov, Gözde Tütüncüoglu, Anna Fontcuberta i Morral, & Martino Poggio. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.189 Published online 17 October 2016

This paper is behind a paywall.

Tiny sensors produced by nanoscale 3D printing could lead to new generation of atomic force microscopes

A Sept. 26, 2016 news item on Nanowerk features research into producing smaller sensors for atomic force microscopes (AFMs) to achieve greater sensitivity,

Tiny sensors made through nanoscale 3D printing may be the basis for the next generation of atomic force microscopes. These nanosensors can enhance the microscopes’ sensitivity and detection speed by miniaturizing their detection component up to 100 times. The sensors were used in a real-world application for the first time at EPFL, and the results are published in Nature Communications.

A Sept. 26, 2016 École Polytechnique Fédérale de Lausanne (EPFL; Switzerland) press release by Laure-Anne Pessina, which originated the news item, expands on the theme (Note: A link has been removed),

Atomic force microscopy is based on powerful technology that works a little like a miniature turntable. A tiny cantilever with a nanometric tip passes over a sample and traces its relief, atom by atom. The tip’s infinitesimal up-and-down movements are picked up by a sensor so that the sample’s topography can be determined. (…)

One way to improve atomic force microscopes is to miniaturize the cantilever, as this will reduce inertia, increase sensitivity, and speed up detection. Researchers at EPFL’s Laboratory for Bio- and Nano-Instrumentation achieved this by equipping the cantilever with a 5-nanometer-thick sensor made with a nanoscale 3D-printing technique. “Using our method, the cantilever can be 100 times smaller,” says Georg Fantner, the lab’s director.

Electrons that jump over obstacles

The nanometric tip’s up-and-down movements can be measured through the deformation of the sensor placed at the fixed end of the cantilever. But because the researchers were dealing with minute movements – smaller than an atom – they had to pull a trick out of their hat.

Together with Michael Huth’s lab at Goethe Universität in Frankfurt am Main, they developed a sensor made up of highly conductive platinum nanoparticles surrounded by an insulating carbon matrix. Under normal conditions, the carbon insulates the electrons. But at the nano-scale, a quantum effect comes into play: some electrons jump through the insulating material and travel from one nanoparticle to the next. “It’s sort of like if people walking on a path came up against a wall and only the courageous few managed to climb over it,” said Fantner.

When the shape of the sensor changes, the nanoparticles move further away from each other and the electrons jump between them less frequently. Changes in the current thus reveal the deformation of the sensor and the composition of the sample.
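The jumping-electron picture lends itself to a back-of-the-envelope model (a sketch with invented numbers, not the paper’s calibration): the tunnelling current between neighbouring platinum nanoparticles falls off exponentially with the insulating gap, so even a tiny strain that widens the gaps produces an outsized, easily measured resistance change.

```python
import math

def tunnel_resistance(gap_nm, r0_ohm=1.0, decay_nm=0.1):
    """Inter-particle tunnelling resistance: grows exponentially
    as the insulating carbon gap widens."""
    return r0_ohm * math.exp(gap_nm / decay_nm)

def gauge_factor(gap_nm, strain, decay_nm=0.1):
    """Relative resistance change per unit strain, assuming the gap
    stretches uniformly with the film (toy assumption)."""
    r_rest = tunnel_resistance(gap_nm, decay_nm=decay_nm)
    r_strained = tunnel_resistance(gap_nm * (1 + strain), decay_nm=decay_nm)
    return (r_strained - r_rest) / (r_rest * strain)

# A 1 nm gap, a 0.1 nm tunnelling decay length, and a 0.1% strain:
gf = gauge_factor(gap_nm=1.0, strain=1e-3)
print(gf)   # ~10: several times more sensitive than a metal gauge (GF ~ 2)
```

The exponential is the point: an ordinary ohmic conductor of the same geometry would change its resistance only in proportion to the strain, whereas the tunnelling gaps amplify it.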

Tailor-made sensors

The researchers’ real feat was in finding a way to produce these sensors in nanoscale dimensions while carefully controlling their structure and, by extension, their properties. “In a vacuum, we distribute a precursor gas containing platinum and carbon atoms over a substrate. Then we apply an electron beam. The platinum atoms gather and form nanoparticles, and the carbon atoms naturally form a matrix around them,” said Maja Dukic, the article’s lead author. “By repeating this process, we can build sensors with any thickness and shape we want. We have proven that we could build these sensors and that they work on existing infrastructures. Our technique can now be used for broader applications, ranging from biosensors and ABS sensors for cars to touch sensors on flexible membranes in prosthetics and artificial skin.”

Here’s a link to and a citation for the paper,

Direct-write nanoscale printing of nanogranular tunnelling strain sensors for sub-micrometre cantilevers by Maja Dukic, Marcel Winhold, Christian H. Schwalb, Jonathan D. Adams, Vladimir Stavrov, Michael Huth, & Georg E. Fantner. Nature Communications 7, Article number: 12487 doi:10.1038/ncomms12487 Published 26 September 2016

This is an open access paper.

Windows in Swiss trains are about to combine mobile reception and thermal insulation

A Sept. 2, 2016 news item on Nanowerk announces a whole new kind of train window,

EPFL [École polytechnique fédérale de Lausanne; Switzerland] researchers have developed a type of glass that offers excellent energy efficiency and lets mobile telephone signals through. And by teaming up with Swiss manufacturers, they have produced innovative windows. Railway company BLS is about to install them on some of its trains in order to improve energy efficiency.

An Aug. 26, 2016 EPFL press release by Anne-Muriel Brouet, which originated the news item, provides more detail,

Train travel may be fast, but mobile connectivity onboard often lags behind. This is because the modern train car is a metal box that blocks out microwaves – in physics, this is called a Faraday cage. Even the windows contain an ultra-thin metal coating to improve thermal insulation. But EPFL researchers, working with manufacturing partners, have developed a new type of window that guarantees a comfortable temperature for passengers while at the same time letting mobile phone signals through.

In the rail industry, energy use is critical: around one third of the energy consumed by trains goes into providing heating and air conditioning in the train cars. And around 3% of this escapes through the windows. Double-glazed windows with an ultra-thin metal coating increase energy efficiency by a factor of four compared with untreated windows.

But the problem is that the metal sharply weakens the telecommunication signals. The solution that mobile phone operators and railway companies have used until now consists of placing signal boosters – or repeaters – in the trains. But they are expensive to install and maintain and have to be replaced regularly to keep pace with rapidly changing technologies. And each repeater consumes electricity.

A laser-scribed coating

Andreas Schüler, from EPFL’s Nanotechnology for Solar Energy Conversion Group, had another idea: “A metal coating that reflects heat radiation (whose wavelengths are micrometric) but lets through both visible light (nanometric wavelengths) and the electromagnetic waves of mobile phones (microwaves, with centimetric wavelengths).” But how is this done? “We breach the Faraday cage by modifying the metal coating with a special laser treatment. The windows then let the signals through,” said Schüler, a specialist in the optical and electronic properties of ultra-thin coatings.

To do this, a special structure is scribed into the metal coating with the aid of a high-precision laser. No more than 2.5% of the surface area of the metal coating is ablated by laser scribing. The resulting pattern is nearly invisible to the naked eye and does not affect the window’s insulating properties.
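The size argument in Schüler’s quote is easy to check with λ = c/f (the representative frequencies below are my own choices): the scribed pattern has to sit far above the wavelength of the heat radiation it must keep reflecting, yet well below the wavelength of the phone signals it must let through.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

# Representative frequencies for the three wave families involved:
visible = wavelength_m(600e12)   # visible light: ~0.5 micrometre
thermal = wavelength_m(30e12)    # room-temperature heat radiation: ~10 micrometres
mobile = wavelength_m(1.8e9)     # 1800 MHz mobile band: ~17 centimetres

print(visible, thermal, mobile)
```

A scribed pattern with features on the order of a millimetre (an assumed scale for illustration; the release only says it is nearly invisible) is therefore enormous compared with the ~10 µm thermal infrared, leaving the insulation intact, yet small compared with the ~17 cm microwaves, letting them pass.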

A manufacturing partnership pays off

Initial laboratory tests were extremely convincing. Several manufacturing partners were brought into the team in order to apply the method on a large scale. Thanks to the skills of glassmaker AGC Verres Industriels and the expertise of Class4Laser, prototype glass samples were produced and tested. “Measurements taken by experts from the University of Applied Sciences and Arts of Southern Switzerland (SUPSI) have demonstrated that this works,” said Schüler.

Energy savings for BLS

But the innovative glass needed to prove its mettle under real-life conditions. BLS was enthusiastic about testing the new windows as part of ongoing studies aimed at improving the energy efficiency of its trains. The first full-size windows were produced in the AGC Verres Industriels workshop and installed throughout a NINA-type self-propelled regional train.

The field tests met the partners’ expectations. Swisscom and SUPSI tested the efficacy of the new windows, both in BLS’s workshops and on the Bern-Thun train line. “Mobile reception is just as good in the train through laser-treated insulating glass as it is through ordinary glass,” said Schüler.

As a result, BLS has decided to install the new windows in most of its 36 NINA regional trains, replacing the old, non-insulating windows. Installation will begin in September 2016 as part of the company’s train modernization program. “Our commitment will help bring to market an innovative product designed to improve the energy efficiency of trains without compromising mobile reception for passengers,” said Quentin Sauvagnat, NINA fleet manager at BLS. Thanks to this product, those expensive signal repeaters will no longer be needed.

Are frequency-selective buildings next?

This proven and developed technology could be applied to buildings next. This is because, according to Schüler, “some glass buildings also act like Faraday cages. And as the internet of things continues to grow, there is a real interest in improving the properties of building materials that allow mobile signals through. More broadly, by making materials more frequency-selective, we could, for example, imagine a building that lets electromagnetic waves through but blocks Wi-Fi waves, thus enhancing corporate security.”

I have a friend who may find this train window innovation quite handy. As for frequency selective buildings, I imagine that would open up many possibilities for hackers.

Osmotic power: electricity generated with water, salt and a 3-atoms-thick membrane


EPFL researchers have developed a system that generates electricity from osmosis with unparalleled efficiency. Their work, featured in “Nature”, uses seawater, fresh water, and a new type of membrane just three atoms thick.

A July 13, 2016 news item on Nanowerk highlights  research on osmotic power at École polytechnique fédérale de Lausanne (EPFL; Switzerland),

Proponents of clean energy will soon have a new source to add to their existing array of solar, wind, and hydropower: osmotic power. Or more specifically, energy generated by a natural phenomenon occurring when fresh water comes into contact with seawater through a membrane.

Researchers at EPFL’s Laboratory of Nanoscale Biology have developed an osmotic power generation system that delivers never-before-seen yields. Their innovation lies in a membrane, just three atoms thick, used to separate the two fluids. …

A July 14, 2016 EPFL press release (also on EurekAlert but published July 13, 2016), which originated the news item, describes the research,

The concept is fairly simple. A semipermeable membrane separates two fluids with different salt concentrations. Salt ions travel through the membrane until the salt concentrations in the two fluids reach equilibrium. That phenomenon is precisely osmosis.

If the system is used with seawater and fresh water, salt ions in the seawater pass through the membrane into the fresh water until both fluids have the same salt concentration. And since an ion is simply an atom with an electrical charge, the movement of the salt ions can be harnessed to generate electricity.

A 3-atom-thick selective membrane that does the job

EPFL’s system consists of two liquid-filled compartments separated by a thin membrane made of molybdenum disulfide. The membrane has a tiny hole, or nanopore, through which seawater ions pass into the fresh water until the two fluids’ salt concentrations are equal. As the ions pass through the nanopore, their electrons are transferred to an electrode – which is what is used to generate an electric current.

Thanks to its properties, the membrane allows positively charged ions to pass through while pushing away most of the negatively charged ones. That creates a voltage between the two liquids as one builds up a positive charge and the other a negative charge. This voltage is what causes the current generated by the transfer of ions to flow.
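For an ideally cation-selective pore, the voltage described here is the textbook Nernst potential, V = (k_B·T/e)·ln(c_high/c_low). A quick estimate with representative concentrations (my own round figures, not values from the paper):

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def nernst_potential_v(c_high, c_low, temp_k=298.0):
    """Open-circuit voltage across a perfectly cation-selective membrane
    separating two concentrations of a monovalent salt."""
    return (K_B * temp_k / E_CHARGE) * math.log(c_high / c_low)

# Seawater (~0.5 M NaCl) against river water (~0.01 M NaCl):
v = nernst_potential_v(0.5, 0.01)
print(v)   # ~0.1 V
```

Roughly 100 mV per membrane: this sets the scale of the voltage between the two liquids, and it is why the pore’s charge selectivity (letting cations through, repelling anions) is essential to the device.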

“We had to first fabricate and then investigate the optimal size of the nanopore. If it’s too big, negative ions can pass through and the resulting voltage would be too low. If it’s too small, not enough ions can pass through and the current would be too weak,” said Jiandong Feng, lead author of the research.

What sets EPFL’s system apart is its membrane. In these types of systems, the current increases as the membrane gets thinner, and EPFL’s membrane is just a few atoms thick. The material it is made of – molybdenum disulfide – is ideal for generating an osmotic current. “This is the first time a two-dimensional material has been used for this type of application,” said Aleksandra Radenovic, head of the Laboratory of Nanoscale Biology.

Powering 50,000 energy-saving light bulbs with a 1 m² membrane

The potential of the new system is huge. According to their calculations, a 1 m² membrane with 30% of its surface covered by nanopores should be able to produce 1 MW of electricity – or enough to power 50,000 standard energy-saving light bulbs. And since molybdenum disulfide (MoS2) is easily found in nature or can be grown by chemical vapor deposition, the system could feasibly be ramped up for large-scale power generation. The major challenge in scaling up this process is finding out how to make relatively uniform pores.
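The quoted figures can be sanity-checked in one line (the per-bulb wattage is implied by the release’s numbers rather than stated):

```python
claimed_power_w = 1_000_000   # 1 MW from a 1 m^2 membrane, as quoted
bulbs = 50_000                # bulbs the release says this would power

watts_per_bulb = claimed_power_w / bulbs
print(watts_per_bulb)   # 20.0 W, a typical energy-saving bulb's draw
```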

Until now, researchers have worked on a membrane with a single nanopore, in order to understand precisely what was going on. “From an engineering perspective, a single-nanopore system is ideal to further our fundamental understanding of membrane-based processes and provide useful information for industry-level commercialization”, said Jiandong Feng.

The researchers were able to run a nanotransistor from the current generated by a single nanopore and thus demonstrated a self-powered nanosystem. Low-power single-layer MoS2 transistors were fabricated in collaboration with Andras Kis’ team at EPFL, while molecular dynamics simulations were performed by collaborators at the University of Illinois at Urbana–Champaign.

Harnessing the potential of estuaries

EPFL’s research is part of a growing trend. For the past several years, scientists around the world have been developing systems that leverage osmotic power to create electricity. Pilot projects have sprung up in places such as Norway, the Netherlands, Japan, and the United States to generate energy at estuaries, where rivers flow into the sea. For now, the membranes used in most systems are organic and fragile, and deliver low yields. Some systems use the movement of water, rather than ions, to power turbines that in turn produce electricity.

Once the systems become more robust, osmotic power could play a major role in the generation of renewable energy. While solar panels require adequate sunlight and wind turbines adequate wind, osmotic energy can be produced just about any time of day or night – provided there’s an estuary nearby.

Here’s a link to and a citation for the paper,

Single-layer MoS2 nanopores as nanopower generators by Jiandong Feng, Michael Graf, Ke Liu, Dmitry Ovchinnikov, Dumitru Dumcenco, Mohammad Heiranian, Vishal Nandigana, Narayana R. Aluru, Andras Kis, & Aleksandra Radenovic. Nature (2016) doi:10.1038/nature18593 Published online 13 July 2016

This paper is behind a paywall.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to perform its calculations; this software architecture mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory in Buffalo, New York, published a numerical model based on these concepts, thereby creating the very first artificial neural network, the perceptron. Once implemented on a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colours, and the following layer recognises the general form of the cat. This design can support calculations across many layers – up to thousands in some networks – and it was this depth of architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.
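Marchand-Maillet’s description (weighted sum, threshold, pass the result on to the next layer) translates almost line for line into code. This is a bare-bones sketch with hand-picked weights, not a trained network:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum followed by a smooth threshold (sigmoid):
    the unit only 'fires' strongly once the sum clears the bias."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """One layer: every neuron sees the full output of the previous layer."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def forward(x, network):
    """Feed an input through successive layers: each layer's outputs
    become the next layer's inputs, exactly as described in the text."""
    for weight_rows, biases in network:
        x = layer(x, weight_rows, biases)
    return x

# Toy two-layer network whose output is high only when both inputs are large
net = [
    ([[4.0, 4.0], [-4.0, -4.0]], [-6.0, 2.0]),  # hidden layer: 2 neurons
    ([[5.0, -5.0]], [0.0]),                     # output layer: 1 neuron
]
print(forward([1.0, 1.0], net))   # output close to 1
print(forward([0.0, 0.0], net))   # output close to 0
```

In a real deep network the weights are not hand-picked but learned from the “copious examples” mentioned above, and the layers number in the dozens or more rather than two.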

Video games to the rescue

For decades, the limits of computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video-game sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful property for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
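Stripped of the learned gates that make an LSTM practical, the ‘boat’/‘float’ example reduces to carrying a state through the processing loop, so that the opening sound is still available when ‘oat’ arrives. A deliberately naive sketch of that recurrence (not an actual LSTM):

```python
def recurrent_scan(sequence, update):
    """Run a sequence through a loop in which every step sees both the
    current input and a state carried over from earlier steps."""
    state = ""
    outputs = []
    for token in sequence:
        state, out = update(state, token)
        outputs.append(out)
    return outputs

def remember_all(state, token):
    """Toy update rule: the state accumulates everything heard so far,
    so the output depends on the whole history, not just the last token."""
    state = state + token
    return state, state

# The trailing sound 'oat' is identical; only the carried state disambiguates.
print(recurrent_scan(["b", "oat"], remember_all)[-1])    # boat
print(recurrent_scan(["fl", "oat"], remember_all)[-1])   # float
```

A real LSTM replaces the naive “remember everything” rule with learned gates that decide what to keep, forget and emit, which is what lets it cope with long sequences.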

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Cleaning up nuclear waste gases with nanotechnology-enabled materials

Swiss and US scientists have developed a nanoporous crystal that could be used to clean up nuclear waste gases according to a June 13, 2016 news item on Nanowerk (Note: A link has been removed),

An international team of scientists at EPFL [École polytechnique fédérale de Lausanne in Switzerland] and in the US has discovered a material that can clear out radioactive waste from nuclear plants more efficiently, cheaply, and safely than current methods.

Nuclear energy is one of the cheapest alternatives to carbon-based fossil fuels. But nuclear-fuel reprocessing plants generate waste gas that is currently too expensive and dangerous to deal with. Scanning hundreds of thousands of materials, scientists led by EPFL and their US colleagues have now discovered a material that can absorb nuclear waste gases much more efficiently, cheaply and safely. The work is published in Nature Communications (“Metal–organic framework with optimally selective xenon adsorption and separation”).

A June 14, 2016 EPFL press release (also on EurekAlert), which originated the news item, explains further,

Nuclear-fuel reprocessing plants generate volatile radionuclides such as xenon and krypton, which escape in the so-called “off-gas” of these facilities – the gases emitted as byproducts of the chemical process. Current ways of capturing and clearing out these gases involve distillation at very low temperatures, which is expensive in both terms of energy and capital costs, and poses a risk of explosion.

Scientists led by Berend Smit’s lab at EPFL (Sion) and colleagues in the US have now identified a material that can be used as an efficient, cheaper, and safer alternative to separate xenon and krypton – and at room temperature. The material, abbreviated as SBMOF-1, is a nanoporous crystal and belongs to a class of materials that are currently used to clear out CO2 emissions and other dangerous pollutants. These materials are also very versatile, and scientists can tweak them to self-assemble into ordered, pre-determined crystal structures. In this way, they can synthesize millions of tailor-made materials that can be optimized for gas storage and separation, catalysis, chemical sensing and optics.

The scientists carried out high-throughput screening of large material databases of over 125,000 candidates. To do this, they used molecular simulations to find structures that can separate xenon and krypton, and under conditions that match those involved in reprocessing nuclear waste.

Because xenon has a much shorter half-life than krypton – a month versus a decade – the scientists had to find a material that would be selective for both but would capture them separately. As xenon is used in commercial lighting, propulsion, imaging, anesthesia and insulation, it can also be sold back into the chemical market to offset costs.

The scientists identified and confirmed that SBMOF-1 shows remarkable xenon capturing capacity and xenon/krypton selectivity under nuclear-plant conditions and at room temperature.

The US partners have also made an announcement with this June 13, 2016 Pacific Northwest National Laboratory (PNNL) news release (also on EurekAlert), Note: It is a little repetitive but there’s good additional information,

Researchers are investigating a new material that might help in nuclear fuel recycling and waste reduction by capturing certain gases released during reprocessing. Conventional technologies to remove these radioactive gases operate at extremely low, energy-intensive temperatures. By working at ambient temperature, the new material has the potential to save energy, make reprocessing cleaner and less expensive. The reclaimed materials can also be reused commercially.

Appearing in Nature Communications, the work is a collaboration between experimentalists and computer modelers exploring the characteristics of materials known as metal-organic frameworks.

“This is a great example of computer-inspired material discovery,” said materials scientist Praveen Thallapally of the Department of Energy’s Pacific Northwest National Laboratory. “Usually the experimental results are more realistic than computational ones. This time, the computer modeling showed us something the experiments weren’t telling us.”

Waste avoidance

Recycling nuclear fuel can reuse uranium and plutonium — the majority of the used fuel — that would otherwise be destined for waste. Researchers are exploring technologies that enable safe, efficient, and reliable recycling of nuclear fuel for use in the future.

A multi-institutional, international collaboration is studying materials to replace costly, inefficient recycling steps. One important step is collecting radioactive gases xenon and krypton, which arise during reprocessing. To capture xenon and krypton, conventional technologies use cryogenic methods in which entire gas streams are brought to a temperature far below where water freezes — such methods are energy intensive and expensive.

Thallapally, working with Maciej Haranczyk and Berend Smit of Lawrence Berkeley National Laboratory [LBNL] and others, has been studying materials called metal-organic frameworks, also known as MOFs, that could potentially trap xenon and krypton without having to use cryogenics.

These materials have tiny pores inside, so small that often only a single molecule can fit inside each pore. When one gas species has a higher affinity for the pore walls than other gas species, metal-organic frameworks can be used to separate gaseous mixtures by selectively adsorbing the higher-affinity species.

To find the best MOF for xenon and krypton separation, computational chemists led by Haranczyk and Smit screened 125,000 possible MOFs for their ability to trap the gases. Although these gases can come in radioactive varieties, they are part of a group of chemically inert elements called “noble gases.” The team used computing resources at NERSC, the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at LBNL.

“Identifying the optimal material for a given process, out of thousands of possible structures, is a challenge due to the sheer number of materials. Given that the characterization of each material can take up to a few hours of simulations, the entire screening process may fill a supercomputer for weeks,” said Haranczyk. “Instead, we developed an approach to assess the performance of materials based on their easily computable characteristics. In this case, seven different characteristics were necessary for predicting how the materials behaved, and our team’s grad student Cory Simon’s application of machine learning techniques greatly sped up the material discovery process by eliminating those that didn’t meet the criteria.”
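The screening strategy described above, cheap computed descriptors used to eliminate poor candidates before expensive simulation, can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the descriptor values, the surrogate scoring function, and the candidate names are all invented for the example; the real study used seven characteristics and trained machine-learning models.

```python
# Hypothetical sketch of descriptor-based MOF screening (not the authors' code).
# Each candidate is described by a cheap geometric feature; a toy surrogate
# score stands in for the trained machine-learning selectivity predictor.

XE_DIAMETER = 4.1  # approximate kinetic diameter of xenon, in angstroms

def surrogate_score(mof):
    """Toy stand-in for a trained model: favor pores barely larger than Xe."""
    gap = mof["pore_diameter"] - XE_DIAMETER
    if gap <= 0:              # xenon cannot enter the pore at all
        return 0.0
    return 1.0 / (1.0 + gap)  # tighter fit -> higher predicted selectivity

def screen(mofs, threshold=0.5):
    """Discard weak candidates cheaply, then rank the survivors."""
    survivors = [m for m in mofs if surrogate_score(m) >= threshold]
    return sorted(survivors, key=surrogate_score, reverse=True)

candidates = [
    {"name": "MOF-A", "pore_diameter": 9.0},         # pore far too wide
    {"name": "SBMOF-1-like", "pore_diameter": 4.5},  # snug fit around Xe
    {"name": "MOF-C", "pore_diameter": 3.5},         # pore too narrow
]
for m in screen(candidates):
    print(m["name"], round(surrogate_score(m), 2))
# → SBMOF-1-like 0.71
```

The point of the two-stage design is that only candidates surviving the cheap filter need hours of detailed simulation, which is what made screening 125,000 structures tractable.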

The team’s models identified the MOF that trapped xenon most selectively and had a pore size close to the size of a xenon atom — SBMOF-1, which they then tested in the lab at PNNL.

After optimizing the preparation of SBMOF-1, Thallapally and his team at PNNL tested the material by running a mixture of gases through it — including a non-radioactive form of xenon and krypton — and measuring what came out the other end. Oxygen, helium, nitrogen, krypton, and carbon dioxide all passed through ahead of xenon. This indicated that xenon becomes trapped within SBMOF-1’s pores until the gas saturates the material.

Other tests also showed that in the absence of xenon, SBMOF-1 captures krypton. During actual separations, then, operators would pass the gas streams through SBMOF-1 twice to capture both gases.

The team also tested SBMOF-1’s ability to hang onto xenon in conditions of high humidity. Humidity interferes with cryogenics, and gases must be dehydrated before putting them through the ultra-cold method, another time-consuming expense. SBMOF-1, however, performed quite admirably, retaining in high humidity more than 85 percent of the xenon capacity it showed in dry conditions.

The final step in collecting xenon or krypton gas would be to put the MOF material under a vacuum, which sucks the gas out of the molecular cages for safe storage. A last laboratory test examined how stable the material was by repeatedly filling it up with xenon gas and then vacuuming out the xenon. After 10 cycles of this, SBMOF-1 collected just as much xenon as the first cycle, indicating a high degree of stability for long-term use.

Thallapally attributes this stability to the manner in which SBMOF-1 interacts with xenon. Rather than chemical reactions between the molecular cages and the gases, the relationship is purely physical. The material can last a lot longer without constantly going through chemical reactions, he said.

A model finding

Although the researchers showed that SBMOF-1 is a good candidate for nuclear fuel reprocessing, getting these results wasn’t smooth sailing. In the lab, the researchers had followed a previously worked out protocol from Stony Brook University to prepare SBMOF-1. Part of that protocol requires them to “activate” SBMOF-1 by heating it up to 300 degrees Celsius, three times the temperature of boiling water.

Activation cleans out material left in the pores from MOF synthesis. Laboratory tests of the activated SBMOF-1, however, showed the material didn’t behave as well as it should, based on the computer modeling results.

The researchers at PNNL repeated the lab experiments. This time, however, they activated SBMOF-1 at a lower temperature, 100 degrees Celsius, or the actual temperature of boiling water. Subjecting the material to the same lab tests, the researchers found SBMOF-1 behaving as expected, and better than at the higher activation temperature.

But why? To figure out where the discrepancy came from, the researchers modeled what happened to SBMOF-1 at 300 degrees Celsius. Unexpectedly, the pores squeezed in on themselves.

“When we heated the crystal that high, atoms within the pore tilted and partially blocked the pores,” said Thallapally. “The xenon doesn’t fit.”

Armed with these new computational and experimental insights, the researchers can explore SBMOF-1 and other MOFs further for nuclear fuel recycling. These MOFs might also be able to capture other noble gases such as radon, a gas known to pool in some basements.

Researchers hailed from several other institutions as well as those listed earlier, including University of California, Berkeley, Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, Brookhaven National Laboratory, and IMDEA Materials Institute in Spain. This work was supported by the [US] Department of Energy Offices of Nuclear Energy and Science.

Here’s an image the researchers have provided to illustrate their work,

Caption: The crystal structure of SBMOF-1 (green = Ca, yellow = S, red = O, gray = C, white = H). The light blue surface is a visualization of the one-dimensional channel that SBMOF-1 creates for the gas molecules to move through. The darker blue surface illustrates where a Xe atom sits in the pores of SBMOF-1 when it adsorbs. Credit: Berend Smit/EPFL/University of California Berkeley

Here’s a link to and a citation for the paper,

Metal–organic framework with optimally selective xenon adsorption and separation by Debasis Banerjee, Cory M. Simon, Anna M. Plonka, Radha K. Motkuri, Jian Liu, Xianyin Chen, Berend Smit, John B. Parise, Maciej Haranczyk, & Praveen K. Thallapally. Nature Communications 7, Article number: ncomms11831  doi:10.1038/ncomms11831 Published 13 June 2016

This paper is open access.

Final comment: this is the second time in the last month I’ve stumbled across more positive approaches to nuclear energy. The first time was a talk (Why Nuclear Power is Necessary) held in Vancouver, Canada in May 2016 (details here). I’m not trying to suggest anything unduly sinister, but it is interesting, since for most of my adult life nuclear power has been viewed with fear and suspicion.

The Weyl fermion and new electronics

This story concerns a quasiparticle (Weyl fermion) which is a different kind of particle than the nanoparticles usually mentioned here. A March 17, 2016 news item on Nanowerk profiles research that suggests the Weyl fermion may find applications in the field of electronics,

The Weyl fermion, just discovered in the past year, moves through materials practically without resistance. Now researchers are showing how it could be put to use in electronic components.

Today electronic devices consume a lot of energy and require elaborate cooling mechanisms. One approach for the development of future energy-saving electronics is to use special particles that exist only in the interior of materials but can move there practically undisturbed. Electronic components based on these so-called Weyl fermions would consume considerably less energy than present-day chips. That’s because up to now devices have relied on the movement of electrons, which is inhibited by resistance and thus wastes energy.

Evidence for Weyl fermions was discovered only in the past year, by several research teams including scientists from the Paul Scherrer Institute (PSI). Now PSI researchers have shown — within the framework of an international collaboration with two research institutions in China and the two Swiss technical universities, ETH Zurich and EPF Lausanne — that there are materials in which only one kind of Weyl fermion exists. That could prove decisive for applications in electronic components, because it makes it possible to guide the particles’ flow in the material.

A March 17, 2016 Paul Scherrer Institute (PSI) press release by Paul Piwnicki, which originated the news item, describes the work in more detail (Note: There is some redundancy),

In the past year, researchers of the Paul Scherrer Institute PSI were among those who found experimental evidence for a particle whose existence had been predicted in the 1920s — the Weyl fermion. One of the particle’s peculiarities is that it can only exist in the interior of materials. Now the PSI researchers, together with colleagues at two Chinese research institutions as well as at ETH Zurich and EPF Lausanne, have made a subsequent discovery that opens the possibility of using the movement of Weyl fermions in future electronic devices. …

Today’s computer chips use the flow of electrons that move through the device’s conductive channels. Because, along the way, electrons are always colliding with each other or with other particles in the material, a relatively high amount of energy is needed to maintain the flow. That means not only that the device wastes a lot of energy, but also that it heats itself up enough to necessitate an elaborate cooling mechanism, which in turn requires additional space and energy.

In contrast, Weyl fermions move virtually undisturbed through the material and thus encounter practically no resistance. “You can compare it to driving on a highway where all of the cars are moving freely in the same direction,” explains Ming Shi, a senior scientist at the PSI. “The electron flow in present-day chips is more comparable to driving in congested city traffic, with cars coming from all directions and getting in each other’s way.”

Important for electronics: only one kind of particle

While in the materials examined last year there were always several kinds of Weyl fermions, all moving in different ways, the PSI researchers and their colleagues have now produced a material in which only one kind of Weyl fermion occurs. “This is important for applications in electronics, because here you must be able to precisely steer the particle flow,” explains Nan Xu, a postdoctoral researcher at the PSI.

Weyl fermions are named for the German mathematician Hermann Weyl, who predicted their existence in 1929. These particles have some striking characteristics, such as having no mass and moving at the speed of light. Weyl fermions were observed as quasiparticles in so-called Weyl semimetals. In contrast to “real” particles, quasiparticles can only exist inside materials. Weyl fermions are generated through the collective motion of electrons in suitable materials. In general, quasiparticles can be compared to waves on the surface of a body of water — without the water, the waves would not exist. At the same time, their movement is independent of the water’s motion.

The material that the researchers have now investigated is a compound of the chemical elements tantalum and phosphorus, with the chemical formula TaP. The crucial experiments were carried out with X-rays at the Swiss Light Source (SLS) of the Paul Scherrer Institute.

Studying novel materials with properties that could make them useful in future electronic devices is a central research area of the Paul Scherrer Institute. In the process, the researchers pursue a variety of approaches and use many different experimental methods.

Here’s a link to and a citation for the paper,

Observation of Weyl nodes and Fermi arcs in tantalum phosphide by N. Xu, H. M. Weng, B. Q. Lv, C. E. Matt, J. Park, F. Bisti, V. N. Strocov, D. Gawryluk, E. Pomjakushina, K. Conder, N. C. Plumb, M. Radovic, G. Autès, O. V. Yazyev, Z. Fang, X. Dai, T. Qian, J. Mesot, H. Ding & M. Shi. Nature Communications 7, Article number: 11006  doi:10.1038/ncomms11006 Published 17 March 2016

This paper is open access.

Identifying performance problems in nanoresonators

Use of nanoelectromechanical systems (NEMS) can now be maximised due to a technique developed by researchers at the Commissariat à l’Energie Atomique (CEA) and the University of Grenoble-Alpes (France). From a March 7, 2016 news item on ScienceDaily,

A joint CEA / University of Grenoble-Alpes research team, together with their international partners, have developed a diagnostic technique capable of identifying performance problems in nanoresonators, a type of nanodetector used in research and industry. These nanoelectromechanical systems, or NEMS, have never been used to their maximum capabilities. The detection limits observed in practice have always fallen well short of the theoretical limit and, until now, this difference has remained unexplained. Using a totally new approach, the researchers have now succeeded in evaluating and explaining this phenomenon. Their results, described in the February 29 [2016] issue of Nature Nanotechnology, should now make it possible to find ways of overcoming this performance shortfall.

A Feb. 29, 2016 CEA press release, which originated the news item, provides more detail about NEMS and about the new technique,

NEMS have many applications, including the measurement of mass or force. Like a tiny violin string, a nanoresonator vibrates at a precise resonant frequency. This frequency changes if gas molecules or biological particles settle on the nanoresonator surface. This change in frequency can then be used to detect or identify the substance, enabling a medical diagnosis, for example. The extremely small dimensions of these devices (less than one millionth of a meter) make the detectors highly sensitive.
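The "tiny violin string" principle described above can be made concrete with the textbook spring-mass relation, f = (1/2π)·√(k/m): mass settling on the resonator lowers its resonant frequency. The numbers below (stiffness, effective mass, adsorbed mass) are illustrative assumptions, not values from the paper.

```python
import math

# Minimal sketch of resonant mass sensing (illustrative numbers, not from the
# paper): a nanoresonator behaves like a spring-mass system, so adsorbed mass
# lowers the resonant frequency by a predictable amount.

def resonant_frequency(k, m):
    """Resonant frequency in Hz for stiffness k (N/m) and effective mass m (kg)."""
    return math.sqrt(k / m) / (2 * math.pi)

k = 10.0    # assumed stiffness, N/m
m = 1e-18   # assumed effective mass: 1 femtogram (typical NEMS scale)

f0 = resonant_frequency(k, m)       # bare resonator, ~500 MHz here
dm = 1e-21                          # an adsorbed particle of ~1 attogram
f1 = resonant_frequency(k, m + dm)  # loaded resonator

# For small dm, the shift follows df/f ≈ -dm/(2m)
print(f"relative shift: {(f1 - f0) / f0:.2e}, approx: {-dm / (2 * m):.2e}")
# → relative shift: -5.00e-04, approx: -5.00e-04
```

Detecting such tiny relative shifts is exactly why the random resonant-frequency fluctuations discussed below matter: they set a floor on how small a frequency change, and hence how small a mass, can be resolved.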

However, this resolution is constrained by a detection limit. Background noise is present in addition to the wanted measurement signal. Researchers have always considered this background noise to be an intrinsic characteristic of these systems (see Figure 2 [not reproduced here]). Despite the noise levels being significantly greater than predicted by theory, the impossibility of understanding the underlying phenomena has, until now, led the research community to ignore them.

The CEA-Leti research team and their partners reviewed all the frequency stability measurements in the literature, and identified a difference of several orders of magnitude between the accepted theoretical limits and experimental measurements.

In addition to evaluating this shortfall, the researchers also developed a diagnostic technique that could be applied to each individual nanoresonator, using their own high-purity monocrystalline silicon resonators to investigate the problem.

The resonant frequency of a nanoresonator is determined by the geometry of the resonator and the type of material used in its manufacture. It is therefore theoretically fixed. By forcing the resonator to vibrate at defined frequencies close to the resonant frequency, the CEA-Leti researchers have been able to demonstrate a secondary effect that interferes with the resolution of the system and its detection limit in addition to the background noise. This effect causes slight variations in the resonant frequency. These fluctuations in the resonant frequency result from the extreme sensitivity of these systems. While capable of detecting tiny changes in mass and force, they are also very sensitive to minute variations in temperature and the movements of molecules on their surface. At the nano scale, these parameters cannot be ignored as they impose a significant limit on the performance of nanoresonators. For example, a tiny change in temperature can change the parameters of the device material, and hence its frequency. These variations can be rapid and random.

The experimental technique developed by the team makes it possible to evaluate the loss of resolution and to determine whether it is caused by the intrinsic limits of the system or by a secondary fluctuation that can therefore be corrected. A patent has been applied for covering this technique. The research team has also shown that none of the theoretical hypotheses so far advanced to explain these fluctuations in the resonant frequency can currently explain the observed level of variation.

The research team will therefore continue experimental work to explore the physical origin of these fluctuations, with the aim of achieving a significant improvement in the performance of nanoresonators.

The Swiss Federal Institute of Technology in Lausanne, the Indian Institute of Science in Bangalore, and the California Institute of Technology (USA) have also participated in this study. The authors have received funding from the Leti Carnot Institute (NEMS-MS project) and the European Union (ERC Consolidator Grant – Enlightened project).

Here’s a link to and a citation for the paper,

Frequency fluctuations in silicon nanoresonators by Marc Sansa, Eric Sage, Elizabeth C. Bullard, Marc Gély, Thomas Alava, Eric Colinet, Akshay K. Naik, Luis Guillermo Villanueva, Laurent Duraffourg, Michael L. Roukes, Guillaume Jourdan & Sébastien Hentz. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.19 Published online 29 February 2016

This paper is behind a paywall.

Feeling with a bionic finger

From what I understand one of the most difficult aspects of an amputation is the loss of touch, so, bravo to the engineers. From a March 8, 2016 news item on ScienceDaily,

An amputee was able to feel smoothness and roughness in real-time with an artificial fingertip that was surgically connected to nerves in his upper arm. Moreover, the nerves of non-amputees can also be stimulated to feel roughness, without the need of surgery, meaning that prosthetic touch for amputees can now be developed and safely tested on intact individuals.

The technology to deliver this sophisticated tactile information was developed by Silvestro Micera and his team at EPFL (Ecole polytechnique fédérale de Lausanne) and SSSA (Scuola Superiore Sant’Anna) together with Calogero Oddo and his team at SSSA. The results, published today in eLife, provide new and accelerated avenues for developing bionic prostheses, enhanced with sensory feedback.

A March 8, 2016 EPFL press release (also on EurekAlert), which originated the news item, provides more information about Sørensen’s experience and about the other tests the research team performed,

“The stimulation felt almost like what I would feel with my hand,” says amputee Dennis Aabo Sørensen about the artificial fingertip connected to his stump. He continues, “I still feel my missing hand, it is always clenched in a fist. I felt the texture sensations at the tip of the index finger of my phantom hand.”

Sørensen is the first person in the world to recognize texture using a bionic fingertip connected to electrodes that were surgically implanted above his stump.

Nerves in Sørensen’s arm were wired to an artificial fingertip equipped with sensors. A machine controlled the movement of the fingertip over different pieces of plastic engraved with different patterns, smooth or rough. As the fingertip moved across the textured plastic, the sensors generated an electrical signal. This signal was translated into a series of electrical spikes, imitating the language of the nervous system, then delivered to the nerves.
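The encoding step described above, translating a continuous sensor signal into a train of electrical spikes, can be sketched with a simple integrate-and-fire model. This is a hedged illustration only: the study's actual neuromorphic encoding is more sophisticated, and the sample values and threshold below are invented for the example.

```python
# Hypothetical sketch of sensor-signal-to-spike encoding (not the study's
# actual algorithm): a simple integrate-and-fire model accumulates sensor
# output and emits a spike each time the running sum crosses a threshold.

def integrate_and_fire(samples, threshold=200.0):
    """Return spike times (as sample indices) for a sequence of sensor readings."""
    v, spikes = 0.0, []
    for i, s in enumerate(samples):
        v += s                 # integrate the incoming sensor signal
        if v >= threshold:
            spikes.append(i)   # threshold crossed: emit a spike
            v -= threshold     # carry over any excess charge
    return spikes

# A rough texture produces larger sensor deflections, hence more spikes.
smooth = [1.0] * 1000   # small deflections on a smooth surface
rough  = [4.0] * 1000   # larger deflections on a rough surface
print(len(integrate_and_fire(smooth)), len(integrate_and_fire(rough)))
# → 5 20
```

The essential idea carries over to the real system: texture information ends up encoded in spike rate and timing, the "language of the nervous system" that the implanted electrodes then deliver to the nerves.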

Sørensen could distinguish between rough and smooth surfaces 96% of the time.

In a previous study, Sørensen’s implants were connected to a sensory-enhanced prosthetic hand that allowed him to recognize shape and softness. In this new publication about texture in the journal eLife, the bionic fingertip attains a superior level of touch resolution.

Simulating touch in non-amputees

This same experiment testing coarseness was performed on non-amputees, without the need of surgery. The tactile information was delivered through fine needles that were temporarily attached to the arm’s median nerve through the skin. The non-amputees were able to distinguish roughness in textures 77% of the time.

But does this information about touch from the bionic fingertip really resemble the feeling of touch from a real finger? The scientists tested this by comparing brain-wave activity of the non-amputees, once with the artificial fingertip and then with their own finger. The brain scans collected by an EEG cap on the subject’s head revealed that activated regions in the brain were analogous.

The research demonstrates that the needles relay information about texture in much the same way as the implanted electrodes, giving scientists new protocols for accelerating improvements in touch resolution in prosthetics.

“This study merges fundamental sciences and applied engineering: it provides additional evidence that research in neuroprosthetics can contribute to the neuroscience debate, specifically about the neuronal mechanisms of the human sense of touch,” says Calogero Oddo of the BioRobotics Institute of SSSA. “It will also be translated to other applications such as artificial touch in robotics for surgery, rescue, and manufacturing.”

Here’s a link to and a citation for the paper,

Intraneural stimulation elicits discrimination of textural features by artificial fingertip in intact and amputee humans by Calogero Maria Oddo, Stanisa Raspopovic, Fiorenzo Artoni, Alberto Mazzoni, Giacomo Spigler, Francesco Petrini, Federica Giambattistelli, Fabrizio Vecchio, Francesca Miraglia, Loredana Zollo, Giovanni Di Pino, Domenico Camboni, Maria Chiara Carrozza, Eugenio Guglielmelli, Paolo Maria Rossini, Ugo Faraguna, Silvestro Micera. eLife, 2016; 5 DOI: 10.7554/eLife.09148 Published March 8, 2016

This paper appears to be open access.

Blue Brain Project builds a digital piece of brain

Caption: This is a photo of a virtual brain slice. Credit: Markram et al./Cell 2015

Here’s more *about this virtual brain slice* from an Oct. 8, 2015 Cell (magazine) news release on EurekAlert,

If you want to learn how something works, one strategy is to take it apart and put it back together again [also known as reverse engineering]. For 10 years, a global initiative called the Blue Brain Project–hosted at the Ecole Polytechnique Federale de Lausanne (EPFL)–has been attempting to do this digitally with a section of juvenile rat brain. The project presents a first draft of this reconstruction, which contains over 31,000 neurons, 55 layers of cells, and 207 different neuron subtypes, on October 8 [2015] in Cell.

Heroic efforts are currently being made to define all the different types of neurons in the brain, to measure their electrical firing properties, and to map out the circuits that connect them to one another. These painstaking efforts are giving us a glimpse into the building blocks and logic of brain wiring. However, getting a full, high-resolution picture of all the features and activity of the neurons within a brain region and the circuit-level behaviors of these neurons is a major challenge.

Henry Markram and colleagues have taken an engineering approach to this question by digitally reconstructing a slice of the neocortex, an area of the brain that has benefitted from extensive characterization. Using this wealth of data, they built a virtual brain slice representing the different neuron types present in this region and the key features controlling their firing and, most notably, modeling their connectivity, including nearly 40 million synapses and 2,000 connections between each brain cell type.

“The reconstruction required an enormous number of experiments,” says Markram, of the EPFL. “It paves the way for predicting the location, numbers, and even the amount of ion currents flowing through all 40 million synapses.”

Once the reconstruction was complete, the investigators used powerful supercomputers to simulate the behavior of neurons under different conditions. Remarkably, the researchers found that, by slightly adjusting just one parameter, the level of calcium ions, they could produce broader patterns of circuit-level activity that could not be predicted based on features of the individual neurons. For instance, slow synchronous waves of neuronal activity, which have been observed in the brain during sleep, were triggered in their simulations, suggesting that neural circuits may be able to switch into different “states” that could underlie important behaviors.

“An analogy would be a computer processor that can reconfigure to focus on certain tasks,” Markram says. “The experiments suggest the existence of a spectrum of states, so this raises new types of questions, such as ‘what if you’re stuck in the wrong state?'” For instance, Markram suggests that the findings may open up new avenues for explaining how initiating the fight-or-flight response through the adrenocorticotropic hormone yields tunnel vision and aggression.

The Blue Brain Project researchers plan to continue exploring the state-dependent computational theory while improving the model they’ve built. All of the results to date are now freely available to the scientific community at https://bbp.epfl.ch/nmc-portal.

An Oct. 8, 2015 Hebrew University of Jerusalem press release on the Canadian Friends of the Hebrew University of Jerusalem website provides more detail,

Published by the renowned journal Cell, the paper is the result of a massive effort by 82 scientists and engineers at EPFL and at institutions in Israel, Spain, Hungary, USA, China, Sweden, and the UK. It represents the culmination of 20 years of biological experimentation that generated the core dataset, and 10 years of computational science work that developed the algorithms and built the software ecosystem required to digitally reconstruct and simulate the tissue.

The Hebrew University of Jerusalem’s Prof. Idan Segev, a senior author of the research paper, said: “With the Blue Brain Project, we are creating a digital reconstruction of the brain and using supercomputer simulations of its electrical behavior to reveal a variety of brain states. This allows us to examine brain phenomena within a purely digital environment and conduct experiments previously only possible using biological tissue. The insights we gather from these experiments will help us to understand normal and abnormal brain states, and in the future may have the potential to help us develop new avenues for treating brain disorders.”

Segev, a member of the Hebrew University’s Edmond and Lily Safra Center for Brain Sciences and director of the university’s Department of Neurobiology, sees the paper as building on the pioneering work of the Spanish anatomist Ramon y Cajal from more than 100 years ago: “Ramon y Cajal began drawing every type of neuron in the brain by hand. He even drew in arrows to describe how he thought the information was flowing from one neuron to the next. Today, we are doing what Cajal would be doing with the tools of the day: building a digital representation of the neurons and synapses, and simulating the flow of information between neurons on supercomputers. Furthermore, the digitization of the tissue is open to the community and allows the data and the models to be preserved and reused for future generations.”

While a long way from digitizing the whole brain, the study demonstrates that it is feasible to digitally reconstruct and simulate brain tissue, and most importantly, to reveal novel insights into the brain’s functioning. Simulating the emergent electrical behavior of this virtual tissue on supercomputers reproduced a range of previous observations made in experiments on the brain, validating its biological accuracy and providing new insights into the functioning of the neocortex. This is a first step and a significant contribution to Europe’s Human Brain Project, which Henry Markram founded, and where EPFL is the coordinating partner.

Cell has made a video abstract available (it can be found with the Hebrew University of Jerusalem press release)

Here’s a link to and a citation for the paper,

Reconstruction and Simulation of Neocortical Microcircuitry by Henry Markram, Eilif Muller, Srikanth Ramaswamy, Michael W. Reimann, Marwan Abdellah, Carlos Aguado Sanchez, Anastasia Ailamaki, Lidia Alonso-Nanclares, Nicolas Antille, Selim Arsever, Guy Antoine Atenekeng Kahou, Thomas K. Berger, Ahmet Bilgili, Nenad Buncic, Athanassia Chalimourda, Giuseppe Chindemi, Jean-Denis Courcol, Fabien Delalondre, Vincent Delattre, Shaul Druckmann, Raphael Dumusc, James Dynes, Stefan Eilemann, Eyal Gal, Michael Emiel Gevaert, Jean-Pierre Ghobril, Albert Gidon, Joe W. Graham, Anirudh Gupta, Valentin Haenel, Etay Hay, Thomas Heinis, Juan B. Hernando, Michael Hines, Lida Kanari, Daniel Keller, John Kenyon, Georges Khazen, Yihwa Kim, James G. King, Zoltan Kisvarday, Pramod Kumbhar, Sébastien Lasserre, Jean-Vincent Le Bé, Bruno R.C. Magalhães, Angel Merchán-Pérez, Julie Meystre, Benjamin Roy Morrice, Jeffrey Muller, Alberto Muñoz-Céspedes, Shruti Muralidhar, Keerthan Muthurasa, Daniel Nachbaur, Taylor H. Newton, Max Nolte, Aleksandr Ovcharenko, Juan Palacios, Luis Pastor, Rodrigo Perin, Rajnish Ranjan, Imad Riachi, José-Rodrigo Rodríguez, Juan Luis Riquelme, Christian Rössert, Konstantinos Sfyrakis, Ying Shi, Julian C. Shillcock, Gilad Silberberg, Ricardo Silva, Farhan Tauheed, Martin Telefont, Maria Toledo-Rodriguez, Thomas Tränkler, Werner Van Geit, Jafet Villafranca Díaz, Richard Walker, Yun Wang, Stefano M. Zaninetta, Javier DeFelipe, Sean L. Hill, Idan Segev, Felix Schürmann. Cell, Volume 163, Issue 2, p456–492, 8 October 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.09.029

This paper appears to be open access.

My most substantive description of the Blue Brain Project, previous to this, was in a Jan. 29, 2013 posting featuring the European Union’s (EU) Human Brain Project and involvement from countries that are not members.

* I edited a redundant lede (That’s a virtual slice of a rat brain.), moved the second sentence to the lede while adding this:  *about this virtual brain slice* on Oct. 16, 2015 at 0955 hours PST.