Tag Archives: U.S. Department of Energy

Would you like to invest in the Argonne National Laboratory’s reusable oil spill sponge?

A March 7, 2017 news item on phys.org describes some of the US Argonne National Laboratory’s research into oil spill cleanup technology,

When the Deepwater Horizon drilling pipe blew out seven years ago, beginning the worst oil spill [BP oil spill in the Gulf of Mexico] in U.S. history, those in charge of the recovery discovered a new wrinkle: the millions of gallons of oil bubbling from the sea floor weren’t all collecting on the surface where they could be skimmed or burned. Some of the oil was forming a plume and drifting through the ocean under the surface.

Now, scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have invented a new foam, called Oleo Sponge, that addresses this problem. The material not only easily adsorbs oil from water, but is also reusable and can pull dispersed oil from the entire water column—not just the surface.

A March 6, 2017 Argonne National Laboratory news release (also on EurekAlert) by Louise Lerner, which originated the news item, provides more information about the work,

“The Oleo Sponge offers a set of possibilities that, as far as we know, are unprecedented,” said co-inventor Seth Darling, a scientist with Argonne’s Center for Nanoscale Materials and a fellow of the University of Chicago’s Institute for Molecular Engineering.

“We already have a library of molecules that can grab oil, but the problem is how to get them into a useful structure and bind them there permanently.”

The scientists started out with common polyurethane foam, used in everything from furniture cushions to home insulation. This foam has lots of nooks and crannies, like an English muffin, which could provide ample surface area to grab oil; but they needed to give the foam a new surface chemistry in order to firmly attach the oil-loving molecules.

Previously, Darling and fellow Argonne chemist Jeff Elam had developed a technique called sequential infiltration synthesis, or SIS, which can be used to infuse hard metal oxide atoms within complicated nanostructures.

After some trial and error, they found a way to adapt the technique to grow an extremely thin layer of metal oxide “primer” near the foam’s interior surfaces. This serves as the perfect glue for attaching the oil-loving molecules, which are deposited in a second step; they hold onto the metal oxide layer with one end and reach out to grab oil molecules with the other.

The result is Oleo Sponge, a block of foam that easily adsorbs oil from the water. The material, which looks a bit like an outdoor seat cushion, can be wrung out to be reused—and the oil itself recovered.

Oleo Sponge

In tests at a giant seawater tank in New Jersey called Ohmsett, the National Oil Spill Response Research & Renewable Energy Test Facility, the Oleo Sponge successfully collected diesel and crude oil from both below and on the water surface.

“The material is extremely sturdy. We’ve run dozens to hundreds of tests, wringing it out each time, and we have yet to see it break down at all,” Darling said.

Oleo Sponge could potentially also be used routinely to clean harbors and ports, where diesel and oil tend to accumulate from ship traffic, said John Harvey, a business development executive with Argonne’s Technology Development and Commercialization division.

Elam, Darling and the rest of the team are continuing to develop the technology.

“The technique offers enormous flexibility, and can be adapted to other types of cleanup besides oil in seawater. You could attach a different molecule to grab any specific substance you need,” Elam said.

The team is actively looking to commercialize [emphasis mine] the material, Harvey said; those interested in licensing the technology or collaborating with the laboratory on further development may contact partners@anl.gov.

Here’s a link to and a citation for the paper,

Advanced oil sorbents using sequential infiltration synthesis by Edward Barry, Anil U. Mane, Joseph A. Libera, Jeffrey W. Elam, and Seth B. Darling. J. Mater. Chem. A, 2017,5, 2929-2935 DOI: 10.1039/C6TA09014A First published online 11 Jan 2017

This paper is behind a paywall.

The two most recent posts here featuring oil spill technology are my Nov. 3, 2016 piece titled: Oil spill cleanup nanotechnology-enabled solution from A*STAR and my Sept. 15, 2016 piece titled: Canada’s Ingenuity Lab receives a $1.7M grant to develop oil recovery system for oil spills. I hope that one of these days someone manages to commercialize at least one of the new oil spill technologies. It seems that there hasn’t been much progress since the BP (Deepwater Horizon) oil spill. If someone has better information than I do about the current state of oil spill cleanup technologies, please do leave a comment.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it in memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural pathway in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict within 1 percent uncertainty what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
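For readers who like to see the idea in code, here is a minimal sketch, in Python, of what "predict the programming voltage for a target state" can mean: treat the device as a noisy, roughly linear state-versus-voltage element, probe it a few times, fit a gain, then predict the voltage for a target state. Every number and the linear model itself are illustrative assumptions, not the Stanford/Sandia device model.

```python
# Toy model of programming a non-volatile analog synapse (illustrative only).
# Assumption (not from the paper): the programmed state responds roughly
# linearly to the applied voltage, with a little write-to-write noise.
import random

random.seed(0)

def device_response(voltage):
    """Hypothetical device: state reached for a given programming voltage."""
    true_gain = 480.0           # states per volt (placeholder)
    noise = random.gauss(0, 1)  # small variability between writes
    return true_gain * voltage + noise

# "Training": probe the device at a few voltages and fit state = gain * V.
probes = [(v, device_response(v)) for v in (0.1, 0.2, 0.4, 0.6, 0.8, 1.0)]
gain = sum(s * v for v, s in probes) / sum(v * v for v, _ in probes)

# Prediction: what voltage should reach a target state, and how close do we get?
target_state = 250
predicted_voltage = target_state / gain
reached = device_response(predicted_voltage)
print(f"predicted voltage: {predicted_voltage:.3f} V")
print(f"state reached: {reached:.1f} (error {100 * abs(reached - target_state) / target_state:.2f}%)")
```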

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.
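To give a concrete, if simplified, picture of what simulating an array of synapses in a neural network involves, here is a hedged sketch using scikit-learn's built-in 8×8 handwritten-digit set. It trains a small network, then snaps every weight to one of a fixed number of levels, standing in for a device with a finite set of conductance states, and checks how much accuracy survives. The dataset, network size, and quantization scheme are assumptions for illustration; this is not the Sandia simulation.

```python
# Sketch: digit classification with weights quantized to a finite number of
# levels, mimicking an array of analog synapses with discrete conductance
# states. Illustrative only; not the setup used in the paper.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("full-precision accuracy:", net.score(X_test, y_test))

def quantize(w, levels=500):
    """Snap each weight to one of `levels` evenly spaced values."""
    lo, hi = w.min(), w.max()
    if hi == lo:
        return w
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

net.coefs_ = [quantize(w) for w in net.coefs_]
net.intercepts_ = [quantize(b) for b in net.intercepts_]
print("500-level quantized accuracy:", net.score(X_test, y_test))
```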

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
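The rough factor of 10,000 can be recovered from the figures quoted here and in the abstract reproduced further down: the organic device switches with somewhat less than 10 pJ, while a biological synaptic event costs on the order of 1–100 fJ. Taking the low end of that biological range as the reference, 10 pJ ÷ 1 fJ = (10 × 10⁻¹² J) ÷ (1 × 10⁻¹⁵ J) = 10,000.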

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1,2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4,5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems [6,7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Growing shells atom-by-atom

The University of California at Davis (UC Davis) and the University of Washington (state) collaborated in research into fundamental questions on how aquatic animals grow. From an Oct. 24, 2016 news item on ScienceDaily,

For the first time scientists can see how the shells of tiny marine organisms grow atom-by-atom, a new study reports. The advance provides new insights into the mechanisms of biomineralization and will improve our understanding of environmental change in Earth’s past.

An Oct. 24, 2016 UC Davis news release by Becky Oskin, which originated the news item, provides more detail,

Led by researchers from the University of California, Davis and the University of Washington, with key support from the U.S. Department of Energy’s Pacific Northwest National Laboratory, the team examined an organic-mineral interface where the first calcium carbonate crystals start to appear in the shells of foraminifera, a type of plankton.

“We’ve gotten the first glimpse of the biological event horizon,” said Howard Spero, a study co-author and UC Davis geochemistry professor. …

Foraminifera’s Final Frontier

The researchers zoomed into shells at the atomic level to better understand how growth processes may influence the levels of trace impurities in shells. The team looked at a key stage — the interaction between the biological ‘template’ and the initiation of shell growth. The scientists produced an atom-scale map of the chemistry at this crucial interface in the foraminifera Orbulina universa. This is the first-ever measurement of the chemistry of a calcium carbonate biomineralization template, Spero said.

Among the new findings are elevated levels of sodium and magnesium in the organic layer. This is surprising because the two elements are not considered important architects in building shells, said lead study author Oscar Branson, a former postdoctoral researcher at UC Davis who is now at the Australian National University in Canberra. Also, the greater concentrations of magnesium and sodium in the organic template may need to be considered when investigating past climate with foraminifera shells.

Calibrating Earth’s Climate

Most of what we know about past climate (beyond ice core records) comes from chemical analyses of shells made by the tiny, one-celled creatures called foraminifera, or “forams.” When forams die, their shells sink and are preserved in seafloor mud. The chemistry preserved in ancient shells chronicles climate change on Earth, an archive that stretches back nearly 200 million years.

The calcium carbonate shells incorporate elements from seawater — such as calcium, magnesium and sodium — as the shells grow. The amount of trace impurities in a shell depends on both the surrounding environmental conditions and how the shells are made. For example, the more magnesium a shell has, the warmer the ocean was where that shell grew.

“Finding out how much magnesium there is in a shell can allow us to find out the temperature of seawater going back up to 150 million years,” Branson said.
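For anyone curious about how that magnesium-to-temperature step is usually done, paleoceanographers fit an exponential Mg/Ca calibration of the general form Mg/Ca = B·exp(A·T) and invert it. The constants differ by foraminifera species and by study, so the values in this sketch are placeholders rather than a published calibration; it only illustrates the shape of the calculation.

```python
# Generic exponential Mg/Ca paleothermometer: mg_ca = B * exp(A * T).
# A and B below are illustrative placeholders, not a published calibration;
# real coefficients are species- and study-specific.
import math

A = 0.09   # per degree C (placeholder)
B = 0.38   # mmol/mol (placeholder)

def temperature_from_mg_ca(mg_ca_mmol_mol):
    """Invert the calibration to estimate seawater temperature in deg C."""
    return math.log(mg_ca_mmol_mol / B) / A

for mg_ca in (2.0, 3.5, 5.0):
    print(f"Mg/Ca = {mg_ca} mmol/mol  ->  T ~ {temperature_from_mg_ca(mg_ca):.1f} C")
```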

But magnesium levels also vary within a shell, because of nanometer-scale growth bands. Each band is one day’s growth (similar to the seasonal variations in tree rings). Branson said considerable gaps persist in understanding what exactly causes the daily bands in the shells.

“We know that shell formation processes are important for shell chemistry, but we don’t know much about these processes or how they might have changed through time,” he said. “This adds considerable uncertainty to climate reconstructions.”

Atomic Maps

The researchers used two cutting-edge techniques: Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) and Laser-Assisted Atom Probe Tomography (APT). ToF-SIMS is a two-dimensional chemical mapping technique which shows the elemental composition of the surface of a polished sample. The technique was developed for the elemental analysis of complex polymer materials, and is just starting to be applied to natural samples like shells.

APT is an atomic-scale three-dimensional mapping technique, developed for looking at internal structures in advanced alloys, silicon chips and superconductors. The APT imaging was performed at the Environmental Molecular Sciences Laboratory, a U.S. Department of Energy Office of Science User Facility at the Pacific Northwest National Laboratory.

This foraminifera is just starting to form its adult spherical shell. The calcium carbonate spherical shell first forms on a thin organic template, shown here in white, around the dark juvenile skeleton. Calcium carbonate spines then extend from the juvenile skeleton through the new sphere and outward. The bright flecks are algae that the foraminifera “farm” for sustenance. Howard Spero/University of California, Davis

An Oct. 24, 2016 University of Washington (state) news release (also on EurekAlert) adds more information (there is a little repetition),

Unseen out in the ocean, countless single-celled organisms grow protective shells to keep them safe as they drift along, living off other tiny marine plants and animals. Taken together, the shells are so plentiful that when they sink they provide one of the best records for the history of ocean chemistry.

Oceanographers at the University of Washington and the University of California, Davis, have used modern tools to provide an atomic-scale look at how that shell first forms. Results could help answer fundamental questions about how these creatures grow under different ocean conditions, in the past and in the future. …

“There’s this debate among scientists about whether shelled organisms are slaves to the chemistry of the ocean, or whether they have the physiological capacity to adapt to changing environmental conditions,” said senior author Alex Gagnon, a UW assistant professor of oceanography.

The new work shows, he said, that they do exert some biologically-based control over shell formation.

“I think it’s just incredible that we were able to peer into the intricate details of those first moments that set how a seashell forms,” Gagnon said. “And that’s what sets how much of the rest of the skeleton will grow.”

The results could eventually help understand how organisms at the base of the marine food chain will respond to more acidic waters. And while the study looked at one organism, Orbulina universa, which is important for understanding past climate, the same method could be used for other plankton, corals and shellfish.

The study used tools developed for materials science and semiconductor research to view the shell formation in the most detail yet to see how the organisms turn seawater into solid mineral.

“We’re interested more broadly in the question ‘How do organisms make shells?'” said first author Oscar Branson, a former postdoctoral researcher at the University of California, Davis who is now at Australian National University in Canberra. “We’ve focused on a key stage in mineral formation — the interaction between biological template materials and the initiation of shell growth by an organism.”

These tiny single-celled animals, called foraminifera, can’t reproduce anywhere but in their natural surroundings, which prevents breeding them in captivity. The researchers caught juvenile foraminifera by diving in deep water off Southern California. They then raised them in the lab, using tiny pipettes to feed them brine shrimp during their weeklong lives.

Marine shells are made from calcium carbonate, drawing the calcium and carbon from surrounding seawater. But the animal first grows a soft template for the mineral to grow over. Because this template is trapped within the growing skeleton, it acts as a snapshot of the chemical conditions during the first part of skeletal growth.

To see this chemical picture, the authors analyzed tiny sections of foraminifera template with a technique called atom probe tomography at the Pacific Northwest National Laboratory. This tool creates an atom-by-atom picture of the organic template, which was located using a chemical tag.

Results show that the template contains more magnesium and sodium atoms than expected, and that this could influence how the mineral in the shell begins to grow around it.
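As a purely conceptual sketch of what "an atom-by-atom picture" gives you to work with, the snippet below builds a synthetic atom-probe-style point cloud, with a thin slab deliberately enriched in magnesium and sodium, and compares the composition inside and outside that slab. The data, element set, and enrichment are invented for illustration; the real reconstruction and analysis are done with dedicated atom probe software.

```python
# Conceptual sketch: comparing local composition inside and outside a thin
# "template" slab in a synthetic, APT-style list of labelled atom positions.
# All data here are made up for illustration.
import random
from collections import Counter

random.seed(1)

def synthetic_element(z_nm):
    """Pick an element label; the slab 4-6 nm deep gets extra Mg and Na."""
    in_slab = 4.0 < z_nm < 6.0
    weights = {"Ca": 20, "C": 30, "O": 40,
               "Mg": 6 if in_slab else 1, "Na": 4 if in_slab else 1}
    return random.choices(list(weights), weights=list(weights.values()))[0]

atoms = [(z, synthetic_element(z))
         for z in (random.uniform(0.0, 10.0) for _ in range(50000))]

def mg_na_fractions(selection):
    counts = Counter(element for _, element in selection)
    total = sum(counts.values())
    return {el: round(counts[el] / total, 3) for el in ("Mg", "Na")}

slab = [a for a in atoms if 4.0 < a[0] < 6.0]
rest = [a for a in atoms if not 4.0 < a[0] < 6.0]
print("template slab Mg/Na fractions:", mg_na_fractions(slab))
print("surrounding Mg/Na fractions:  ", mg_na_fractions(rest))
```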

“One of the key stages in growing a skeleton is when you make that first bit, when you build that first bit of structure. Anything that changes that process is a key control point,” Gagnon said.

This enrichment suggests that magnesium and sodium play a role in the first stages of shell growth. If their availability changes for any reason, that could influence how the shell grows beyond what simple chemistry would predict.

“We can say who the players are — further experiments will have to tell us exactly how important each of them is,” Gagnon said.

Follow-up work will try to grow the shells and create models of their formation to see how the template affects growth under different conditions, such as more acidic water.

“Translating that into, ‘Can these forams survive ocean acidification?’ is still many steps down the line,” Gagnon cautioned. “But you can’t do that until you have a picture of what that surface actually looks like.”

The researchers also hope that by better understanding the exact mechanism of shell growth they could tease apart different aspects of seafloor remains so the shells can be used to reconstruct more than just the ocean’s past temperature. In the study, they showed that the template was responsible for causing fine lines in the shells — one example of the rich chemical information encoded in fossil shells.

“There are ways that you could separate the effects of temperature from other things and learn much more about the past ocean,” Gagnon said.

Here’s a link to and a citation for the paper,

Nanometer-Scale Chemistry of a Calcite Biomineralization Template: Implications for Skeletal Composition and Nucleation, Proceedings of the National Academy of Sciences, www.pnas.org/cgi/doi/10.1073/pnas.1522864113

This paper is behind a paywall.

Self-shading electrochromic windows from the Massachusetts Institute of Technology

It’s been a while since I’ve had a story about electrochromic windows and I’ve begun to despair that they will ever reach the marketplace. Happily, the Massachusetts Institute of Technology (MIT) has supplied a ray of light (intentional wordplay). An Aug. 11, 2016 news item on Nanowerk makes the announcement,

A team of researchers at MIT has developed a new way of making windows that can switch from transparent to opaque, potentially saving energy by blocking sunlight on hot days and thus reducing air-conditioning costs. While other systems for causing glass to darken do exist, the new method offers significant advantages by combining rapid response times and low power needs.

Once the glass is switched from clear to dark, or vice versa, the new system requires little to no power to maintain its new state; unlike other materials, it only needs electricity when it’s time to switch back again.

An Aug. 11, 2016 MIT news release (also on EurekAlert), which originated the news item, explains the technology in more detail,

The new discovery uses electrochromic materials, which change their color and transparency in response to an applied voltage, Dinca [MIT professor of chemistry Mircea Dinca] explains. These are quite different from photochromic materials, such as those found in some eyeglasses that become darker when the light gets brighter. Such materials tend to have much slower response times and to undergo a smaller change in their levels of opacity.

Existing electrochromic materials suffer from similar limitations and have found only niche applications. For example, Boeing 787 aircraft have electrochromic windows that get darker to prevent bright sunlight from glaring through the cabin. The windows can be darkened by turning on the voltage, Dinca says, but “when you flip the switch, it actually takes a few minutes for the window to turn dark. Obviously, you want that to be faster.”

The reason for that slowness is that the changes within the material rely on a movement of electrons — an electric current — that gives the whole window a negative charge. Positive ions then move through the material to restore the electrical balance, creating the color-changing effect. But while electrons flow rapidly through materials, ions move much more slowly, limiting the overall reaction speed.

The MIT team overcame that by using sponge-like materials called metal-organic frameworks (MOFs), which can conduct both electrons and ions at very high speeds. Such materials have been used for about 20 years for their ability to store gases within their structure, but the MIT team was the first to harness them for their electrical and optical properties.

The other problem with existing versions of self-shading materials, Dinca says, is that “it’s hard to get a material that changes from completely transparent to, let’s say, completely black.” Even the windows in the 787 can only change to a dark shade of green, rather than becoming opaque.

In previous research on MOFs, Dinca and his students had made material that could turn from clear to shades of blue or green, but in this newly reported work they have achieved the long-sought goal of producing a coating that can go all the way from perfectly clear to nearly black (achieved by blending two complementary colors, green and red). The new material is made by combining two chemical compounds, an organic material and a metal salt. Once mixed, these self-assemble into a thin film of the switchable material.

“It’s this combination of these two, of a relatively fast switching time and a nearly black color, that has really got people excited,” Dinca says.

The new windows have the potential, he says, to do much more than just preventing glare. “These could lead to pretty significant energy savings,” he says, by drastically reducing the need for air conditioning in buildings with many windows in hot climates. “You could just flip a switch when the sun shines through the window, and turn it dark,” or even automatically make that whole side of the building go dark all at once, he says.

While the properties of the material have now been demonstrated in a laboratory setting, the team’s next step is to make a small-scale device for further testing: a 1-inch-square sample, to demonstrate the principle in action for potential investors in the technology, and to help determine what the manufacturing costs for such windows would be.

Further testing is also needed, Dinca says, to demonstrate what they have determined from preliminary testing: that once the switch is flipped and the material changes color, it requires no further power to maintain its new state. No extra power is needed until the switch is flipped to turn the material back to its former state, whether clear or opaque. Many existing electrochromic materials, by contrast, require a continuous voltage input.
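To make the "no power to hold a state" point concrete, here is a hedged, purely conceptual sketch of a bistable window controller that spends energy only when it switches and nothing to stay clear or dark. The energy-per-switch value is an arbitrary placeholder, not a figure from the MIT work.

```python
# Conceptual sketch of a bistable ("non-volatile") smart window: energy is
# drawn only on a state change, none to hold the current state.
class BistableWindow:
    def __init__(self, energy_per_switch_j=1.0):   # placeholder value
        self.state = "clear"
        self.energy_per_switch_j = energy_per_switch_j
        self.energy_used_j = 0.0

    def set_state(self, target):
        if target != self.state:        # power is used only when switching
            self.energy_used_j += self.energy_per_switch_j
            self.state = target
        # holding the current state costs nothing

window = BistableWindow()
# A toy day: dark while the sun is on the window, clear otherwise.
for sunny in [False] * 8 + [True] * 8 + [False] * 8:
    window.set_state("dark" if sunny else "clear")
print(f"total switching energy over 24 hours: {window.energy_used_j} J")
```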

In addition to smart windows, Dinca says, the material could also be used for some kinds of low-power displays, similar to displays like electronic ink (used in devices such as the Kindle and based on MIT-developed technology) but based on a completely different approach.

Not surprisingly perhaps, the research was partly funded by an organization in a region where such light-blocking windows would be particularly useful: The Masdar Institute, based in the United Arab Emirates, through a cooperative agreement with MIT. The research also received support from the U.S. Department of Energy, through the Center for Excitonics, an Energy Frontier Research Center.

Here’s a link to and a citation for the paper,

Transparent-to-Dark Electrochromic Behavior in Naphthalene-Diimide-Based Mesoporous MOF-74 Analogs by Khalid AlKaabi, Casey R. Wade, Mircea Dincă. Chem, Volume 1, Issue 2, 11 August 2016, Pages 264–272 doi:10.1016/j.chempr.2016.06.013

This paper is behind a paywall.

For those curious about the windows, there’s this .gif from MIT,

MIT_ElectrochromicWindows

Capturing neon in an organic environment

Neon observed experimentally within the pores of NiMOF-74 at 100 K and 100 bar of neon gas pressure. Courtesy: Cambridge Crystallographic Data Centre (CCDC)

An Aug. 10, 2016 news item on Nanowerk announces the breakthrough (Note: A link has been removed),

In a new study, researchers from the Cambridge Crystallographic Data Centre (CCDC) and the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory have teamed up to capture neon within a porous crystalline framework. Neon is well known for being the most unreactive element and is a key component in semiconductor manufacturing, but neon has never been studied within an organic or metal-organic framework until now.

The results (Chemical Communications, “Capturing neon – the first experimental structure of neon trapped within a metal–organic environment”), which include the critical studies carried out at the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne, also point the way towards a more economical and greener industrial process for neon production.

An Aug. 10, 2016 Cambridge Crystallographic Data Centre (CCDC) press release, which originated the news item, explains more about neon and about the new process,

Neon is an element that is well-known to the general public due to its iconic use in neon signs, especially in city centres in the United States from the 1920s to the 1960s. In recent years, the industrial use of neon has become dominated by use in excimer lasers to produce semiconductors. Despite being the fifth most abundant element in the atmosphere, the cost of pure neon gas has risen significantly over the years, increasing the demand for better ways to separate and isolate the gas.

During 2015, CCDC scientists presented a talk at the annual American Crystallographic Association (ACA) meeting on the array of elements that have been studied within an organic or metal-organic environment, challenging the crystallographic community to find the next and possibly last element to be added to the Cambridge Structural Database (CSD). A chance encounter at that meeting with Andrey Yakovenko, a beamline scientist at the Advanced Photon Source, resulted in a collaborative project to capture neon – the 95th element to be observed in the CSD.

Neon’s low reactivity, along with the weak scattering of X-rays due to its relatively low number of electrons, means that conclusive experimental observation of neon captured within a crystalline framework is very challenging. In situ high pressure gas flow experiments performed at X-Ray Science Division beamline 17-BM at the APS using the X-ray powder diffraction technique at low temperatures managed to elucidate the structure of two different metal-organic frameworks with neon gas captured within the materials.
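X-ray powder diffraction rests on Bragg's law, nλ = 2d·sin θ, which turns the angle of each diffraction peak into a spacing between planes of atoms in the crystal. A quick, hedged illustration of that conversion follows; the wavelength and peak positions are made up for the example, not actual beamline 17-BM data.

```python
# Bragg's law: n * wavelength = 2 * d * sin(theta).
# Convert example powder-diffraction peak positions (2-theta) to d-spacings.
# The wavelength and peak angles are illustrative, not data from 17-BM.
import math

wavelength_angstrom = 0.45  # illustrative hard X-ray synchrotron wavelength

def d_spacing(two_theta_deg, n=1):
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength_angstrom / (2.0 * math.sin(theta))

for two_theta in (3.2, 6.5, 9.8):  # made-up peak positions in degrees
    print(f"2-theta = {two_theta:4.1f} deg  ->  d = {d_spacing(two_theta):5.2f} angstrom")
```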

“This is a really exciting moment representing the latest new element to be added to the CSD and quite possibly the last given the experimental and safety challenges associated with the other elements yet to be studied” said Peter Wood, Senior Research Scientist at CCDC and lead author on the paper published in Chemical Communications. “More importantly, the structures reported here show the first observation of a genuine interaction between neon and a transition metal, suggesting the potential for future design of selective neon capture frameworks”.

The structure of neon captured within the framework known as NiMOF-74, a porous framework built from nickel metal centres and organic linkers, shows clear nickel-to-neon interactions forming at low temperatures that are significantly shorter than would be expected for a typical weak contact.

Andrey Yakovenko said “These fascinating results show the great capabilities of the scientific program at 17-BM and the Advanced Photon Source. Previously we have been doing experiments at our beamline using other much heavier, and therefore easily detectable, noble gases such as xenon and krypton. However, after meeting co-authors Pete, Colin, Amy and Suzanna at the ACA meeting, we decided to perform these much more complicated experiments using the very light and inert gas – neon. In fact, only by using a combination of in situ X-ray powder diffraction measurements, low temperature and high pressure have we been able to conclusively identify the neon atom positions beyond reasonable doubt”.

Summarising the findings, Chris Cahill, Past President of the ACA and Professor of Chemistry, George Washington University said “This is a really elegant piece of in situ crystallography research and it is particularly pleasing to see the collaboration coming about through discussions at an annual ACA meeting”.

The paper describing this study is published in the journal Chemical Communications, http://dx.doi.org/10.1039/C6CC04808K. All of the crystal structures reported in the paper are available from the CCDC website: http://www.ccdc.cam.ac.uk/structures?doi=10.1039/C6CC04808K.

Here’s another link to the paper, this time with a citation,

Capturing neon – the first experimental structure of neon trapped within a metal–organic environment by Peter A. Wood, Amy A. Sarjeant, Andrey A. Yakovenko, Suzanna C. Ward, and Colin R. Groom. Chem. Commun., 2016, 52, 10048-10051 DOI: 10.1039/C6CC04808K First published online 19 Jul 2016

The paper is open access but you need a free Royal Society of Chemistry publishing personal account to access it.

Directing self-assembly of multiple molecular patterns within a single material

Self-assembly in this context references the notion of ‘bottom-up engineering’, that is, following nature’s engineering process where elements assemble themselves into a plant, animal, or something else. Humans have for centuries used an approach known as ‘top-down engineering’ where we take materials and reform them, e.g., trees into paper or houses.

Theoretically, bottom-up engineering (self-assembly) is more efficient than top-down engineering but we have yet to become as skilled as Nature at the process.

Scientists at the US Brookhaven National Laboratory believe they have taken a step in the right direction with regard to self-assembly. An Aug. 8, 2016 Brookhaven National Laboratory news release (also on EurekAlert) by Justin Eure describes the research (Note: A link has been removed),

To continue advancing, next-generation electronic devices must fully exploit the nanoscale, where materials span just billionths of a meter. But balancing complexity, precision, and manufacturing scalability on such fantastically small scales is inevitably difficult. Fortunately, some nanomaterials can be coaxed into snapping themselves into desired formations, a process called self-assembly.

Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have just developed a way to direct the self-assembly of multiple molecular patterns within a single material, producing new nanoscale architectures. The results were published in the journal Nature Communications.

“This is a significant conceptual leap in self-assembly,” said Brookhaven Lab physicist Aaron Stein, lead author on the study. “In the past, we were limited to a single emergent pattern, but this technique breaks that barrier with relative ease. This is significant for basic research, certainly, but it could also change the way we design and manufacture electronics.”

Microchips, for example, use meticulously patterned templates to produce the nanoscale structures that process and store information. Through self-assembly, however, these structures can spontaneously form without that exhaustive preliminary patterning. And now, self-assembly can generate multiple distinct patterns, greatly increasing the complexity of nanostructures that can be formed in a single step.

“This technique fits quite easily into existing microchip fabrication workflows,” said study coauthor Kevin Yager, also a Brookhaven physicist. “It’s exciting to make a fundamental discovery that could one day find its way into our computers.”

The experimental work was conducted entirely at Brookhaven Lab’s Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility, leveraging in-house expertise and instrumentation.

Cooking up organized complexity

The collaboration used block copolymers, chains of two distinct molecules linked together, because of their intrinsic ability to self-assemble.

“As powerful as self-assembly is, we suspected that guiding the process would enhance it to create truly ‘responsive’ self-assembly,” said study coauthor Greg Doerk of Brookhaven. “That’s exactly where we pushed it.”

To guide self-assembly, scientists create precise but simple substrate templates. Using a method called electron beam lithography (Stein’s specialty), they etch patterns thousands of times thinner than a human hair on the template surface. They then add a solution containing a set of block copolymers onto the template, spin the substrate to create a thin coating, and “bake” it all in an oven to kick the molecules into formation. Thermal energy drives interaction between the block copolymers and the template, setting the final configuration, in this instance parallel lines or dots in a grid.

“In conventional self-assembly, the final nanostructures follow the template’s guiding lines, but are of a single pattern type,” Stein said. “But that all just changed.”

Lines and dots, living together

The collaboration had previously discovered that mixing together different block copolymers allowed multiple, co-existing line and dot nanostructures to form.

“We had discovered an exciting phenomenon, but couldn’t select which morphology would emerge,” Yager said. But then the team found that tweaking the substrate changed the structures that emerged. By simply adjusting the spacing and thickness of the lithographic line patterns, which are easy to fabricate using modern tools, the self-assembling blocks can be locally converted into ultra-thin lines, or high-density arrays of nano-dots.
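A deliberately crude way to picture how template geometry can select one morphology over another is a commensurability argument: each candidate pattern has a natural repeat distance, and the pattern whose repeat fits the template pitch with the least strain tends to win. The toy below scores that mismatch for made-up periods and pitches; it is a conceptual cartoon, not the physics or the parameters in the Brookhaven study.

```python
# Toy commensurability score: how far is the template pitch from an integer
# multiple of each candidate pattern's natural period? Smaller is better.
# Periods and pitches are placeholders, not values from the study.
def mismatch(template_pitch_nm, natural_period_nm):
    n = max(1, round(template_pitch_nm / natural_period_nm))
    return abs(template_pitch_nm - n * natural_period_nm) / natural_period_nm

candidates = {"lines": 28.0, "dots": 20.0}   # placeholder natural periods (nm)

for pitch in (56.0, 60.0, 80.0, 84.0):       # placeholder template pitches (nm)
    best = min(candidates, key=lambda name: mismatch(pitch, candidates[name]))
    print(f"template pitch {pitch:5.1f} nm -> favours {best}")
```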

“We realized that combining our self-assembling materials with nanofabricated guides gave us that elusive control. And, of course, these new geometries are achieved on an incredibly small scale,” said Yager.

“In essence,” said Stein, “we’ve created ‘smart’ templates for nanomaterial self-assembly. How far we can push the technique remains to be seen, but it opens some very promising pathways.”

Gwen Wright, another CFN coauthor, added, “Many nano-fabrication labs should be able to do this tomorrow with their in-house tools; the trick was discovering it was even possible.”

The scientists plan to increase the sophistication of the process, using more complex materials in order to move toward more device-like architectures.

“The ongoing and open collaboration within the CFN made this possible,” said Charles Black, director of the CFN. “We had experts in self-assembly, electron beam lithography, and even electron microscopy to characterize the materials, all under one roof, all pushing the limits of nanoscience.”

Here’s a link to and a citation for the paper,

Selective directed self-assembly of coexisting morphologies using block copolymer blends by A. Stein, G. Wright, K. G. Yager, G. S. Doerk, & C. T. Black. Nature Communications 7, Article number: 12366  doi:10.1038/ncomms12366 Published 02 August 2016

This paper is open access.

Self-healing diamond-like carbon from the Argonne Lab (US)

Argonne researchers, from left, Subramanian Sankaranarayanan, Badri Narayanan, Ali Erdemir, Giovanni Ramirez and Osman Levent Eryilmaz show off metal engine parts that have been treated with a diamond-like carbon coating similar to one developed and explored by the team. The catalytic coating interacts with engine oil to create a self-healing diamond-like film that could have profound implications for the efficiency and durability of future engines. (photo by Wes Agresta)

Argonne researchers, from left, Subramanian Sankaranarayanan, Badri Narayanan, Ali Erdemir, Giovanni Ramirez and Osman Levent Eryilmaz show off metal engine parts that have been treated with a diamond-like carbon coating similar to one developed and explored by the team. The catalytic coating interacts with engine oil to create a self-healing diamond-like film that could have profound implications for the efficiency and durability of future engines. (photo by Wes Agresta)

An Aug. 5, 2016 news item on ScienceDaily makes the announcement,

Fans of Superman surely recall how the Man of Steel used immense heat and pressure generated by his bare hands to form a diamond out of a lump of coal.

The tribologists — scientists who study friction, wear, and lubrication — and computational materials scientists at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory will probably never be mistaken for superheroes. However, they recently applied the same principles and discovered a revolutionary diamond-like film of their own that is generated by the heat and pressure of an automotive engine.

An Aug. 5, 2016 Argonne National Laboratory news release (also on EurekAlert) by Greg Cunningham, which originated the news item, explains further,

The discovery of this ultra-durable, self-lubricating tribofilm, a film that forms between moving surfaces, was first reported yesterday in the journal Nature. It could have profound implications for the efficiency and durability of future engines and other moving metal parts that can be made to develop self-healing, diamond-like carbon (DLC) tribofilms.

“This is a very unique discovery, and one that was a little unexpected,” said Ali Erdemir, the Argonne Distinguished Fellow who leads the team. “We have developed many types of diamond-like carbon coatings of our own, but we’ve never found one that generates itself by breaking down the molecules of the lubricating oil and can actually regenerate the tribofilm as it is worn away.”

The phenomenon was first discovered several years ago by Erdemir and his colleague Osman Levent Eryilmaz in the Tribology and Thermal-Mechanics Department in Argonne’s Center for Transportation Research. But it took theoretical insight enhanced by the massive computing resources available at Argonne to fully understand what was happening at the molecular level in the experiments. The theoretical understanding was provided by lead theoretical researcher Subramanian Sankaranarayanan and postdoctoral researcher Badri Narayanan from the Center for Nanoscale Materials (CNM), while the computing power was provided by the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. CNM, ALCF and NERSC are all DOE Office of Science User Facilities.

The original discovery occurred when Erdemir and Eryilmaz decided to see what would happen when a small steel ring was coated with a catalytically active nanocoating – tiny molecules of metals that promote chemical reactions to break down other materials – then subjected to high pressure and heat using a base oil without the complex additives of modern lubricants. When they looked at the ring after the endurance test, they didn’t see the expected rust and surface damage, but an intact ring with an odd blackish deposit on the contact area.

“This test creates extreme contact pressure and temperatures, which are supposed to cause the ring to wear and eventually seize,” said Eryilmaz. “But this ring didn’t significantly wear and this blackish deposit was visible. We said, ‘This material is strange. Maybe this is what is causing this unusual effect.'”

Looking at the deposit using high-powered optical and laser Raman microscopes, the experimentalists realized the deposit was a tribofilm of diamond-like carbon, similar to several other DLCs developed at Argonne in the past. But it worked even better. Tests revealed the DLC tribofilm reduced friction by 25 to 40 percent and that wear was reduced to unmeasurable values.

Further experiments, led by postdoctoral researcher Giovanni Ramirez, revealed that multiple types of catalytic coatings can yield DLC tribofilms. The experiments showed the coatings interact with the oil molecules to create the DLC film, which adheres to the metal surfaces. When the tribofilm is worn away, the catalyst in the coating is re-exposed to the oil, causing the catalysis to restart and develop new layers of tribofilm. The process is self-regulating, keeping the film at consistent thickness. The scientists realized the film was developing spontaneously between the sliding surfaces and was replenishing itself, but they needed to understand why and how.
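The self-regulating thickness can be pictured as a simple negative-feedback loop: the film grows only where the catalytic coating is exposed, and the coating is exposed only where the film is thin or freshly worn. The toy model below, with arbitrary rate constants, shows the thickness settling where growth balances wear; it is a conceptual sketch, not the chemistry simulated at Argonne.

```python
# Toy negative-feedback model of a self-regulating tribofilm. Growth is
# proportional to exposed catalyst (more exposure when the film is thin);
# wear removes material at a constant rate. All constants are arbitrary.
growth_rate = 1.0   # nm per step at full catalyst exposure (placeholder)
wear_rate = 0.4     # nm per step removed by sliding contact (placeholder)
saturation = 5.0    # thickness (nm) at which the catalyst is fully covered

thickness = 0.0
for step in range(60):
    exposure = max(0.0, 1.0 - thickness / saturation)  # 1 = bare, 0 = covered
    thickness = max(0.0, thickness + growth_rate * exposure - wear_rate)

# Growth falls as the film thickens, so the thickness converges to the point
# where growth equals wear; removing material re-exposes catalyst and the
# film regrows, which is the self-healing behaviour described above.
print(f"steady-state thickness ~ {thickness:.2f} nm")
```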

To provide the theoretical understanding of what the tribology team was seeing in its experiments, they turned to Sankaranarayanan and Narayanan, who used the immense computing power of ALCF’s 10-petaflop supercomputer, Mira. They ran large-scale simulations to understand what was happening at the atomic level, and determined that the catalyst metals in the nanocomposite coatings were stripping hydrogen atoms from the hydrocarbon chains of the lubricating oil, then breaking the chains down into smaller segments. The smaller chains joined together under pressure to create the highly durable DLC tribofilm.

“This is an example of catalysis under extreme conditions created by friction. It is opening up a new field where you are merging catalysis and tribology, which has never been done before,” said Sankaranarayanan. “This new field of tribocatalysis has the potential to change the way we look at lubrication.”

The theorists explored the origins of the catalytic activity to understand how catalysis operates under the extreme heat and pressure in an engine. By gaining this understanding, they were able to predict which catalysts would work, and which would create the most advantageous tribofilms.

“Interestingly, we found several metals or composites that we didn’t think would be catalytically active, but under these circumstances, they performed quite well,” said Narayanan. “This opens up new pathways for scientists to use extreme conditions to enhance catalytic activity.”

The implications of the new tribofilm for efficiency and reliability of engines are huge. Manufacturers already use many different types of coatings — some developed at Argonne — for metal parts in engines and other applications. The problem is those coatings are expensive and difficult to apply, and once they are in use, they only last until the coating wears through. The new catalyst allows the tribofilm to be continually renewed during operation.

Additionally, because the tribofilm develops in the presence of base oil, it could allow manufacturers to reduce, or possibly eliminate, some of the modern anti-friction and anti-wear additives in oil. These additives can decrease the efficiency of vehicle catalytic converters and can be harmful to the environment because of their heavy metal content.

Here’s a link to and a citation for the paper,

Carbon-based tribofilms from lubricating oils by Ali Erdemir, Giovanni Ramirez, Osman L. Eryilmaz, Badri Narayanan, Yifeng Liao, Ganesh Kamath, & Subramanian K. R. S. Sankaranarayanan. Nature 536, 67–71 (04 August 2016) doi:10.1038/nature18948 Published online 03 August 2016

This paper is behind a paywall.

DNA as a framework for rationally designed nanostructures

After publishing a June 15, 2016 post about taking DNA (deoxyribonucleic acid) beyond genetics, it seemed like a good idea to publish a companion piece featuring a more technical explanation of at least one way DNA might provide the basis for living computers and robots. From a June 13, 2016 Brookhaven National Laboratory news release (also on EurekAlert),

A cube, an octahedron, a prism–these are among the polyhedral structures, or frames, made of DNA that scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have designed to connect nanoparticles into a variety of precisely structured three-dimensional (3D) lattices. The scientists also developed a method to integrate nanoparticles and DNA frames into interconnecting modules, expanding the diversity of possible structures.

These achievements, described in papers published in Nature Materials and Nature Chemistry, could enable the rational design of nanomaterials with enhanced or combined optical, electric, and magnetic properties to achieve desired functions.

“We are aiming to create self-assembled nanostructures from blueprints,” said physicist Oleg Gang, who led this research at the Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility at Brookhaven. “The structure of our nanoparticle assemblies is mostly controlled by the shape and binding properties of precisely designed DNA frames, not by the nanoparticles themselves. By enabling us to engineer different lattices and architectures without having to manipulate the particles, our method opens up great opportunities for designing nanomaterials with properties that can be enhanced by precisely organizing functional components. For example, we could create targeted light-absorbing materials that harness solar energy, or magnetic materials that increase information-storage capacity.”

The news release goes on to describe the frames,

Gang’s team has previously exploited DNA’s complementary base pairing–the highly specific binding of bases known by the letters A, T, G, and C that make up the rungs of the DNA double-helix “ladder”–to bring particles together in a precise way. Particles coated with single strands of DNA link to particles coated with complementary strands (A binds with T and G binds with C) while repelling particles coated with non-complementary strands.
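The pairing rule itself is simple enough to spell out in a few lines. The hedged sketch below just checks whether two single strands are Watson-Crick complements (A with T, G with C, read antiparallel), which is the recognition property the Brookhaven team exploits; the sequence is hypothetical and this is obviously not their design software.

```python
# Watson-Crick complementarity: A pairs with T, G pairs with C. Strands
# hybridize antiparallel, so one strand is compared with the reverse
# complement of the other. The example sequence is hypothetical.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    return "".join(PAIR[base] for base in reversed(strand))

def can_bind(strand_a, strand_b):
    """True if strand_b is the antiparallel complement of strand_a."""
    return strand_b == reverse_complement(strand_a)

tether = "ATGGCATT"                          # hypothetical sticky-end sequence
print(reverse_complement(tether))            # -> AATGCCAT
print(can_bind(tether, "AATGCCAT"))          # True: these strands hybridize
print(can_bind(tether, "ATGGCATT"))          # False: non-complementary strands do not bind
```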

They have also designed 3D DNA frames whose corners have single-stranded DNA tethers to which nanoparticles coated with complementary strands can bind. When the scientists mix these nanoparticles and frames, the components self-assemble into lattices that are mainly defined by the shape of the designed frame. The Nature Materials paper describes the most recent structures achieved using this strategy.

“In our approach, we use DNA frames to promote the directional interactions between nanoparticles such that the particles connect into specific configurations that achieve the desired 3D arrays,” said Ye Tian, lead author on the Nature Materials paper and a member of Gang’s research team. “The geometry of each particle-linking frame is directly related to the lattice type, though the exact nature of this relationship is still being explored.”

So far, the team has designed five polyhedral frame shapes–a cube, an octahedron, an elongated square bipyramid, a prism, and a triangular bipyramid–but a variety of other shapes could be created.

“The idea is to construct different 3D structures (buildings) from the same nanoparticle (brick),” said Gang. “Usually, the particles need to be modified to produce the desired structures. Our approach significantly reduces the structure’s dependence on the nature of the particle, which can be gold, silver, iron, or any other inorganic material.”

Nanoparticles (yellow balls) capped with short single-stranded DNA (blue squiggly lines) are mixed with polyhedral DNA frames (from top to bottom): cube, octahedron, elongated square bipyramid, prism, and triangular bipyramid. The frames’ vertices are encoded with complementary DNA strands for nanoparticle binding. When the corresponding frames and particles mix, they form a framework. Courtesy of Brookhaven National Laboratory

There’s also a discussion about how DNA origami was used to design the frames,

To design the frames, the team used DNA origami, a self-assembly technique in which short synthetic strands of DNA (staple strands) are mixed with a longer single strand of biologically derived DNA (scaffold strand). When the scientists heat and cool this mixture, the staple strands selectively bind with or “staple” the scaffold strand, causing the scaffold strand to repeatedly fold over onto itself. Computer software helps them determine the specific sequences for folding the DNA into desired shapes.

The folding of the single-stranded DNA scaffold introduces anchoring points that contain free “sticky” ends–unpaired strings of DNA bases–where nanoparticles coated with complementary single-strand tethers can attach. These sticky ends can be positioned anywhere on the DNA frame, but Gang’s team chose the corners so that multiple frames could be connected.

For each frame shape, the number of DNA strands linking a frame corner to an individual nanoparticle is equivalent to the number of edges converging at that corner. The cube and prism frames have three strands at each corner, for example. By making these corner tethers with varying numbers of bases, the scientists can tune the flexibility and length of the particle-frame linkages.
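That corner-by-corner rule, tethers equal to the number of edges meeting at the vertex, is just the vertex degree of the frame's polyhedron, which is easy to check from an edge list. The edge lists below are standard geometry written out for illustration, not taken from the team's actual designs.

```python
# Tethers per corner = vertex degree = number of edges meeting at that vertex.
# Edge lists are standard polyhedron geometry, used here only for illustration.
from collections import Counter
from itertools import combinations

# Cube: vertices are 3-bit labels; edges join labels differing in one bit.
cube_edges = [(v, v ^ bit) for v in range(8) for bit in (1, 2, 4) if v < v ^ bit]

# Octahedron: 6 vertices (+x, -x, +y, -y, +z, -z); every pair is an edge
# except the three opposite pairs.
opposite = {(0, 1), (2, 3), (4, 5)}
octahedron_edges = [e for e in combinations(range(6), 2) if e not in opposite]

def degrees(edges):
    count = Counter()
    for a, b in edges:
        count[a] += 1
        count[b] += 1
    return count

print("cube tethers per corner:      ", set(degrees(cube_edges).values()))        # {3}
print("octahedron tethers per corner:", set(degrees(octahedron_edges).values()))  # {4}
```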

The interparticle distances are determined by the lengths of the frame edges, which are tens of nanometers in the frames designed to date, but the scientists say it should be possible to tailor the frames to achieve any desired dimensions.

The scientists verified the frame structures and nanoparticle arrangements through cryo-electron microscopy (a type of microscopy conducted at very low temperatures) at the CFN and Brookhaven’s Biology Department, and x-ray scattering at the National Synchrotron Light Source II (NSLS-II), a DOE Office of Science User Facility at Brookhaven.

The team started with a relatively simple form (from the news release),

In the Nature Chemistry paper, Gang’s team described how they used a similar DNA-based approach to create programmable two-dimensional (2D), square-like DNA frames around single nanoparticles.

DNA strands inside the frames provide coupling to complementary DNA on the nanoparticles, essentially holding the particle inside the frame. Each exterior side of the frame can be individually encoded with different DNA sequences. These outer DNA strands guide frame-frame recognition and connection.

Gang likens these DNA-framed nanoparticle modules to Legos whose interactions are programmed: “Each module can hold a different kind of nanoparticle and interlock to other modules in different but specific ways, fully determined by the complementary pairing of the DNA bases on the sides of the frame.”

In other words, the frames not only determine if the nanoparticles will connect but also how they will connect. Programming the frame sides with specific DNA sequences means only frames with complementary sequences can link up.
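
Here's a small conceptual sketch of that Lego-like logic, again mine rather than the team's software: each square module carries four outer side sequences, and two modules can link only on sides whose sequences are reverse complements. The module names and sequences are invented.

```python
# Conceptual sketch: square DNA-frame modules whose four outer sides carry
# DNA sequences; two modules can connect only on sides whose sequences are
# reverse complements. Module names and sequences are invented.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[b] for b in reversed(seq))

class Module:
    def __init__(self, name, sides):
        # sides: DNA sequences on the top, right, bottom, left edges of the frame
        self.name = name
        self.sides = sides

    def links_to(self, other):
        """Return (own_side, other_side) index pairs that can hybridize."""
        return [(i, j)
                for i, s in enumerate(self.sides)
                for j, t in enumerate(other.sides)
                if t == reverse_complement(s)]

a = Module("A", ["ATGGC", "TTACG", "GGATC", "CCTAA"])
b = Module("B", ["GCCAT", "AAAAA", "CCCCC", "GGGGG"])  # side 0 pairs with A's side 0

print(a.links_to(b))   # [(0, 0)] -- A's top edge can bind B's top edge, nothing else matches
```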

Mixing different types of modules together can yield a variety of structures, similar to the constructs that can be generated from Lego pieces. By creating a library of the modules, the scientists hope to be able to assemble structures on demand.

Finally, the discussion turns to the assembly of multifunctional nanomaterials (from the news release),

The selectivity of the connections enables different types and sizes of nanoparticles to be combined into single structures.

The geometry of the connections, or how the particles are oriented in space, is very important to designing structures with desired functions. For example, optically active nanoparticles can be arranged in a particular geometry to rotate, filter, absorb, and emit light–capabilities that are relevant for energy-harvesting applications, such as display screens and solar panels.

By using different modules from the “library,” Gang’s team demonstrated the self-assembly of one-dimensional linear arrays, “zigzag” chains, square-shaped and cross-shaped clusters, and 2D square lattices. The scientists even generated a simplistic nanoscale model of Leonardo da Vinci’s Vitruvian Man.

“We wanted to demonstrate that complex nanoparticle architectures can be self-assembled using our approach,” said Gang.

Again, the scientists used sophisticated imaging techniques–electron and atomic force microscopy at the CFN and x-ray scattering at NSLS-II–to verify that their structures were consistent with the prescribed designs and to study the assembly process in detail.

“Although many additional studies are required, our results show that we are making advances toward our goal of creating designed matter via self-assembly, including periodic particle arrays and complex nanoarchitectures with freeform shapes,” said Gang. “Our approach is exciting because it is a new platform for nanoscale manufacturing, one that can lead to a variety of rationally designed functional materials.”

Here’s an image illustrating, among other things, da Vinci’s Vitruvian Man,

A schematic diagram (left) showing how a nanoparticle (yellow ball) is incorporated within a square-like DNA frame. The DNA strands inside the frame (blue squiggly lines) are complementary to the DNA strands on the nanoparticle; the colored strands on the outer edges of the frame have different DNA sequences that determine how the DNA-framed nanoparticle modules can connect. The architecture shown (middle) is a simplistic nanoscale representation of Leonardo da Vinci’s Vitruvian Man, assembled from several module types. The scientists used atomic force microscopy to generate the high-magnification image of this assembly (right). Courtesy Brookhaven National Laboratory

I enjoy the overviews provided by various writers and thinkers in the field but it’s details such as these that are often most compelling to me.

A treasure trove of molecule and battery data released to the public

Scientists working on The Materials Project have taken the notion of open science to their hearts and opened up access to their data, according to a June 9, 2016 news item on Nanowerk,

The Materials Project, a Google-like database of material properties aimed at accelerating innovation, has released an enormous trove of data to the public, giving scientists working on fuel cells, photovoltaics, thermoelectrics, and a host of other advanced materials a powerful tool to explore new research avenues. But it has become a particularly important resource for researchers working on batteries. Co-founded and directed by Lawrence Berkeley National Laboratory (Berkeley Lab) scientist Kristin Persson, the Materials Project uses supercomputers to calculate the properties of materials based on first-principles quantum-mechanical frameworks. It was launched in 2011 by the U.S. Department of Energy’s (DOE) Office of Science.

A June 8, 2016 Berkeley Lab news release, which originated the news item, provides more explanation about The Materials Project,

The idea behind the Materials Project is that it can save researchers time by predicting material properties without needing to synthesize the materials first in the lab. It can also suggest new candidate materials that experimentalists had not previously dreamed up. With a user-friendly web interface, users can look up the calculated properties, such as voltage, capacity, band gap, and density, for tens of thousands of materials.
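
For anyone who wants to try it, the database can also be queried programmatically. Here's a minimal sketch using pymatgen's MPRester client; treat it as illustrative rather than definitive, since the client interface has changed over the years and method names may differ in current releases, and you'll need a free API key from materialsproject.org.

```python
# Sketch: querying the Materials Project for calculated properties such as
# band gap and density, via pymatgen's MPRester client. Requires a free API
# key from materialsproject.org; the exact interface varies across releases.

from pymatgen.ext.matproj import MPRester

API_KEY = "YOUR_MP_API_KEY"  # placeholder, not a real key

with MPRester(API_KEY) as mpr:
    # Look up lithium iron oxides and pull a few calculated properties.
    results = mpr.query(
        criteria={"elements": {"$all": ["Li", "Fe", "O"]}},
        properties=["material_id", "pretty_formula", "band_gap", "density"],
    )

for entry in results[:5]:
    print(entry["material_id"], entry["pretty_formula"],
          entry["band_gap"], entry["density"])
```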

Two sets of data were released last month: nearly 1,500 compounds investigated for multivalent intercalation electrodes and more than 21,000 organic molecules relevant for liquid electrolytes as well as a host of other research applications. Batteries with multivalent cathodes (which have multiple electrons per mobile ion available for charge transfer) are promising candidates for reducing cost and achieving higher energy density than that available with current lithium-ion technology.
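
To see why "multiple electrons per mobile ion" matters, a standard textbook relation (not from the news release) gives the theoretical specific capacity of a host as Q = nF/(3.6·M) in mAh/g, where n is the number of electrons transferred per formula unit, F is the Faraday constant, and M is the molar mass; for a comparable host, a divalent ion such as Mg²⁺ roughly doubles n and hence the capacity. A quick illustration with made-up numbers:

```python
# Illustrative only: theoretical specific capacity of an intercalation host,
# Q = n * F / (3.6 * M)  [mAh/g], where n is the number of electrons
# transferred per formula unit, F the Faraday constant (C/mol), and M the
# molar mass (g/mol). A divalent ion such as Mg2+ carries two electrons per
# ion, which is the appeal of multivalent cathodes.

F = 96485.0  # Faraday constant, C/mol

def specific_capacity_mah_per_g(n_electrons: int, molar_mass_g_mol: float) -> float:
    return n_electrons * F / (3.6 * molar_mass_g_mol)

# Hypothetical host with a molar mass of 180 g/mol:
print(specific_capacity_mah_per_g(1, 180.0))  # ~149 mAh/g with a monovalent ion
print(specific_capacity_mah_per_g(2, 180.0))  # ~298 mAh/g with a divalent ion
```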

The sheer volume and scope of the data is unprecedented, said Persson, who is also a professor in UC Berkeley’s Department of Materials Science and Engineering. “As far as the multivalent cathodes, there’s nothing similar in the world that exists,” she said. “To give you an idea, experimentalists are usually able to focus on one of these materials at a time. Using calculations, we’ve added data on 1,500 different compositions.”

While other research groups have made their data publicly available, what makes the Materials Project so useful are the online tools to search all that data. The recent release includes two new web apps—the Molecules Explorer and the Redox Flow Battery Dashboard—plus an add-on to the Battery Explorer web app enabling researchers to work with other ions in addition to lithium.

“Not only do we give the data freely, we also give algorithms and software to interpret or search over the data,” Persson said.

The Redox Flow Battery app gives scientific parameters as well as techno-economic ones, so battery designers can quickly rule out a molecule that might work well but be prohibitively expensive. The Molecules Explorer app will be useful to researchers far beyond the battery community.

“For multivalent batteries it’s so hard to get good experimental data,” Persson said. “The calculations provide rich and robust benchmarks to assess whether the experiments are actually measuring a valid intercalation process or a side reaction, which is particularly difficult for multivalent energy technology because there are so many problems with testing these batteries.”

Here’s a screen capture from the Battery Explorer app,

The Materials Project’s Battery Explorer app now allows researchers to work with other ions in addition to lithium. Courtesy: The Materials Project

The news release goes on to describe a new discovery made possible by The Materials Project (Note: A link has been removed),

Together with Persson, Berkeley Lab scientist Gerbrand Ceder, postdoctoral associate Miao Liu, and MIT graduate student Ziqin Rong, the Materials Project team investigated some of the more promising materials in detail for high multivalent ion mobility, which is the most difficult property to achieve in these cathodes. This led the team to materials known as thiospinels. One of these thiospinels has double the capacity of the currently known multivalent cathodes and was recently synthesized and tested in the lab by JCESR researcher Linda Nazar of the University of Waterloo, Canada.

“These materials may not work well the first time you make them,” Persson said. “You have to be persistent; for example you may have to make the material very phase pure or smaller than a particular particle size and you have to test them under very controlled conditions. There are people who have actually tried this material before and discarded it because they thought it didn’t work particularly well. The power of the computations and the design metrics we have uncovered with their help is that it gives us the confidence to keep trying.”

The researchers were able to double the energy capacity of what had previously been achieved for this kind of multivalent battery. The study has been published in the journal Energy & Environmental Science in an article titled, “A High Capacity Thiospinel Cathode for Mg Batteries.”

“The new multivalent battery works really well,” Persson said. “It’s a significant advance and an excellent proof-of-concept for computational predictions as a valuable new tool for battery research.”

Here’s a link to and a citation for the paper,

A high capacity thiospinel cathode for Mg batteries by Xiaoqi Sun, Patrick Bonnick, Victor Duffort, Miao Liu, Ziqin Rong, Kristin A. Persson, Gerbrand Ceder and Linda F. Nazar. Energy Environ. Sci., 2016, Advance Article. DOI: 10.1039/C6EE00724D. First published online 24 May 2016.

This paper seems to be behind a paywall.

Getting back to the news release, there’s more about The Materials Project and its user community,

The Materials Project has attracted more than 20,000 users since launching five years ago. Every day about 20 new users register and 300 to 400 people log in to do research.

One of those users is Dane Morgan, a professor of engineering at the University of Wisconsin-Madison who develops new materials for a wide range of applications, including highly active catalysts for fuel cells, stable low-work function electron emitter cathodes for high-powered microwave devices, and efficient, inexpensive, and environmentally safe solar materials.

“The Materials Project has enabled some of the most exciting research in my group,” said Morgan, who also serves on the Materials Project’s advisory board. “By providing easy access to a huge database, as well as tools to process that data for thermodynamic predictions, the Materials Project has enabled my group to rapidly take on materials design projects that would have been prohibitive just a few years ago.”

More materials are being calculated and added to the database every day. In two years, Persson expects another trove of data to be released to the public.

“This is the way to reach a significant part of the research community, to reach students while they’re still learning material science,” she said. “It’s a teaching tool. It’s a science tool. It’s unprecedented.”

Supercomputing clusters at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility hosted at Berkeley Lab, provide the infrastructure for the Materials Project.

Funding for the Materials Project is provided by the Office of Science (US Department of Energy), including support through JCESR [Joint Center for Energy Storage Research].

Happy researching!