Rice University engineers have zeroed in on the optimal architecture for storing hydrogen in “white graphene” nanomaterials — a design like a Lilliputian skyscraper with “floors” of boron nitride sitting one atop another and held precisely 5.2 angstroms apart by boron nitride pillars.
Caption: Thousands of hours of calculations on Rice University’s two fastest supercomputers found that the optimal architecture for packing hydrogen into “white graphene” involves making skyscraper-like frameworks of vertical columns and one-dimensional floors that are about 5.2 angstroms apart. In this illustration, hydrogen molecules (white) sit between sheet-like floors of graphene (gray) that are supported by boron-nitride pillars (pink and blue). Researchers found that identical structures made wholly of boron-nitride had unprecedented capacity for storing readily available hydrogen. Credit: Lei Tao/Rice University
“The motivation is to create an efficient material that can take up and hold a lot of hydrogen — both by volume and weight — and that can quickly and easily release that hydrogen when it’s needed,” [emphasis mine] said the study’s lead author, Rouzbeh Shahsavari, assistant professor of civil and environmental engineering at Rice.
Hydrogen is the lightest and most abundant element in the universe, and its energy-to-mass ratio — the amount of available energy per pound of raw material, for example — far exceeds that of fossil fuels. It’s also the cleanest way to generate electricity: The only byproduct is water. A 2017 report by market analysts at BCC Research found that global demand for hydrogen storage materials and technologies will likely reach $5.4 billion annually by 2021.
Hydrogen’s primary drawbacks relate to portability, storage and safety. While large volumes can be stored under high pressure in underground salt domes and specially designed tanks, small-scale portable tanks — the equivalent of an automobile gas tank — have so far eluded engineers.
Following months of calculations on two of Rice’s fastest supercomputers, Shahsavari and Rice graduate student Shuo Zhao found the optimal architecture for storing hydrogen in boron nitride. One form of the material, hexagonal boron nitride (hBN), consists of atom-thick sheets of boron and nitrogen and is sometimes called white graphene because the atoms are spaced exactly like carbon atoms in flat sheets of graphene.
Previous work in Shahsavari’s Multiscale Materials Lab found that hybrid materials of graphene and boron nitride could hold enough hydrogen to meet the Department of Energy’s storage targets for light-duty fuel cell vehicles.
“The choice of material is important,” he said. “Boron nitride has been shown to be better in terms of hydrogen absorption than pure graphene, carbon nanotubes or hybrids of graphene and boron nitride.
“But the spacing and arrangement of hBN sheets and pillars is also critical,” he said. “So we decided to perform an exhaustive search of all the possible geometries of hBN to see which worked best. We also expanded the calculations to include various temperatures, pressures and dopants, trace elements that can be added to the boron nitride to enhance its hydrogen storage capacity.”
Zhao and Shahsavari set up numerous “ab initio” tests, computer simulations that used first principles of physics. Shahsavari said the approach was computationally intense but worth the extra effort because it offered the most precision.
“We conducted nearly 4,000 ab initio calculations to try and find that sweet spot where the material and geometry go hand in hand and really work together to optimize hydrogen storage,” he said.
Unlike materials that store hydrogen through chemical bonding, Shahsavari said boron nitride is a sorbent that holds hydrogen through physical bonds, which are weaker than chemical bonds. That’s an advantage when it comes to getting hydrogen out of storage because sorbent materials tend to discharge more easily than their chemical cousins, Shahsavari said.
He said the choice of boron nitride sheets or tubes and the corresponding spacing between them in the superstructure were the key to maximizing capacity.
“Without pillars, the sheets sit naturally one atop the other about 3 angstroms apart, and very few hydrogen atoms can penetrate that space,” he said. “When the distance grew to 6 angstroms or more, the capacity also fell off. At 5.2 angstroms, there is a cooperative attraction from both the ceiling and floor, and the hydrogen tends to clump in the middle. Conversely, models made of purely BN tubes — not sheets — had less storage capacity.”
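The spacing argument in the quote above can be illustrated with a toy model: treat each sheet as a smooth Lennard-Jones 9-3 wall and ask how the energy of a molecule sitting midway between the two walls varies with their separation. This is purely illustrative Python, not the ab initio method the team used, and the potential form and the σ value are assumptions chosen to make the qualitative behaviour visible.

```python
# Toy model (NOT the study's ab initio method): an H2 molecule between two
# parallel sheets, each modeled as a Lennard-Jones 9-3 wall. SIGMA and EPS
# are illustrative placeholders, not fitted values.
SIGMA = 3.0   # angstroms, rough molecule-sheet length scale (assumed)
EPS = 1.0     # arbitrary energy units

def wall_energy(z):
    """LJ 9-3 potential of a single wall at distance z."""
    s = SIGMA / z
    return EPS * ((2.0 / 15.0) * s**9 - s**3)

def midpoint_energy(d):
    """Energy of a molecule sitting midway between walls separated by d."""
    return 2.0 * wall_energy(d / 2.0)

# Scan wall separations: too narrow is strongly repulsive, too wide loses
# the cooperative attraction from both "ceiling" and "floor".
for d in (3.0, 4.0, 5.0, 5.2, 6.0, 8.0):
    print(f"d = {d:4.1f} A  ->  E(midpoint) = {midpoint_energy(d):+.3f}")
```

With this (deliberately convenient) choice of σ, the midpoint energy is strongly positive at a 3 Å separation, bottoms out near 5.2 Å, and weakens again beyond 6 Å, echoing the qualitative picture in the quote; the real numbers come from the quantum mechanical calculations described above.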
Shahsavari said models showed that the pure hBN tube-sheet structures could hold 8 weight percent of hydrogen. (Weight percent is a measure of concentration, similar to parts per million.) Physical experiments are needed to verify that capacity, but the DOE’s ultimate target is 7.5 weight percent, and Shahsavari’s models suggest even more hydrogen can be stored in his structure if trace amounts of lithium are added to the hBN.
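As a quick arithmetic check of what a weight-percent figure implies: the 8 and 7.5 wt% figures are from the text, but the 5 kg fill (roughly what a fuel-cell car carries), the host mass, and the convention that wt% is stored hydrogen mass over total system mass are illustrative assumptions.

```python
# Hedged convention: wt% = stored-H mass / (stored-H mass + host mass).
def weight_percent(m_hydrogen_kg, m_host_kg):
    return 100.0 * m_hydrogen_kg / (m_hydrogen_kg + m_host_kg)

# Example: 5 kg of hydrogen in a hypothetical 57.5 kg hBN framework.
m_h2, m_host = 5.0, 57.5
print(f"{weight_percent(m_h2, m_host):.1f} wt%")   # -> 8.0 wt%
```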
Finally, Shahsavari said, irregularities in the flat, floor-like sheets of the structure could also prove useful for engineers.
“Wrinkles form naturally in the sheets of pillared boron nitride because of the nature of the junctions between the columns and floors,” he said. “In fact, this could also be advantageous because the wrinkles can provide toughness. If the material is placed under load or impact, that buckled shape can unbuckle easily without breaking. This could add to the material’s safety, which is a big concern in hydrogen storage devices.
“Furthermore, the high thermal conductivity and flexibility of BN may provide additional opportunities to control the adsorption and release kinetics on-demand,” Shahsavari said. “For example, it may be possible to control release kinetics by applying an external voltage, heat or an electric field.”
I may be wrong, but this quote, “The motivation is to create an efficient material that can take up and hold a lot of hydrogen — both by volume and weight — and that can quickly and easily release that hydrogen when it’s needed, …”, sounds like a description of a supercapacitor. One other comment: this research appears to be ‘in silico’, i.e., all the testing has been done as computer simulations, and the proposed materials themselves have yet to be synthesized or tested.
I’d have to see it to believe it but researchers at the US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory (LBNL) have developed a new kind of ‘bijel’ which would allow for some pretty nifty robotics. From a Sept. 25, 2017 news item on ScienceDaily,
A new two-dimensional film, made of polymers and nanoparticles and developed by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), can direct two different non-mixing liquids into a variety of exotic architectures. This finding could lead to soft robotics, liquid circuitry, shape-shifting fluids, and a host of new materials that use soft, rather than solid, substances.
The study, reported today in the journal Nature Nanotechnology, presents the newest entry in a class of substances known as bicontinuous jammed emulsion gels, or bijels, which hold promise as a malleable liquid that can support catalytic reactions, electrical conductivity, and energy conversion.
Bijels are typically made of immiscible, or non-mixing, liquids. People who shake their bottle of vinaigrette before pouring the dressing on their salad are familiar with such liquids. As soon as the shaking stops, the liquids start to separate again, with the lower density liquid – often oil – rising to the top.
Trapping, or jamming, particles where these immiscible liquids meet can prevent the liquids from completely separating, stabilizing the substance into a bijel. What makes bijels remarkable is that, rather than just making the spherical droplets that we normally see when we try to mix oil and water, the particles at the interface shape the liquids into complex networks of interconnected fluid channels.
Bijels are notoriously difficult to make, however, involving exact temperatures at precisely timed stages. In addition, the liquid channels are normally more than 5 micrometers across, making them too large to be useful in energy conversion and catalysis.
“Bijels have long been of interest as next-generation materials for energy applications and chemical synthesis,” said study lead author Caili Huang. “The problem has been making enough of them, and with features of the right size. In this work, we crack that problem.”
Huang started the work as a graduate student with Thomas Russell, the study’s principal investigator, at Berkeley Lab’s Materials Sciences Division, and he continued the project as a postdoctoral researcher at DOE’s Oak Ridge National Laboratory.
Creating a new bijel recipe
The method described in this new study simplifies the bijel process by first using specially coated particles about 10-20 nanometers in diameter. The smaller-sized particles line the liquid interfaces much more quickly than the ones used in traditional bijels, making the smaller channels that are highly valued for applications.
Illustration shows key stages of bijel formation. Clockwise from top left, two non-mixing liquids are shown. Ligands (shown in yellow) with amine groups are dispersed throughout the oil or solvent, and nanoparticles coated with carboxylic acids (shown as blue dots) are scattered in the water. With vigorous shaking, the nanoparticles and ligands form a “supersoap” that gets trapped at the interface of the two liquids. The bottom panel is a magnified view of the jammed nanoparticle supersoap. (Credit: Caili Huang/ORNL)
“We’ve basically taken liquids like oil and water and given them a structure, and it’s a structure that can be changed,” said Russell, a visiting faculty scientist at Berkeley Lab. “If the nanoparticles are responsive to electrical, magnetic, or mechanical stimuli, the bijels can become reconfigurable and re-shaped on demand by an external field.”
The researchers were able to prepare new bijels from a variety of common organic, water-insoluble solvents, such as toluene, with ligands dissolved in them, and deionized water, which contained the nanoparticles. To ensure thorough mixing of the liquids, they subjected the emulsion to a vortex spinning at 3,200 revolutions per minute.
“This extreme shaking creates a whole bunch of new places where these particles and polymers can meet each other,” said study co-author Joe Forth, a postdoctoral fellow at Berkeley Lab’s Materials Sciences Division. “You’re synthesizing a lot of this material, which is in effect a thin, 2-D coating of the liquid surfaces in the system.”
The liquids remained a bijel even after one week, a sign of the system’s stability.
Russell, who is also a professor of polymer science and engineering at the University of Massachusetts-Amherst, added that these shape-shifting characteristics would be valuable in microreactors, microfluidic devices, and soft actuators.
Nanoparticles had not been seriously considered in bijels before because their small size made them hard to trap in the liquid interface. To resolve that problem, the researchers coated nano-sized particles with carboxylic acids and put them in water. They then took polymers with an added amine group – a derivative of ammonia – and dissolved them in the toluene.
At left is a vial of bijel stabilized with nanoparticle surfactants. On the right is the same vial after a week of inversion, showing that the nanoparticle kept the liquids from moving. (Credit: Caili Huang/ORNL)
This configuration took advantage of the amine group’s affinity to water, a characteristic that is comparable to surfactants, like soap. Their nanoparticle “supersoap” was designed so that the nanoparticles join ligands, forming an octopus-like shape with a polar head and nonpolar legs that get jammed at the interface, the researchers said.
“Bijels are really a new material, and also excitingly weird in that they are kinetically arrested in these unusual configurations,” said study co-author Brett Helms, a staff scientist at Berkeley Lab’s Molecular Foundry. “The discovery that you can make these bijels with simple ingredients is a surprise. We all have access to oils and water and nanocrystals, allowing broad tunability in bijel properties. This platform also allows us to experiment with new ways to control their shape and function since they are both responsive and reconfigurable.”
The nanoparticles were made of silica, but the researchers noted that in previous studies they used graphene and carbon nanotubes to form nanoparticle surfactants.
“The key is that the nanoparticles can be made of many materials,” said Russell. “The most important thing is what’s on the surface.”
This is an animation of the bijel
3-D rendering of the nanoparticle bijel taken by confocal microscope. (Credit: Caili Huang/ORNL [Oak Ridge National Laboratory] and Joe Forth/Berkeley Lab)
Identification of the precise 3-D coordinates of iron, shown in red, and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab
The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,
In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.
The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.
Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.
“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.
Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.
By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.
“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.
The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.
“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”
The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.
“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.
In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.
Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.
That means that radiation-sensitive objects can be imaged with lower doses of radiation.
The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued their own Feb. 1, 2017 news release (also on EurekAlert) about the work with a focus on how their equipment made this breakthrough possible (it repeats a little of the info from the UCLA news release),
Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.
The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.
What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.
Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …
Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.
“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.
Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.
Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.
“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.
A TEAM approach
The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.
The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.
They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
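The tilt-series idea behind this reconstruction can be illustrated with a toy example in plain NumPy (this is nothing like the real GENFIRE algorithm, and the densities are made up): each tilt of the microscope yields a projection, a sum of the density along the beam direction, and no single projection determines the structure.

```python
import numpy as np

# A tiny 2-D "density" standing in for the nanoparticle.
density = np.array([
    [0, 2, 0],
    [0, 0, 0],
    [1, 0, 3],
], dtype=float)

p_down  = density.sum(axis=0)   # beam along columns
p_right = density.sum(axis=1)   # beam along rows

print("projection (beam down): ", p_down)    # [1. 2. 3.]
print("projection (beam right):", p_right)   # [2. 0. 4.]

# A different density with the SAME downward projection shows the ambiguity:
other = np.array([
    [1, 2, 3],
    [0, 0, 0],
    [0, 0, 0],
], dtype=float)
assert np.allclose(other.sum(axis=0), p_down)        # indistinguishable...
assert not np.allclose(other.sum(axis=1), p_right)   # ...until we tilt
```

Collecting projections over many tilt angles is what lets the reconstruction algorithm pin down a unique 3-D arrangement of atoms.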
Translating the data into scientific insights
Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, which was not previously possible in complex 3-D boundaries.
“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.
To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL – using the coordinates and chemical type of each atom – to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.
“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.
Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”
The folks at the Berkeley Lab have created a video (notice where the still image from the beginning of this post appears),
The Oak Ridge National Laboratory (ORNL), not wanting to be left out, has been mentioned in a Feb. 3, 2017 news item on ScienceDaily,
… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.
“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.
Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new–simulate magnetism atom by atom in a real nanoparticle.
Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.
Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.
“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are–the coordinates–and the chemical resolution, or what they are — iron or platinum.”
The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,
A Supercomputing Milestone
Magnetism at the atomic level is driven by quantum mechanics — a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.
For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.
“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.
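Kent’s point about bits flipping at room temperature is the standard Néel–Arrhenius picture: a written bit flips thermally at a rate of roughly f0·exp(−KV/kT), so the ratio of anisotropy energy KV to thermal energy kT controls retention. A back-of-the-envelope sketch, where the attempt frequency and the thresholds are textbook rules of thumb, not numbers from this study:

```python
import math

# Neel-Arrhenius estimate (textbook rule of thumb, not from the study):
# mean time before a thermal flip ~ exp(K*V / (kB*T)) / f0.
F0 = 1e9   # attempt frequency in Hz (typical assumed value)

def mean_flip_time_s(KV_over_kT):
    return math.exp(KV_over_kT) / F0

for ratio in (20, 40, 60):
    t = mean_flip_time_s(ratio)
    print(f"KV/kT = {ratio}: mean time before a flip ~ {t:.3e} s")
# Around KV/kT ~ 40 a bit survives for years; at 20 it flips in under a second.
```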
To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.
To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.
“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.
As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received an Association for Computing Machinery Gordon Bell Prize in high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures.
Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.
“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.
Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.
In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible — as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms.
Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.
“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.
Finally, here’s a link to and a citation for the paper,
You might want to skip over the reference to snow as it doesn’t have much relevance to this story about ‘melting’, from a Feb. 1, 2017 news item on Nanowerk (Note: A link has been removed),
Snow falls in winter and melts in spring, but what drives the phase change in between?
Although melting is a familiar phenomenon encountered in everyday life, playing a part in many industrial and commercial processes, much remains to be discovered about this transformation at a fundamental level.
In 2015, a team led by the University of Michigan’s Sharon Glotzer used high-performance computing at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory [ORNL] to study melting in two-dimensional (2-D) systems, a problem that could yield insights into surface interactions in materials important to technologies like solar panels, as well as into the mechanism behind three-dimensional melting. The team explored how particle shape affects the physics of a solid-to-fluid melting transition in two dimensions.
Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility, the team’s [latest?] work revealed that the shape and symmetry of particles can dramatically affect the melting process (“Shape and symmetry determine two-dimensional melting transitions of hard regular polygons”). This fundamental finding could help guide researchers in search of nanoparticles with desirable properties for energy applications.
There is a video of the ‘melting’ process but I have to confess to finding it a bit enigmatic,
To tackle the problem, Glotzer’s team needed a supercomputer capable of simulating systems of up to 1 million hard polygons, simple particles used as stand-ins for atoms, ranging from triangles to 14-sided shapes. Unlike traditional molecular dynamics simulations that attempt to mimic nature, hard polygon simulations give researchers a pared-down environment in which to evaluate shape-influenced physics.
“Within our simulated 2-D environment, we found that the melting transition follows one of three different scenarios depending on the shape of the systems’ polygons,” University of Michigan research scientist Joshua Anderson said. “Notably, we found that systems made up of hexagons perfectly follow a well-known theory for 2-D melting, something that hasn’t been described until now.”
Shifting Shape Scenarios
In 3-D systems such as a thinning icicle, melting takes the form of a first-order phase transition: collections of molecules within these systems exist in either solid or liquid form, with no in-between, in the presence of latent heat, the energy that fuels a solid-to-fluid phase change. In 2-D systems, such as thin-film materials used in batteries and other technologies, melting can be more complex, sometimes exhibiting an intermediate phase known as the hexatic phase.
The hexatic phase, a state characterized as a halfway point between an ordered solid and a disordered liquid, was first theorized in the 1970s by researchers John Kosterlitz, David Thouless, Burt Halperin, David Nelson, and Peter Young. The phase is a principal feature of the KTHNY theory, a 2-D melting theory posited by the researchers (and named based on the first letters of their last names). In 2016 Kosterlitz and Thouless were awarded the Nobel Prize in Physics, along with physicist Duncan Haldane, for their contributions to 2-D materials research.
At the molecular level, solid, hexatic, and liquid systems are defined by the arrangement of their atoms. In a crystalline solid, two types of order are present: translational and orientational. Translational order describes the well-defined paths between atoms over distances, like blocks in a carefully constructed Jenga tower. Orientational order describes the relational and clustered order shared between atoms and groups of atoms over distances. Think of that same Jenga tower turned askew after several rounds of play. The general shape of the tower remains, but its order is now fragmented.
The hexatic phase has no translational order but possesses orientational order. (A liquid has neither translational nor orientational order but exhibits short-range order, meaning any atom will have some average number of neighbors nearby but with no predictable order.)
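For readers who like to see how this is quantified: the hexatic phase’s sixfold orientational order is commonly measured with the bond-orientational order parameter ψ6, which approaches 1 in magnitude for a perfect hexagonal neighbor shell and 0 for a disordered one. Here’s a minimal sketch (my own illustration, not the team’s code),

```python
import cmath
import math

def psi6(center, neighbors):
    """Sixfold bond-orientational order parameter for one particle.

    |psi6| -> 1 for a perfect hexagonal neighbor shell, -> 0 for a
    disordered (liquid-like) environment.
    """
    total = 0j
    for (nx, ny) in neighbors:
        theta = math.atan2(ny - center[1], nx - center[0])
        total += cmath.exp(6j * theta)
    return total / len(neighbors)

# A perfect hexagonal shell: six neighbors at 60-degree intervals.
hexagon = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
           for k in range(6)]
print(abs(psi6((0.0, 0.0), hexagon)))  # ~1.0: full sixfold order

# A square shell has fourfold, not sixfold, symmetry: |psi6| vanishes.
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(abs(psi6((0.0, 0.0), square)))  # ~0.0
```

In production analyses the neighbors would come from a Voronoi or cutoff-distance neighbor list, and ψ6 would be averaged over all particles.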
On Titan, the team’s HOOMD-blue simulation toolkit used 64 GPUs for each massively parallel Monte Carlo simulation of up to 1 million particles. Researchers explored 11 different shape systems, applying an external pressure to push the particles together. Each system was simulated at 21 different densities, with the lowest densities representing a fluid state and the highest densities a solid state.
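Hard-particle Monte Carlo is conceptually simple even if the production runs are not. Here’s a pared-down, illustrative sketch (mine, not HOOMD-blue) of the acceptance rule for hard disks in a periodic box: with no energy scale, the usual Boltzmann test reduces to a pure overlap check,

```python
import math
import random

def min_image_dist(p, q, box):
    """Distance under periodic boundaries (minimum-image convention)."""
    dx = (p[0] - q[0] + box / 2) % box - box / 2
    dy = (p[1] - q[1] + box / 2) % box - box / 2
    return math.hypot(dx, dy)

def mc_sweep(positions, box, step=0.1, diameter=1.0):
    """One Metropolis sweep over hard disks.

    A trial displacement is accepted exactly when it creates no overlap;
    rejected moves leave the particle where it was.
    """
    accepted = 0
    for i, (x, y) in enumerate(positions):
        trial = ((x + random.uniform(-step, step)) % box,
                 (y + random.uniform(-step, step)) % box)
        if all(min_image_dist(trial, positions[j], box) >= diameter
               for j in range(len(positions)) if j != i):
            positions[i] = trial
            accepted += 1
    return accepted

random.seed(1)
box = 10.0
# A dilute, overlap-free starting grid: 16 disks of diameter 1 in a 10x10 box.
positions = [(1.0 + 2.0 * i, 1.0 + 2.0 * j) for i in range(4) for j in range(4)]
for _ in range(100):
    mc_sweep(positions, box)
print(len(positions))  # particle number is conserved: 16
```

The real runs add polygon rotations, overlap checks between arbitrary convex shapes, and pressure coupling, but the accept/reject core is the same.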
The simulations demonstrated multiple melting scenarios hinging on the polygons’ shape. Systems with polygons of seven sides or more closely followed the melting behavior of hard disks, or circles, exhibiting a continuous phase transition from the solid to the hexatic phase and a first-order phase transition from the hexatic to the liquid phase. A continuous phase transition means a constantly changing area in response to a changing external pressure. A first-order phase transition is characterized by a discontinuity in which the volume jumps across the phase transition in response to the changing external pressure. The team found that pentagons and fourfold pentilles, irregular pentagons with two different edge lengths, exhibit a first-order solid-to-liquid phase transition.
The most significant finding, however, emerged from hexagon systems, which perfectly followed the phase transition described by the KTHNY theory. In this scenario, the particles shift from solid to hexatic and from hexatic to fluid in a perfect continuous phase transition pattern.
“It was actually sort of surprising that no one else has found that until now,” Anderson said, “because it seems natural that the hexagon, with its six sides, and the honeycomb-like hexagonal arrangement would be a perfect match for this theory” in which the hexatic phase generally contains sixfold orientational order.
Glotzer’s team, which recently received a 2017 INCITE allocation, is now applying its leadership-class computing prowess to tackle phase transitions in 3-D. The team is focusing on how fluid particles crystallize into complex colloids—mixtures in which particles are suspended throughout another substance. Common examples of colloids include milk, paper, fog, and stained glass.
“We’re planning on using Titan to study how complexity can arise from these simple interactions, and to do that we’re actually going to look at how the crystals grow and study the kinetics of how that happens,” said Anderson.
Layers of graphene separated by nanotube pillars of boron nitride may be a suitable material to store hydrogen fuel in cars, according to Rice University scientists.
The Department of Energy has set benchmarks for storage materials that would make hydrogen a practical fuel for light-duty vehicles. The Rice lab of materials scientist Rouzbeh Shahsavari determined in a new computational study that pillared boron nitride and graphene could be a candidate.
Shahsavari’s lab had already determined through computer models how tough and resilient pillared graphene structures would be, and later worked boron nitride nanotubes into the mix to model a unique three-dimensional architecture. (Samples of boron nitride nanotubes seamlessly bonded to graphene have been made.)
Just as pillars in a building make space between floors for people, pillars in boron nitride graphene make space for hydrogen atoms. The challenge is to make them enter and stay in sufficient numbers and exit upon demand.
In their latest molecular dynamics simulations, the researchers found that either pillared graphene or pillared boron nitride graphene would offer abundant surface area (about 2,547 square meters per gram) with good recyclable properties under ambient conditions. Their models showed adding oxygen or lithium to the materials would make them even better at binding hydrogen.
They focused the simulations on four variants: pillared structures of boron nitride or pillared boron nitride graphene doped with either oxygen or lithium. At room temperature and in ambient pressure, oxygen-doped boron nitride graphene proved the best, holding 11.6 percent of its weight in hydrogen (its gravimetric capacity) and about 60 grams per liter (its volumetric capacity); it easily beat competing technologies like porous boron nitride, metal oxide frameworks and carbon nanotubes.
At a chilly -321 degrees Fahrenheit, the material held 14.77 percent of its weight in hydrogen.
The Department of Energy’s current target for an economical storage medium is the ability to hold more than 5.5 percent of its weight, and 40 grams per liter, in hydrogen under moderate conditions. The ultimate targets are 7.5 weight percent and 70 grams per liter.
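Putting the reported room-temperature capacities beside those targets makes the comparison easy to check,

```python
# Reported capacities for the oxygen-doped pillared boron nitride graphene
# (from the study described above) versus DOE hydrogen-storage targets.
reported = {"gravimetric_wt_pct": 11.6, "volumetric_g_per_L": 60.0}
doe_current = {"gravimetric_wt_pct": 5.5, "volumetric_g_per_L": 40.0}
doe_ultimate = {"gravimetric_wt_pct": 7.5, "volumetric_g_per_L": 70.0}

meets_current = all(reported[k] > doe_current[k] for k in reported)
meets_ultimate = all(reported[k] > doe_ultimate[k] for k in reported)
print(meets_current)   # True: clears both current targets
print(meets_ultimate)  # False: 60 g/L falls short of the ultimate 70 g/L
```

So the simulated material beats the current targets on both axes but, at room temperature, not yet the ultimate volumetric one.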
Shahsavari said hydrogen atoms adsorbed to the undoped pillared boron nitride graphene, thanks to weak van der Waals forces. When the material was doped with oxygen, the atoms bonded strongly with the hybrid and created a better surface for incoming hydrogen, which Shahsavari said would likely be delivered under pressure and would exit when pressure is released.
“Adding oxygen to the substrate gives us good bonding because of the nature of the charges and their interactions,” he said. “Oxygen and hydrogen are known to have good chemical affinity.”
He said the polarized nature of the boron nitride where it bonds with the graphene and the electron mobility of the graphene itself make the material highly tunable for applications.
“What we’re looking for is the sweet spot,” Shahsavari said, describing the ideal conditions as a balance between the material’s surface area and weight, as well as the operating temperatures and pressures. “This is only practical through computational modeling, because we can test a lot of variations very quickly. It would take experimentalists months to do what takes us only days.”
He said the structures should be robust enough to easily surpass the Department of Energy requirement that a hydrogen fuel tank be able to withstand 1,500 charge-discharge cycles.
Shayeganfar [Farzaneh Shayeganfar], a former visiting scholar at Rice, is an instructor at Shahid Rajaee Teacher Training University in Tehran, Iran.
Caption: Simulations by Rice University scientists show that pillared graphene boron nitride may be a suitable storage medium for hydrogen-powered vehicles. Above, the pink (boron) and blue (nitrogen) pillars serve as spacers for carbon graphene sheets (gray). The researchers showed the material worked best when doped with oxygen atoms (red), which enhanced its ability to adsorb and desorb hydrogen (white). Credit: Lei Tao/Rice University
An Oct. 12, 2016 news item on ScienceDaily makes an exciting announcement, if carbon-dioxide-conversion-to-fuel is one of your pet topics,
In a new twist to waste-to-fuel technology, scientists at the Department of Energy’s Oak Ridge National Laboratory [ORNL] have developed an electrochemical process that uses tiny spikes of carbon and copper to turn carbon dioxide, a greenhouse gas, into ethanol. Their finding, which involves nanofabrication and catalysis science, was serendipitous.
“We discovered somewhat by accident that this material worked,” said ORNL’s Adam Rondinone, lead author of the team’s study published in ChemistrySelect. “We were trying to study the first step of a proposed reaction when we realized that the catalyst was doing the entire reaction on its own.”
The team used a catalyst made of carbon, copper and nitrogen and applied voltage to trigger a complicated chemical reaction that essentially reverses the combustion process. With the help of the nanotechnology-based catalyst which contains multiple reaction sites, the solution of carbon dioxide dissolved in water turned into ethanol with a yield of 63 percent. Typically, this type of electrochemical reaction results in a mix of several different products in small amounts.
“We’re taking carbon dioxide, a waste product of combustion, and we’re pushing that combustion reaction backwards with very high selectivity to a useful fuel,” Rondinone said. “Ethanol was a surprise — it’s extremely difficult to go straight from carbon dioxide to ethanol with a single catalyst.”
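For the curious: reducing two CO2 molecules to one ethanol molecule consumes 12 electrons (2 CO2 + 12 H+ + 12 e− → C2H5OH + 3 H2O), so Faraday’s law gives a quick bound on production for a given current. A back-of-the-envelope sketch (my own illustrative current and duration, not figures from the paper),

```python
# Faraday's-law estimate of ethanol output from an electrochemical cell.
FARADAY = 96485.0          # coulombs per mole of electrons
ELECTRONS_PER_ETHANOL = 12  # 2 CO2 + 12 H+ + 12 e- -> C2H5OH + 3 H2O

def ethanol_moles(charge_coulombs, faradaic_efficiency):
    """Moles of ethanol produced for a given charge at a given selectivity."""
    return faradaic_efficiency * charge_coulombs / (ELECTRONS_PER_ETHANOL * FARADAY)

# Example: 1 ampere flowing for 1 hour at the reported ~63 percent selectivity.
charge = 1.0 * 3600.0       # coulombs
moles = ethanol_moles(charge, 0.63)
grams = moles * 46.07       # molar mass of ethanol, g/mol
print(grams)                # ~0.09 grams of ethanol
```

The 12-electron count is why a single catalyst doing this in one pot is so unusual; most catalysts stall at two-electron products like carbon monoxide or formate.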
The catalyst’s novelty lies in its nanoscale structure, consisting of copper nanoparticles embedded in carbon spikes. This nano-texturing approach avoids the use of expensive or rare metals such as platinum that limit the economic viability of many catalysts.
“By using common materials, but arranging them with nanotechnology, we figured out how to limit the side reactions and end up with the one thing that we want,” Rondinone said.
The researchers’ initial analysis suggests that the spiky textured surface of the catalysts provides ample reactive sites to facilitate the carbon dioxide-to-ethanol conversion.
“They are like 50-nanometer lightning rods that concentrate electrochemical reactivity at the tip of the spike,” Rondinone said.
Given the technique’s reliance on low-cost materials and an ability to operate at room temperature in water, the researchers believe the approach could be scaled up for industrially relevant applications. For instance, the process could be used to store excess electricity generated from variable power sources such as wind and solar.
“A process like this would allow you to consume extra electricity when it’s available to make and store as ethanol,” Rondinone said. “This could help to balance a grid supplied by intermittent renewable sources.”
The researchers plan to refine their approach to improve the overall production rate and further study the catalyst’s properties and behavior.
The US has embarked on a number of what are called “Grand Challenges.” I first came across the concept when reading about the Bill and Melinda Gates (of Microsoft fame) Foundation. I gather these challenges are intended to provide funding for research that advances bold visions.
There is the US National Strategic Computing Initiative established on July 29, 2015 and its first anniversary results were announced one year to the day later. Within that initiative a nanotechnology-inspired Grand Challenge for Future Computing was issued and, according to a July 29, 2016 news item on Nanowerk, a white paper on the topic has been issued (Note: A link has been removed),
Today [July 29, 2016], Federal agencies participating in the National Nanotechnology Initiative (NNI) released a white paper (pdf) describing the collective Federal vision for the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing.
The grand challenge, announced on October 20, 2015, is to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” The white paper describes the technical priorities shared by the agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the research and development (R&D) needed to achieve key technical goals. By coordinating and collaborating across multiple levels of government, industry, academia, and nonprofit organizations, the nanotechnology and computer science communities can look beyond the decades-old approach to computing based on the von Neumann architecture and chart a new path that will continue the rapid pace of innovation beyond the next decade.
“Materials and devices for computing have been and will continue to be a key application domain in the field of nanotechnology. As evident by the R&D topics highlighted in the white paper, this challenge will require the convergence of nanotechnology, neuroscience, and computer science to create a whole new paradigm for low-power computing with revolutionary, brain-like capabilities,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office. …
The white paper was produced as a collaboration by technical staff at the Department of Energy, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, and the Intelligence Community. …
A new materials base may be needed for future electronic hardware. While most of today’s electronics use silicon, this approach is unsustainable if billions of disposable and short-lived sensor nodes are needed for the coming Internet-of-Things (IoT). To what extent can the materials base for the implementation of future information technology (IT) components and systems support sustainability through recycling and bio-degradability? More sustainable materials, such as compostable or biodegradable systems (polymers, paper, etc.) that can be recycled or reused, may play an important role. The potential role for such alternative materials in the fabrication of integrated systems needs to be explored as well. [p. 5]
The basic architecture of computers today is essentially the same as those built in the 1940s—the von Neumann architecture—with separate compute, high-speed memory, and high-density storage components that are electronically interconnected. However, it is well known that continued performance increases using this architecture are not feasible in the long term, with power density constraints being one of the fundamental roadblocks.7 Further advances in the current approach using multiple cores, chip multiprocessors, and associated architectures are plagued by challenges in software and programming models. Thus, research and development is required in radically new and different computing architectures involving processors, memory, input-output devices, and how they behave and are interconnected. [p. 7]
Neuroscience research suggests that the brain is a complex, high-performance computing system with low energy consumption and incredible parallelism. A highly plastic and flexible organ, the human brain is able to grow new neurons, synapses, and connections to cope with an ever-changing environment. Energy efficiency, growth, and flexibility occur at all scales, from molecular to cellular, and allow the brain, from early to late stage, to never stop learning and to act with proactive intelligence in both familiar and novel situations. Understanding how these mechanisms work and cooperate within and across scales has the potential to offer tremendous technical insights and novel engineering frameworks for materials, devices, and systems seeking to perform efficient and autonomous computing. This research focus area is the most synergistic with the national BRAIN Initiative. However, unlike the BRAIN Initiative, where the goal is to map the network connectivity of the brain, the objective here is to understand the nature, methods, and mechanisms for computation, and how the brain performs some of its tasks. Even within this broad paradigm, one can loosely distinguish between neuromorphic computing and artificial neural network (ANN) approaches. The goal of neuromorphic computing is oriented towards a hardware approach to reverse engineering the computational architecture of the brain. On the other hand, ANNs include algorithmic approaches arising from machine learning, which in turn could leverage advancements and understanding in neuroscience as well as novel cognitive, mathematical, and statistical techniques. Indeed, the ultimate intelligent systems may as well be the result of merging existing ANN (e.g., deep learning) and bio-inspired techniques. [p. 8]
As government documents go, this is quite readable.
This atomic force microscopy image of the grainy surface of a perovskite solar cell reveals a new path to much greater efficiency. Individual grains are outlined in black, low-performing facets are red, and high-performing facets are green. A big jump in efficiency could possibly be obtained if the material can be grown so that more high-performing facets develop. (Credit: Berkeley Lab)
It’s always fascinating to observe a trend (or a craze) in science, an endeavour that outsiders (like me) tend to think of as impervious to such vagaries. Perovskite seems to be making its way past the trend/craze phase and moving into a more meaningful phase. From a July 4, 2016 news item on Nanowerk,
Scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have discovered a possible secret to dramatically boosting the efficiency of perovskite solar cells hidden in the nanoscale peaks and valleys of the crystalline material.
Solar cells made from compounds that have the crystal structure of the mineral perovskite have captured scientists’ imaginations. They’re inexpensive and easy to fabricate, like organic solar cells. Even more intriguing, the efficiency at which perovskite solar cells convert photons to electricity has increased more rapidly than any other material to date, starting at three percent in 2009 — when researchers first began exploring the material’s photovoltaic capabilities — to 22 percent today. This is in the ballpark of the efficiency of silicon solar cells.
Now, as reported online July 4, 2016 in the journal Nature Energy (“Facet-dependent photovoltaic efficiency variations in single grains of hybrid halide perovskite”), a team of scientists from the Molecular Foundry and the Joint Center for Artificial Photosynthesis, both at Berkeley Lab, found a surprising characteristic of a perovskite solar cell that could be exploited for even higher efficiencies, possibly up to 31 percent.
Using photoconductive atomic force microscopy, the scientists mapped two properties on the active layer of the solar cell that relate to its photovoltaic efficiency. The maps revealed a bumpy surface composed of grains about 200 nanometers in length, and each grain has multi-angled facets like the faces of a gemstone.
Unexpectedly, the scientists discovered a huge difference in energy conversion efficiency between facets on individual grains. They found poorly performing facets adjacent to highly efficient facets, with some facets approaching the material’s theoretical energy conversion limit of 31 percent.
The scientists say these top-performing facets could hold the secret to highly efficient solar cells, although more research is needed.
“If the material can be synthesized so that only very efficient facets develop, then we could see a big jump in the efficiency of perovskite solar cells, possibly approaching 31 percent,” says Sibel Leblebici, a postdoctoral researcher at the Molecular Foundry.
Leblebici works in the lab of Alexander Weber-Bargioni, who is a corresponding author of the paper that describes this research. Ian Sharp, also a corresponding author, is a Berkeley Lab scientist at the Joint Center for Artificial Photosynthesis. Other Berkeley Lab scientists who contributed include Linn Leppert, Francesca Toma, and Jeff Neaton, the director of the Molecular Foundry.
A team effort
The research started when Leblebici was searching for a new project. “I thought perovskites are the most exciting thing in solar right now, and I really wanted to see how they work at the nanoscale, which has not been widely studied,” she says.
She didn’t have to go far to find the material. For the past two years, scientists at the nearby Joint Center for Artificial Photosynthesis have been making thin films of perovskite-based compounds, and studying their ability to convert sunlight and CO2 into useful chemicals such as fuel. Switching gears, they created perovskite solar cells composed of methylammonium lead iodide. They also analyzed the cells’ performance at the macroscale.
The scientists also made a second set of half cells that didn’t have an electrode layer. They packed eight of these cells on a thin film measuring one square centimeter. These films were analyzed at the Molecular Foundry, where researchers mapped the cells’ surface topography at a resolution of ten nanometers. They also mapped two properties that relate to the cells’ photovoltaic efficiency: photocurrent generation and open circuit voltage.
This was performed using a state-of-the-art atomic force microscopy technique, developed in collaboration with Park Systems, which utilizes a conductive tip to scan the material’s surface. The method also eliminates friction between the tip and the sample. This is important because the material is so rough and soft that friction can damage the tip and sample, and cause artifacts in the photocurrent.
Surprise discovery could lead to better solar cells
The resulting maps revealed an order of magnitude difference in photocurrent generation, and a 0.6-volt difference in open circuit voltage, between facets on the same grain. In addition, facets with high photocurrent generation had high open circuit voltage, and facets with low photocurrent generation had low open circuit voltage.
“This was a big surprise. It shows, for the first time, that perovskite solar cells exhibit facet-dependent photovoltaic efficiency,” says Weber-Bargioni.
Adds Toma, “These results open the door to exploring new ways to control the development of the material’s facets to dramatically increase efficiency.”
In practice, the facets behave like billions of tiny solar cells, all connected in parallel. As the scientists discovered, some cells operate extremely well and others very poorly. In this scenario, the current flows towards the bad cells, lowering the overall performance of the material. But if the material can be optimized so that only highly efficient facets interface with the electrode, the losses incurred by the poor facets would be eliminated.
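That parallel-connection picture can be made concrete with the textbook ideal single-diode solar-cell model. In this sketch (mine, with invented facet parameters, not values from the paper), the open-circuit voltage of a high-performing facet wired in parallel with a poor, leaky one is dragged well below the good facet’s own value,

```python
import math

VT = 0.02585  # thermal voltage at room temperature, volts

def cell_current(v, photocurrent, saturation):
    """Ideal single-diode model: I(V) = Iph - I0 * (exp(V / VT) - 1)."""
    return photocurrent - saturation * math.expm1(v / VT)

def open_circuit_voltage(cells):
    """Bisect for the voltage where the net current of parallel cells is zero."""
    lo, hi = 0.0, 2.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(cell_current(mid, *c) for c in cells) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical facet parameters (photocurrent in A, saturation current in A):
# a high-performing facet and one with tenfold-lower photocurrent and a
# much leakier diode.
good = (1.0, 1e-19)
bad = (0.1, 1e-13)
v_good = open_circuit_voltage([good])
v_bad = open_circuit_voltage([bad])
v_pair = open_circuit_voltage([good, bad])
print(v_good, v_bad, v_pair)  # the pair's Voc sits well below the good facet's
```

At open circuit the poor facet sinks the current the good facet generates, which is exactly the loss mechanism the researchers hope to eliminate by growing only efficient facets against the electrode.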
“This means, at the macroscale, the material could possibly approach its theoretical energy conversion limit of 31 percent,” says Sharp.
A theoretical model that describes the experimental results predicts these facets should also impact the emission of light when used as an LED. …
The Molecular Foundry is a DOE Office of Science User Facility located at Berkeley Lab. The Joint Center for Artificial Photosynthesis is a DOE Energy Innovation Hub led by the California Institute of Technology in partnership with Berkeley Lab.
Here’s a link to and a citation for the paper,
Facet-dependent photovoltaic efficiency variations in single grains of hybrid halide perovskite by Sibel Y. Leblebici, Linn Leppert, Yanbo Li, Sebastian E. Reyes-Lillo, Sebastian Wickenburg, Ed Wong, Jiye Lee, Mauro Melli, Dominik Ziegler, Daniel K. Angell, D. Frank Ogletree, Paul D. Ashby, Francesca M. Toma, Jeffrey B. Neaton, Ian D. Sharp, & Alexander Weber-Bargioni. Nature Energy 1, Article number: 16093 (2016). doi:10.1038/nenergy.2016.93 Published online 04 July 2016
This paper is behind a paywall.
Dexter Johnson’s July 6, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) presents his take on the impact that this new finding may have,
The rise of the crystal perovskite as a potential replacement for silicon in photovoltaics has been impressive over the last decade, with its conversion efficiency improving from 3.8 to 22.1 percent over that time period. Nonetheless, there has been a vague sense that this rise is beginning to peter out of late, largely because when a solar cell made from perovskite gets larger than 1 square centimeter the best conversion efficiency had been around 15.6 percent. …
A June 14, 2016 news item on ScienceDaily focuses on memristors. (It’s been about two months since my last memristor posting on April 22, 2016 regarding electronic synapses and neural networks). This piece announces new insight into how memristors function at the atomic scale,
In experiments at two Department of Energy national labs — SLAC National Accelerator Laboratory and Lawrence Berkeley National Laboratory — scientists at Hewlett Packard Enterprise (HPE) [also referred to as HP Labs or Hewlett Packard Laboratories] have experimentally confirmed critical aspects of how a new type of microelectronic device, the memristor, works at an atomic scale.
This result is an important step in designing these solid-state devices for use in future computer memories that operate much faster, last longer and use less energy than today’s flash memory. …
“We need information like this to be able to design memristors that will succeed commercially,” said Suhas Kumar, an HPE scientist and first author on the group’s technical paper.
The memristor was proposed theoretically [by Dr. Leon Chua] in 1971 as the fourth basic electrical device element alongside the resistor, capacitor and inductor. At its heart is a tiny piece of a transition metal oxide sandwiched between two electrodes. Applying a positive or negative voltage pulse dramatically increases or decreases the memristor’s electrical resistance. This behavior makes it suitable for use as a “non-volatile” computer memory that, like flash memory, can retain its state without being refreshed with additional power.
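The pinched, history-dependent resistance is often introduced through the classic linear ion-drift model from HP’s 2008 memristor work. This sketch uses that textbook model with illustrative parameters; it is not a model of the tantalum-oxide devices probed in this study,

```python
# Linear ion-drift memristor model (Strukov et al., 2008), illustrative only.
R_ON, R_OFF = 100.0, 16000.0  # bounding resistances, ohms (illustrative)
MOBILITY = 1e-14              # dopant mobility, m^2 s^-1 V^-1 (illustrative)
DEPTH = 1e-8                  # oxide thickness, m (illustrative)

def simulate(pulses, x=0.1, dt=1e-3):
    """Integrate dx/dt = mu * R_ON * i / D^2 over a train of voltage pulses.

    The state x in [0, 1] is the fraction of the oxide that is
    oxygen-deficient and conductive; the device resistance interpolates
    between R_ON and R_OFF as x grows or shrinks.
    """
    for v in pulses:
        r = R_ON * x + R_OFF * (1 - x)
        i = v / r                            # instantaneous current
        x += MOBILITY * R_ON * i / DEPTH ** 2 * dt
        x = min(max(x, 0.0), 1.0)            # state is bounded by the device
    return R_ON * x + R_OFF * (1 - x)

r_start = R_ON * 0.1 + R_OFF * 0.9
r_set = simulate([1.0] * 1000)     # positive pulses lower the resistance (SET)
r_reset = simulate([-1.0] * 1000)  # negative pulses raise it (RESET)
print(r_start, r_set, r_reset)
```

The key nonvolatile property is visible in the code: when the pulses stop, x, and hence the resistance, simply stays where it is.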
Over the past decade, an HPE group led by senior fellow R. Stanley Williams has explored memristor designs, materials and behavior in detail. Since 2009 they have used intense synchrotron X-rays to reveal the movements of atoms in memristors during switching. Despite advances in understanding the nature of this switching, critical details that would be important in designing commercially successful circuits remained controversial. For example, the forces that move the atoms, resulting in dramatic resistance changes during switching, remain under debate.
In recent years, the group examined memristors made with oxides of titanium, tantalum and vanadium. Initial experiments revealed that switching in the tantalum oxide devices could be controlled most easily, so it was chosen for further exploration at two DOE Office of Science User Facilities – SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and Berkeley Lab’s Advanced Light Source (ALS).
At ALS, the HPE researchers mapped the positions of oxygen atoms before and after switching. For this, they used a scanning transmission X-ray microscope and an apparatus they built to precisely control the position of their sample and the timing and intensity of the 500-electronvolt ALS X-rays, which were tuned to see oxygen.
The experiments revealed that even weak voltage pulses create a thin conductive path through the memristor. During the pulse the path heats up, which creates a force that pushes oxygen atoms away from the path, making it even more conductive. Reversing the voltage pulse resets the memristor by pulling some of the oxygen atoms back into the conducting path, thereby increasing the device’s resistance. The memristor’s resistance changes between 10-fold and 1 million-fold, depending on operating parameters like the voltage-pulse amplitude. This resistance change is dramatic enough to exploit commercially.
To be sure of their conclusion, the researchers also needed to understand if the tantalum atoms were moving along with the oxygen during switching. Imaging tantalum required higher-energy, 10,000-electronvolt X-rays, which they obtained at SSRL’s Beam Line 6-2. In a single session there, they determined that the tantalum remained stationary.
“That sealed the deal, convincing us that our hypothesis was correct,” said HPE scientist Catherine Graves, who had worked at SSRL as a Stanford graduate student. She added that discussions with SLAC experts were critical in guiding the HPE team toward the X-ray techniques that would allow them to see the tantalum accurately.
Kumar said the most promising aspect of the tantalum oxide results was that the scientists saw no degradation in switching over more than a billion voltage pulses of a magnitude suitable for commercial use. He added that this knowledge helped his group build memristors that lasted nearly a billion switching cycles, about a thousand-fold improvement.
“This is much longer endurance than is possible with today’s flash memory devices,” Kumar said. “In addition, we also used much higher voltage pulses to accelerate and observe memristor failures, which is also important in understanding how these devices work. Failures occurred when oxygen atoms were forced so far away that they did not return to their initial positions.”
“Transistors are big and bulky compared to memristors,” he said. “Memristors are also much better suited for creating the neuron-like voltage spikes that characterize neuromorphic circuits.”
The researchers have provided an animation illustrating how memristors can fail,
This animation shows how millions of high-voltage switching cycles can cause memristors to fail. The high-voltage switching eventually creates regions that are permanently rich (blue pits) or deficient (red peaks) in oxygen and cannot be switched back. Switching at lower voltages that would be suitable for commercial devices did not show this performance degradation. These observations allowed the researchers to develop materials processing and operating conditions that improved the memristors’ endurance by nearly a thousand times. (Suhas Kumar) Courtesy: SLAC
Some of the ‘memristor story’ is contested and you can find a brief overview of the discussion in this Wikipedia memristor entry in the section on ‘definition and criticism’. There is also a history of the memristor which dates back to the 19th century featured in my May 22, 2012 posting.
Swiss and US scientists have developed a nanoporous crystal that could be used to clean up nuclear waste gases according to a June 13, 2016 news item on Nanowerk (Note: A link has been removed),
An international team of scientists at EPFL [École polytechnique fédérale de Lausanne in Switzerland] and the US have discovered a material that can clear out radioactive waste from nuclear plants more efficiently, cheaply, and safely than current methods.
Nuclear energy is one of the cheapest alternatives to carbon-based fossil fuels. But nuclear-fuel reprocessing plants generate waste gas that is currently too expensive and dangerous to deal with. Scanning hundreds of thousands of materials, scientists led by EPFL and their US colleagues have now discovered a material that can absorb nuclear waste gases much more efficiently, cheaply and safely. The work is published in Nature Communications (“Metal–organic framework with optimally selective xenon adsorption and separation”).
Nuclear-fuel reprocessing plants generate volatile radionuclides such as xenon and krypton, which escape in the so-called “off-gas” of these facilities – the gases emitted as byproducts of the chemical process. Current ways of capturing and clearing out these gases involve distillation at very low temperatures, which is expensive in terms of both energy and capital costs, and poses a risk of explosion.
Scientists led by Berend Smit’s lab at EPFL (Sion) and colleagues in the US have now identified a material that can be used as an efficient, cheaper, and safer alternative to separate xenon and krypton – and at room temperature. The material, abbreviated as SBMOF-1, is a nanoporous crystal and belongs to a class of materials that are currently used to clear out CO2 emissions and other dangerous pollutants. These materials are also very versatile, and scientists can tweak them to self-assemble into ordered, pre-determined crystal structures. In this way, they can synthesize millions of tailor-made materials that can be optimized for gas storage and separation, catalysis, chemical sensing and optics.
The scientists carried out high-throughput screening of large material databases of over 125,000 candidates. To do this, they used molecular simulations to find structures that can separate xenon and krypton, and under conditions that match those involved in reprocessing nuclear waste.
Because xenon has a much shorter half-life than krypton – a month versus a decade – the scientists had to find a material that would be selective for both but would capture them separately. As xenon is used in commercial lighting, propulsion, imaging, anesthesia and insulation, it can also be sold back into the chemical market to offset costs.
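The half-life gap the article cites (roughly a month for xenon versus roughly a decade for krypton) is what makes separate capture worthwhile: stored xenon decays to harmless levels quickly enough to be resold, while krypton must be held long-term. A quick sketch of the standard decay formula makes the difference concrete; the half-life values below are the article’s rough figures, not isotope-specific data.

```python
def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Radioactive decay: N(t)/N0 = 2^(-t / T_half)."""
    return 2.0 ** (-t_days / half_life_days)

# Rough figures from the article: ~1 month (Xe) vs ~1 decade (Kr).
XE_HALF_LIFE = 30.0          # days
KR_HALF_LIFE = 10 * 365.25   # days

# After one year in storage:
xe_left = fraction_remaining(365.25, XE_HALF_LIFE)
kr_left = fraction_remaining(365.25, KR_HALF_LIFE)
print(f"Xe remaining after 1 year: {xe_left:.2e}")  # effectively gone (~0.02%)
print(f"Kr remaining after 1 year: {kr_left:.1%}")  # still over 90%
```

After a single year, captured xenon has decayed through about a dozen half-lives while krypton has barely begun its first, which is why the two gases need such different handling.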
The scientists identified and confirmed that SBMOF-1 shows remarkable xenon capturing capacity and xenon/krypton selectivity under nuclear-plant conditions and at room temperature.
Researchers are investigating a new material that might help in nuclear fuel recycling and waste reduction by capturing certain gases released during reprocessing. Conventional technologies to remove these radioactive gases operate at extremely low, energy-intensive temperatures. By working at ambient temperature, the new material has the potential to save energy, make reprocessing cleaner and less expensive. The reclaimed materials can also be reused commercially.
Appearing in Nature Communications, the work is a collaboration between experimentalists and computer modelers exploring the characteristics of materials known as metal-organic frameworks.
“This is a great example of computer-inspired material discovery,” said materials scientist Praveen Thallapally of the Department of Energy’s Pacific Northwest National Laboratory. “Usually the experimental results are more realistic than computational ones. This time, the computer modeling showed us something the experiments weren’t telling us.”
Recycling nuclear fuel can reuse uranium and plutonium — the majority of the used fuel — that would otherwise be destined for waste. Researchers are exploring technologies that enable safe, efficient, and reliable recycling of nuclear fuel for use in the future.
A multi-institutional, international collaboration is studying materials to replace costly, inefficient recycling steps. One important step is collecting radioactive gases xenon and krypton, which arise during reprocessing. To capture xenon and krypton, conventional technologies use cryogenic methods in which entire gas streams are brought to a temperature far below where water freezes — such methods are energy intensive and expensive.
Thallapally, working with Maciej Haranczyk and Berend Smit of Lawrence Berkeley National Laboratory [LBNL] and others, has been studying materials called metal-organic frameworks, also known as MOFs, that could potentially trap xenon and krypton without having to use cryogenics.
These materials have tiny pores inside, so small that often only a single molecule can fit inside each pore. When one gas species has a higher affinity for the pore walls than the others, metal-organic frameworks can be used to separate gaseous mixtures by selectively adsorbing that species.
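One common way to put a number on that preferential adsorption, in the dilute limit, is the ratio of the two gases’ Henry coefficients in the framework. The coefficients below are illustrative placeholders, not measured values for SBMOF-1:

```python
def henry_selectivity(k_target: float, k_other: float) -> float:
    """Ideal dilute-limit selectivity: ratio of Henry coefficients.
    A value much greater than 1 means the framework adsorbs the
    target gas (here, Xe) preferentially over the other (Kr)."""
    return k_target / k_other

# Illustrative (made-up) Henry coefficients in mol / (kg * Pa):
k_xe, k_kr = 4.0e-5, 2.5e-6
print(f"Xe/Kr selectivity: {henry_selectivity(k_xe, k_kr):.0f}")  # -> 16
```

In a screening campaign, a ratio like this is one of the cheap figures of merit computed per material before committing to more expensive simulations.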
To find the best MOF for xenon and krypton separation, computational chemists led by Haranczyk and Smit screened 125,000 possible MOFs for their ability to trap the gases. Although these gases can come in radioactive varieties, they are part of a group of chemically inert elements called “noble gases.” The team used computing resources at NERSC, the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at LBNL.
“Identifying the optimal material for a given process, out of thousands of possible structures, is a challenge due to the sheer number of materials. Given that the characterization of each material can take up to a few hours of simulations, the entire screening process may fill a supercomputer for weeks,” said Haranczyk. “Instead, we developed an approach to assess the performance of materials based on their easily computable characteristics. In this case, seven different characteristics were necessary for predicting how the materials behaved, and our team’s grad student Cory Simon’s application of machine learning techniques greatly sped up the material discovery process by eliminating those that didn’t meet the criteria.”
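The two-stage strategy Haranczyk describes – prune candidates on cheap, easily computed characteristics, and reserve hours-long simulations for the survivors – can be sketched as follows. The descriptor names, thresholds, and filter logic here are hypothetical stand-ins; the actual study used seven characteristics and machine-learning models:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    pore_diameter: float  # angstroms; a cheap geometric descriptor
    surface_area: float   # m^2/g

def cheap_filter(c: Candidate) -> bool:
    """Discard structures whose pores clearly cannot admit a Xe atom
    (kinetic diameter ~4.1 angstroms) or are far too large to bind it
    snugly. Thresholds are illustrative, not the study's criteria."""
    return 4.1 <= c.pore_diameter <= 8.0

def expensive_simulation(c: Candidate) -> float:
    """Placeholder for an hours-long molecular simulation of Xe/Kr
    uptake, run only on candidates that survive the cheap filter."""
    raise NotImplementedError

candidates = [
    Candidate("MOF-A", pore_diameter=3.0, surface_area=900.0),
    Candidate("MOF-B", pore_diameter=5.0, surface_area=1200.0),
]
survivors = [c for c in candidates if cheap_filter(c)]
print([c.name for c in survivors])  # -> ['MOF-B']
```

Filtering 125,000 structures this way means the supercomputer time goes only to the small fraction of materials whose geometry makes them plausible xenon traps in the first place.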
The team’s models identified the MOF that trapped xenon most selectively and had a pore size close to the size of a xenon atom — SBMOF-1, which they then tested in the lab at PNNL.
After optimizing the preparation of SBMOF-1, Thallapally and his team at PNNL tested the material by running a mixture of gases through it — including a non-radioactive form of xenon and krypton — and measuring what came out the other end. Oxygen, helium, nitrogen, krypton, and carbon dioxide all emerged from the material ahead of xenon. This indicated that xenon becomes trapped within SBMOF-1’s pores until the gas saturates the material.
Other tests also showed that in the absence of xenon, SBMOF-1 captures krypton. During actual separations, then, operators would pass the gas streams through SBMOF-1 twice to capture both gases.
The team also tested SBMOF-1’s ability to hang onto xenon in conditions of high humidity. Humidity interferes with cryogenics, and gases must be dehydrated before putting them through the ultra-cold method, another time-consuming expense. SBMOF-1, however, performed quite admirably, retaining more than 85 percent of the amount of xenon in high humidity as it did in dry conditions.
The final step in collecting xenon or krypton gas would be to put the MOF material under a vacuum, which sucks the gas out of the molecular cages for safe storage. A last laboratory test examined how stable the material was by repeatedly filling it up with xenon gas and then vacuuming out the xenon. After 10 cycles of this, SBMOF-1 collected just as much xenon as the first cycle, indicating a high degree of stability for long-term use.
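The cycling test described above amounts to comparing the material’s uptake on the last fill/vacuum cycle against the first. A minimal sketch of that check, using hypothetical uptake numbers (the article reports only that capacity was unchanged after 10 cycles):

```python
def capacity_retention(capacities: list[float]) -> float:
    """Fraction of first-cycle adsorption capacity retained on the
    final cycle; a value near 1.0 indicates a stable sorbent."""
    return capacities[-1] / capacities[0]

# Hypothetical Xe uptake (mmol/g) over 10 fill/vacuum cycles:
cycles = [1.35, 1.35, 1.34, 1.35, 1.36, 1.35, 1.34, 1.35, 1.35, 1.35]
print(f"Retention after {len(cycles)} cycles: {capacity_retention(cycles):.1%}")
```

A chemically inert, purely physical trapping mechanism, as Thallapally notes, is what makes retention this flat plausible over many cycles.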
Thallapally attributes this stability to the manner in which SBMOF-1 interacts with xenon. Rather than chemical reactions between the molecular cages and the gases, the relationship is purely physical. The material can last a lot longer without constantly going through chemical reactions, he said.
A model finding
Although the researchers showed that SBMOF-1 is a good candidate for nuclear fuel reprocessing, getting these results wasn’t smooth sailing. In the lab, the researchers had followed a protocol previously worked out at Stony Brook University to prepare SBMOF-1. Part of that protocol requires them to “activate” SBMOF-1 by heating it up to 300 degrees Celsius, three times the boiling point of water.
Activation cleans out material left in the pores from MOF synthesis. Laboratory tests of the activated SBMOF-1, however, showed the material didn’t behave as well as it should, based on the computer modeling results.
The researchers at PNNL repeated the lab experiments. This time, however, they activated SBMOF-1 at a lower temperature, 100 degrees Celsius — the boiling point of water. Subjecting the material to the same lab tests, the researchers found SBMOF-1 behaving as expected, and better than at the higher activation temperature.
But why? To figure out where the discrepancy came from, the researchers modeled what happened to SBMOF-1 at 300 degrees Celsius. Unexpectedly, the pores squeezed in on themselves.
“When we heated the crystal that high, atoms within the pore tilted and partially blocked the pores,” said Thallapally. “The xenon doesn’t fit.”
Armed with these new computational and experimental insights, the researchers can explore SBMOF-1 and other MOFs further for nuclear fuel recycling. These MOFs might also be able to capture other noble gases such as radon, a gas known to pool in some basements.
Researchers hailed from several other institutions as well as those listed earlier, including University of California, Berkeley, Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, Brookhaven National Laboratory, and IMDEA Materials Institute in Spain. This work was supported by the [US] Department of Energy Offices of Nuclear Energy and Science.
Here’s an image the researchers have provided to illustrate their work,
Caption: The crystal structure of SBMOF-1 (green = Ca, yellow = S, red = O, gray = C, white = H). The light blue surface is a visualization of the one-dimensional channel that SBMOF-1 creates for the gas molecules to move through. The darker blue surface illustrates where a Xe atom sits in the pores of SBMOF-1 when it adsorbs. Credit: Berend Smit/EPFL/University of California Berkeley
Final comment: this is the second time in the last month I’ve stumbled across more positive approaches to nuclear energy. The first time was a talk (Why Nuclear Power is Necessary) held in Vancouver, Canada in May 2016 (details here). I’m not trying to suggest anything unduly sinister, but it is interesting since, for most of my adult life, nuclear power has been viewed with fear and suspicion.