Tag Archives: Oak Ridge National Laboratory (ORNL)

There’s no ‘I’ in team: coaching scientists to work together

While it’s true enough in English, where you don’t spell the word team with the letter ‘I’, that’s not the case in French, where the word is ‘équipe’. It makes me wonder how many other languages in the world have an ‘I’ in team.

Moving on. This English-language saying is true enough in its way, but there is no team unless you have a group of ‘I’s, and the trick is getting them to work together, as a July 18, 2019 Northwestern University news release (received via email) about a new online training tool notes,

Coaching scientists to play well together

Free tool shows how to avoid fights over data and authorship conflicts   

‘You stole my idea’ or ‘I’m not getting credit for my work’ are common disputes
Only tool validated by research to help scientists collaborate smoothly
Many NSF [US National Science Foundation] and NIH [US National Institutes of Health] grants now require applicants to show readiness for team science
Scientists can’t do it on their own

CHICAGO — When scientists from different disciplines collaborate – as is increasingly necessary to confront the complexity of challenging research problems – interpersonal tussles often arise. One scientist may accuse another of stealing her ideas. Or, a researcher may feel he is not getting credit for his work or doesn’t have access to important data. 
 
“Interdisciplinary team science is now the state of the art across all branches of science and engineering,” said Bonnie Spring, professor of preventive medicine at Northwestern University Feinberg School of Medicine. “But very few scientists have been trained to work with others outside of their own disciplinary silo.”
 
The skill is critical because many National Institute[s] of Health and National Science Foundation grants require applicants to show readiness for team science.
 
A free, online training tool developed by Northwestern — teamscience.net — has been proven to help scientists develop skills to work with other scientists outside their own discipline. 
 
A new study led by Spring showed scientists who completed the program’s modules – called COALESCE – significantly boosted their knowledge about team science and increased their self-confidence about being able to successfully work in scientific teams. Most people who completed one or more modules (84%) said that the experience of taking the modules was very likely to positively impact their future research.
 
The study will be published July 18 [2019] in the Journal of Clinical and Translational Science.
 
There are few training resources to teach scientists how to collaborate, and the ones that are available don’t have evidence of their effectiveness. Teamscience.net is the only free, validated-by-research tool available to anyone at any time. 
 
Almost 1,000 of the COALESCE users opted voluntarily to respond to questions about the learning modules, providing information about how taking each module influenced team science knowledge, skills and attitudes.
 
‘You stole my idea’
 
The most common area of dispute among collaborating scientists is authorship concerns, such as accusations that one person stole ideas from another or that a contributor was not getting credit for his or her work, the study authors said. Other disputes arise around access to and analysis of data, utilization of materials or resources and the general direction of the research itself. Underlying all of these issues is a common failure to prepare for working collaboratively with other scientists. 
 
“Preparing in advance before starting to collaborate, often through the creation of a formal collaboration agreement document, is the best way to head off these types of disputes,” said Angela Pfammatter, assistant professor of preventive medicine at Feinberg and a coauthor on the paper.
  
Spring suggested “having scientists discuss their expectations of one another and the collaboration to prevent acrimonious conflicts.” 
 
Skills to play well together
 
These skills are critical to a successful scientific team, the authors said: 

The ability to choose team members who have the right mix of expertise, temperament and accessibility to round out a team. 
The ability to anticipate what could go wrong and to develop contingency plans in advance. 
The ability to manage conflict within the team 

The teamscience.net modules help scientists acquire these skills by letting them interact with different problem scenarios that can arise in team-based research. Scientists can try out different solutions and learn from mistakes in a safe, online environment. 
 
More than 16,000 people have accessed the resource in the past six years.  Demand for team science training is expected to increase as interdisciplinary teams set out to tackle some of science’s most challenging problems. 
 
Other Northwestern authors on the paper are Ekaterina Klyachko, Phillip Rak, H. Gene McFadden, Juned Siddique and Leland Bardsley. 
 
Funding support for COALESCE is from the National Institutes of Health, National Center for Advancing Translational Sciences grants 3UL1RR025741 and UL1TR001422 and its Office of Behavioral and Social Sciences Research.

I once got caught here on this blog between two warring scientists. My August 24, 2015 posting was a pretty standard one for me. Initially, it was one of my more minimalistic pieces, with a copy of the text from a university news release announcing the research and a link to the academic paper. I can’t remember if the problem was which scientist was listed first and which was listed last, but one of them took exception and contacted me to explain how it was wrong. (Note: These decisions are not made by me.) I did my best to fix whatever the problem was and then the other scientist contacted me. After the dust settled, I ended up with a dog’s breakfast for my posting and a new policy.

Getting back to COALESCE: I wish the Northwestern University researchers all the best as they look for ways to help scientists work together more smoothly and cooperatively.

Here’s a link to and a citation for the paper,

Online, cross-disciplinary team science training for health and medical professionals: Evaluation of COALESCE (teamscience.net) by Bonnie Spring, Ekaterina A. Klyachko, Phillip W. Rak, H. Gene McFadden, Donald Hedeker, Juned Siddique, Leland R. Bardsley, and Angela Fidler Pfammatter. Journal of Clinical and Translational Science DOI: https://doi.org/10.1017/cts.2019.383 Published online by Cambridge University Press: 18 July 2019

This paper is open access.

Cannibalistic nanostructures

I think this form of ‘cannibalism’ could also be described as a form of ‘self-assembly’. That said, here is an August 31, 2018 news item on ScienceDaily announcing ‘cannibalistic’ materials,

Scientists at the [US] Department of Energy’s [DOE] Oak Ridge National Laboratory [ORNL] induced a two-dimensional material to cannibalize itself for atomic “building blocks” from which stable structures formed.

The findings, reported in Nature Communications, provide insights that may improve design of 2D materials for fast-charging energy-storage and electronic devices.

An August 31, 2018 DOE/Oak Ridge National Laboratory news release (also on EurekAlert), which originated the news item, provides more detail (Note: Links have been removed),

“Under our experimental conditions, titanium and carbon atoms can spontaneously form an atomically thin layer of 2D transition-metal carbide, which was never observed before,” said Xiahan Sang of ORNL.

He and ORNL’s Raymond Unocic led a team that performed in situ experiments using state-of-the-art scanning transmission electron microscopy (STEM), combined with theory-based simulations, to reveal the mechanism’s atomistic details.

“This study is about determining the atomic-level mechanisms and kinetics that are responsible for forming new structures of a 2D transition-metal carbide such that new synthesis methods can be realized for this class of materials,” Unocic added.

The starting material was a 2D ceramic called a MXene (pronounced “max een”). Unlike most ceramics, MXenes are good electrical conductors because they are made from alternating atomic layers of carbon or nitrogen sandwiched within transition metals like titanium.

The research was a project of the Fluid Interface Reactions, Structures and Transport (FIRST) Center, a DOE Energy Frontier Research Center that explores fluid–solid interface reactions that have consequences for energy transport in everyday applications. Scientists conducted experiments to synthesize and characterize advanced materials and performed theory and simulation work to explain observed structural and functional properties of the materials. New knowledge from FIRST projects provides guideposts for future studies.

The high-quality material used in these experiments was synthesized by Drexel University scientists, in the form of five-ply single-crystal monolayer flakes of MXene. The flakes were taken from a parent crystal called “MAX,” which contains a transition metal denoted by “M”; an element such as aluminum or silicon, denoted by “A”; and either a carbon or nitrogen atom, denoted by “X.” The researchers used an acidic solution to etch out the monoatomic aluminum layers, exfoliate the material and delaminate it into individual monolayers of a titanium carbide MXene (Ti3C2).

The ORNL scientists suspended a large MXene flake on a heating chip with holes drilled in it so no support material, or substrate, interfered with the flake. Under vacuum, the suspended flake was exposed to heat and irradiated with an electron beam to clean the MXene surface and fully expose the layer of titanium atoms.

MXenes are typically inert because their surfaces are covered with protective functional groups—oxygen, hydrogen and fluorine atoms that remain after acid exfoliation. After protective groups are removed, the remaining material activates. Atomic-scale defects—“vacancies” created when titanium atoms are removed during etching—are exposed on the outer ply of the monolayer. “These atomic vacancies are good initiation sites,” Sang said. “It’s favorable for titanium and carbon atoms to move from defective sites to the surface.” In an area with a defect, a pore may form when atoms migrate.

“Once those functional groups are gone, now you’re left with a bare titanium layer (and underneath, alternating carbon, titanium, carbon, titanium) that’s free to reconstruct and form new structures on top of existing structures,” Sang said.

High-resolution STEM imaging proved that atoms moved from one part of the material to another to build structures. Because the material feeds on itself, the growth mechanism is cannibalistic.

“The growth mechanism is completely supported by density functional theory and reactive molecular dynamics simulations, thus opening up future possibilities to use these theory tools to determine the experimental parameters required for synthesizing specific defect structures,” said Adri van Duin of Penn State [Pennsylvania State University].

Most of the time, only one additional layer [of carbon and titanium] grew on a surface. The material changed as atoms built new layers. Ti3C2 turned into Ti4C3, for example.

“These materials are efficient at ionic transport, which lends itself well to battery and supercapacitor applications,” Unocic said. “How does ionic transport change when we add more layers to nanometer-thin MXene sheets?” This question may spur future studies.

“Because MXenes containing molybdenum, niobium, vanadium, tantalum, hafnium, chromium and other metals are available, there are opportunities to make a variety of new structures containing more than three or four metal atoms in cross-section (the current limit for MXenes produced from MAX phases),” Yury Gogotsi of Drexel University added. “Those materials may show different useful properties and create an array of 2D building blocks for advancing technology.”

At ORNL’s Center for Nanophase Materials Sciences (CNMS), Yu Xie, Weiwei Sun and Paul Kent performed first-principles theory calculations to explain why these materials grew layer by layer instead of forming alternate structures, such as squares. Xufan Li and Kai Xiao helped understand the growth mechanism, which minimizes surface energy to stabilize atomic configurations. Penn State scientists conducted large-scale dynamical reactive force field simulations showing how atoms rearranged on surfaces, confirming defect structures and their evolution as observed in experiments.

The researchers hope the new knowledge will help others grow advanced materials and generate useful nanoscale structures.

Here’s a link to and a citation for the paper,

In situ atomistic insight into the growth mechanisms of single layer 2D transition metal carbides by Xiahan Sang, Yu Xie, Dundar E. Yilmaz, Roghayyeh Lotfi, Mohamed Alhabeb, Alireza Ostadhossein, Babak Anasori, Weiwei Sun, Xufan Li, Kai Xiao, Paul R. C. Kent, Adri C. T. van Duin, Yury Gogotsi, & Raymond R. Unocic. Nature Communications volume 9, Article number: 2266 (2018) DOI: https://doi.org/10.1038/s41467-018-04610-0 Published 11 June 2018

This paper is open access.

Liquid circuitry, shape-shifting fluids and more

I’d have to see it to believe it but researchers at the US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory (LBNL) have developed a new kind of ‘bijel’ which would allow for some pretty nifty robotics. From a Sept. 25, 2017 news item on ScienceDaily,

A new two-dimensional film, made of polymers and nanoparticles and developed by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), can direct two different non-mixing liquids into a variety of exotic architectures. This finding could lead to soft robotics, liquid circuitry, shape-shifting fluids, and a host of new materials that use soft, rather than solid, substances.

The study, reported today in the journal Nature Nanotechnology, presents the newest entry in a class of substances known as bicontinuous jammed emulsion gels, or bijels, which hold promise as a malleable liquid that can support catalytic reactions, electrical conductivity, and energy conversion.

A Sept. 25, 2017 LBNL news release (also on EurekAlert), which originated the news item, expands on the theme,

Bijels are typically made of immiscible, or non-mixing, liquids. People who shake their bottle of vinaigrette before pouring the dressing on their salad are familiar with such liquids. As soon as the shaking stops, the liquids start to separate again, with the lower density liquid – often oil – rising to the top.

Trapping, or jamming, particles where these immiscible liquids meet can prevent the liquids from completely separating, stabilizing the substance into a bijel. What makes bijels remarkable is that, rather than just making the spherical droplets that we normally see when we try to mix oil and water, the particles at the interface shape the liquids into complex networks of interconnected fluid channels.

Bijels are notoriously difficult to make, however, involving exact temperatures at precisely timed stages. In addition, the liquid channels are normally more than 5 micrometers across, making them too large to be useful in energy conversion and catalysis.

“Bijels have long been of interest as next-generation materials for energy applications and chemical synthesis,” said study lead author Caili Huang. “The problem has been making enough of them, and with features of the right size. In this work, we crack that problem.”

Huang started the work as a graduate student with Thomas Russell, the study’s principal investigator, at Berkeley Lab’s Materials Sciences Division, and he continued the project as a postdoctoral researcher at DOE’s Oak Ridge National Laboratory.

Creating a new bijel recipe

The method described in this new study simplifies the bijel process by first using specially coated particles about 10-20 nanometers in diameter. The smaller-sized particles line the liquid interfaces much more quickly than the ones used in traditional bijels, making the smaller channels that are highly valued for applications.

Illustration shows key stages of bijel formation. Clockwise from top left, two non-mixing liquids are shown. Ligands (shown in yellow) with amine groups are dispersed throughout the oil or solvent, and nanoparticles coated with carboxylic acids (shown as blue dots) are scattered in the water. With vigorous shaking, the nanoparticles and ligands form a “supersoap” that gets trapped at the interface of the two liquids. The bottom panel is a magnified view of the jammed nanoparticle supersoap. (Credit: Caili Huang/ORNL)

“We’ve basically taken liquids like oil and water and given them a structure, and it’s a structure that can be changed,” said Russell, a visiting faculty scientist at Berkeley Lab. “If the nanoparticles are responsive to electrical, magnetic, or mechanical stimuli, the bijels can become reconfigurable and re-shaped on demand by an external field.”

The researchers were able to prepare new bijels from a variety of common organic, water-insoluble solvents, such as toluene, with ligands dissolved in them, and deionized water, which contained the nanoparticles. To ensure thorough mixing of the liquids, they subjected the emulsion to a vortex spinning at 3,200 revolutions per minute.

“This extreme shaking creates a whole bunch of new places where these particles and polymers can meet each other,” said study co-author Joe Forth, a postdoctoral fellow at Berkeley Lab’s Materials Sciences Division. “You’re synthesizing a lot of this material, which is in effect a thin, 2-D coating of the liquid surfaces in the system.”

The liquids remained a bijel even after one week, a sign of the system’s stability.

Russell, who is also a professor of polymer science and engineering at the University of Massachusetts-Amherst, added that these shape-shifting characteristics would be valuable in microreactors, microfluidic devices, and soft actuators.

Nanoparticle supersoap

Nanoparticles had not been seriously considered in bijels before because their small size made them hard to trap in the liquid interface. To resolve that problem, the researchers coated nano-sized particles with carboxylic acids and put them in water. They then took polymers with an added amine group – a derivative of ammonia – and dissolved them in the toluene.

At left is a vial of bijel stabilized with nanoparticle surfactants. On the right is the same vial after a week of inversion, showing that the nanoparticles kept the liquids from moving. (Credit: Caili Huang/ORNL)

This configuration took advantage of the amine group’s affinity to water, a characteristic that is comparable to surfactants, like soap. Their nanoparticle “supersoap” was designed so that the nanoparticles join ligands, forming an octopus-like shape with a polar head and nonpolar legs that get jammed at the interface, the researchers said.

“Bijels are really a new material, and also excitingly weird in that they are kinetically arrested in these unusual configurations,” said study co-author Brett Helms, a staff scientist at Berkeley Lab’s Molecular Foundry. “The discovery that you can make these bijels with simple ingredients is a surprise. We all have access to oils and water and nanocrystals, allowing broad tunability in bijel properties. This platform also allows us to experiment with new ways to control their shape and function since they are both responsive and reconfigurable.”

The nanoparticles were made of silica, but the researchers noted that in previous studies they used graphene and carbon nanotubes to form nanoparticle surfactants.

“The key is that the nanoparticles can be made of many materials,” said Russell.  “The most important thing is what’s on the surface.”

This is an animation of the bijel

3-D rendering of the nanoparticle bijel taken by confocal microscope. (Credit: Caili Huang/ORNL [Oak Ridge National Laboratory] and Joe Forth/Berkeley Lab)

Here’s a link to and a citation for the paper,

Bicontinuous structured liquids with sub-micrometre domains using nanoparticle surfactants by Caili Huang, Joe Forth, Weiyu Wang, Kunlun Hong, Gregory S. Smith, Brett A. Helms & Thomas P. Russell. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.182 Published online 25 September 2017

This paper is behind a paywall.

Mapping 23,000 atoms in a nanoparticle

Identification of the precise 3-D coordinates of iron, shown in red, and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab

The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,

In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.

The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.

A Feb. 1, 2017 UCLA news release, which originated the news item, provides more detail about the work,

Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.

“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.

Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.

By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.

“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.

The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.

“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”

The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.

“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.

In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.

Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.

That means that radiation-sensitive objects can be imaged with lower doses of radiation.

The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued their own Feb. 1, 2017 news release (also on EurekAlert) about the work with a focus on how their equipment made this breakthrough possible (it repeats a little of the info. from the UCLA news release),

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.

Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …

Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.

“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.

Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.

Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.

“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.

A TEAM approach

The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.

The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.

They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
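That species-assignment step can be illustrated with a toy sketch: in Z-contrast imaging, the heavier platinum atoms produce markedly stronger peaks than the lighter iron atoms, so a simple intensity threshold can split the traced atoms into the two species. The Python snippet below is only a conceptual illustration; the intensity distributions and the threshold are invented, and the actual tracing procedure in the paper is far more sophisticated.

```python
import numpy as np

# Toy illustration of species assignment by peak intensity. In Z-contrast
# STEM data, platinum atoms (Z = 78) produce markedly stronger peaks than
# iron atoms (Z = 26), so a simple intensity threshold can separate the two
# populations. The intensity values and threshold below are invented.

rng = np.random.default_rng(0)
peak_intensities = np.concatenate([
    rng.normal(1.0, 0.1, size=6569),    # stand-in for iron-like peaks
    rng.normal(2.4, 0.2, size=16627),   # stand-in for platinum-like peaks
])

threshold = 1.7                          # assumed cut between the populations
is_platinum = peak_intensities > threshold

print("classified as Fe:", int(np.count_nonzero(~is_platinum)))
print("classified as Pt:", int(np.count_nonzero(is_platinum)))
```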

Translating the data into scientific insights

Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, which was not previously possible in complex 3-D boundaries.

“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.

To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL, using the coordinates and chemical type of each atom, to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.

“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.

Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”

The folks at the Berkeley Lab have created a video (notice where the still image from the beginning of this post appears),

The Oak Ridge National Laboratory (ORNL), not wanting to be left out, has been mentioned in a Feb. 3, 2017 news item on ScienceDaily,

… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.

“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

A Feb. 2, 2017 ORNL news release on EurekAlert, which originated the news item, elucidates further on how their team added to the research,

Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new: simulate magnetism atom by atom in a real nanoparticle.

Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.

Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.

“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are–the coordinates–and the chemical resolution, or what they are — iron or platinum.”

The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,

A Supercomputing Milestone

Magnetism at the atomic level is driven by quantum mechanics — a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.

For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.

“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.
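The “bit might flip at room temperature” point comes from the standard Néel–Arrhenius picture: the expected time before a single-domain grain spontaneously reverses grows exponentially with the ratio of its anisotropy energy barrier (anisotropy constant K times grain volume V) to the thermal energy kBT. The short Python sketch below makes that concrete; the anisotropy constants, grain size, and attempt time are illustrative assumptions, not values from this study.

```python
import math

# Néel-Arrhenius estimate of how long a single-domain magnetic grain keeps
# its orientation: tau = tau0 * exp(K * V / (kB * T)). A larger anisotropy
# constant K or grain volume V gives an exponentially longer retention time;
# if K*V is comparable to kB*T, thermal agitation flips the bit almost
# immediately. All numbers below are assumed for illustration only.

kB = 1.380649e-23   # Boltzmann constant, J/K
tau0 = 1e-9         # attempt time, s (typical assumed value)
T = 300.0           # room temperature, K

def retention_time(K, diameter_nm):
    """Estimated spontaneous-flip time for a spherical grain (seconds)."""
    radius = diameter_nm * 1e-9 / 2.0
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    return tau0 * math.exp(K * volume / (kB * T))

# Weak vs. strong anisotropy for an assumed 5 nm grain:
for K in (5e4, 5e5, 5e6):   # J/m^3, illustrative values
    print(f"K = {K:.0e} J/m^3 -> retention ~ {retention_time(K, 5.0):.2e} s")
```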

To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.

To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.

“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.

As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received an Association for Computing Machinery Gordon Bell Prize in high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures.

Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.

“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.

Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.

In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible — as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms.

Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.

“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.

Finally, here’s a link to and a citation for the paper,

Deciphering chemical order/disorder and material properties at the single-atom level by Yongsoo Yang, Chien-Chun Chen, M. C. Scott, Colin Ophus, Rui Xu, Alan Pryor, Li Wu, Fan Sun, Wolfgang Theis, Jihan Zhou, Markus Eisenbach, Paul R. C. Kent, Renat F. Sabirianov, Hao Zeng, Peter Ercius, & Jianwei Miao. Nature 542, 75–79 (02 February 2017) doi:10.1038/nature21042 Published online 01 February 2017

This paper is behind a paywall.

The physics of melting in two-dimensional systems

You might want to skip over the reference to snow as it doesn’t have much relevance to this story about ‘melting’, from a Feb. 1, 2017 news item on Nanowerk (Note: A link has been removed),

Snow falls in winter and melts in spring, but what drives the phase change in between?
Although melting is a familiar phenomenon encountered in everyday life, playing a part in many industrial and commercial processes, much remains to be discovered about this transformation at a fundamental level.

In 2015, a team led by the University of Michigan’s Sharon Glotzer used high-performance computing at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory [ORNL] to study melting in two-dimensional (2-D) systems, a problem that could yield insights into surface interactions in materials important to technologies like solar panels, as well as into the mechanism behind three-dimensional melting. The team explored how particle shape affects the physics of a solid-to-fluid melting transition in two dimensions.

Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility, the team’s [latest?] work revealed that the shape and symmetry of particles can dramatically affect the melting process (“Shape and symmetry determine two-dimensional melting transitions of hard regular polygons”). This fundamental finding could help guide researchers in search of nanoparticles with desirable properties for energy applications.

There is a video of the ‘melting’ process, but I have to confess to finding it a bit enigmatic,

A Feb. 1, 2017 ORNL news release (also on EurekAlert), which originated the news item, provides more detail about the research,

To tackle the problem, Glotzer’s team needed a supercomputer capable of simulating systems of up to 1 million hard polygons, simple particles used as stand-ins for atoms, ranging from triangles to 14-sided shapes. Unlike traditional molecular dynamics simulations that attempt to mimic nature, hard polygon simulations give researchers a pared-down environment in which to evaluate shape-influenced physics.

“Within our simulated 2-D environment, we found that the melting transition follows one of three different scenarios depending on the shape of the systems’ polygons,” University of Michigan research scientist Joshua Anderson said. “Notably, we found that systems made up of hexagons perfectly follow a well-known theory for 2-D melting, something that hasn’t been described until now.”

Shifting Shape Scenarios

In 3-D systems such as a thinning icicle, melting takes the form of a first-order phase transition. This means that collections of molecules within these systems exist in either solid or liquid form with no in-between in the presence of latent heat, the energy that fuels a solid-to-fluid phase change. In 2-D systems, such as thin-film materials used in batteries and other technologies, melting can be more complex, sometimes exhibiting an intermediate phase known as the hexatic phase.

The hexatic phase, a state characterized as a halfway point between an ordered solid and a disordered liquid, was first theorized in the 1970s by researchers John Kosterlitz, David Thouless, Burt Halperin, David Nelson, and Peter Young. The phase is a principal feature of the KTHNY theory, a 2-D melting theory posited by the researchers (and named based on the first letters of their last names). In 2016 Kosterlitz and Thouless were awarded the Nobel Prize in Physics, along with physicist Duncan Haldane, for their contributions to 2-D materials research.

At the molecular level, solid, hexatic, and liquid systems are defined by the arrangement of their atoms. In a crystalline solid, two types of order are present: translational and orientational. Translational order describes the well-defined paths between atoms over distances, like blocks in a carefully constructed Jenga tower. Orientational order describes the relational and clustered order shared between atoms and groups of atoms over distances. Think of that same Jenga tower turned askew after several rounds of play. The general shape of the tower remains, but its order is now fragmented.

The hexatic phase has no translational order but possesses orientational order. (A liquid has neither translational nor orientational order but exhibits short-range order, meaning any atom will have some average number of neighbors nearby but with no predictable order.)
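The sixfold orientational order that defines the hexatic phase is conventionally measured with the bond-orientational order parameter ψ6: for each particle, average exp(i·6θ) over the angles θ of the bonds to its near neighbours, so |ψ6| is close to 1 in a hexagonally ordered region and falls toward 0 in a disordered liquid. Here is a minimal Python sketch of that calculation, using a fixed neighbour cutoff and toy coordinates; it is not the analysis code used in the study.

```python
import numpy as np

def psi6(points, cutoff):
    """Local sixfold bond-orientational order |psi6| for each particle.

    For every particle, average exp(i*6*theta) over the bond angles theta to
    neighbours closer than `cutoff`; |psi6| is ~1 in a hexagonally ordered
    region and drops toward 0 in a disordered liquid. Brute-force O(N^2)
    neighbour search, which is fine for a small sketch.
    """
    values = np.zeros(len(points))
    for i, p in enumerate(points):
        d = points - p
        r = np.hypot(d[:, 0], d[:, 1])
        neighbours = (r > 0) & (r < cutoff)
        if neighbours.any():
            theta = np.arctan2(d[neighbours, 1], d[neighbours, 0])
            values[i] = abs(np.exp(6j * theta).mean())
    return values

# Toy data: a perfect triangular lattice gives <|psi6|> close to 1 ...
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
lattice = np.column_stack([(xs + 0.5 * (ys % 2)).ravel(),
                           ys.ravel() * np.sqrt(3) / 2])
print("triangular lattice <|psi6|> =", psi6(lattice, 1.2).mean())

# ... while random, liquid-like points give a noticeably smaller value.
random_pts = np.random.default_rng(1).uniform(0.0, 10.0, size=(200, 2))
print("random points      <|psi6|> =", psi6(random_pts, 1.2).mean())
```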

Deducing the presence of a hexatic phase requires a leadership-class computer that can calculate large hard-particle systems. Glotzer’s team gained access to the OLCF’s 27-petaflop Titan through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, running its GPU-accelerated HOOMD-blue code to maximize time on the machine.

On Titan, HOOMD-blue used 64 GPUs for each massively parallel Monte Carlo simulation of up to 1 million particles. Researchers explored 11 different shape systems, applying an external pressure to push the particles together. Each system was simulated at 21 different densities, with the lowest densities representing a fluid state and the highest densities a solid state.
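The Monte Carlo moves underlying such hard-particle simulations are conceptually simple: because hard shapes have no interaction energy other than the prohibition on overlaps, a trial displacement is accepted whenever it creates no overlap, and the equation of state is mapped by repeating this at a series of densities. The plain-Python hard-disk sketch below illustrates only that idea; it is not the team’s GPU-accelerated HOOMD-blue HPMC workflow, and the system size and parameters are toy values.

```python
import numpy as np

# Minimal hard-disk Monte Carlo sketch. Hard particles have no interaction
# energy other than the prohibition on overlaps, so a trial displacement is
# accepted whenever it creates no overlap; repeating this at a series of
# densities maps out the equation of state. Plain-Python toy only.

rng = np.random.default_rng(2)
sigma = 1.0                                 # disk diameter
box = 10.0                                  # periodic square box edge
grid = (np.arange(8) + 0.5) * (box / 8)
pos = np.array([(x, y) for x in grid for y in grid])  # overlap-free start
n = len(pos)                                # 64 disks -> packing fraction ~0.50

def creates_overlap(i, trial):
    """True if moving disk i to `trial` would overlap any other disk."""
    d = pos - trial
    d -= box * np.round(d / box)            # minimum-image convention
    r2 = (d ** 2).sum(axis=1)
    r2[i] = np.inf                          # ignore the moving disk itself
    return bool((r2 < sigma ** 2).any())

accepted, trials = 0, 0
for sweep in range(200):
    for i in range(n):
        trial = (pos[i] + rng.uniform(-0.2, 0.2, size=2)) % box
        trials += 1
        if not creates_overlap(i, trial):
            pos[i] = trial
            accepted += 1

print(f"acceptance ratio at this density: {accepted / trials:.2f}")
```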

The simulations demonstrated multiple melting scenarios hinging on the polygons’ shape. Systems with polygons of seven sides or more closely followed the melting behavior of hard disks, or circles, exhibiting a continuous phase transition from the solid to the hexatic phase and a first-order phase transition from the hexatic to the liquid phase. A continuous phase transition means a constantly changing area in response to a changing external pressure. A first-order phase transition is characterized by a discontinuity in which the volume jumps across the phase transition in response to the changing external pressure. The team found pentagons and fourfold pentilles, irregular pentagons with two different edge lengths, exhibit a first-order solid-to-liquid phase transition.

The most significant finding, however, emerged from hexagon systems, which perfectly followed the phase transition described by the KTHNY theory. In this scenario, the particles shift from solid to hexatic and from hexatic to fluid in a perfect continuous phase transition pattern.

“It was actually sort of surprising that no one else has found that until now,” Anderson said, “because it seems natural that the hexagon, with its six sides, and the honeycomb-like hexagonal arrangement would be a perfect match for this theory” in which the hexatic phase generally contains sixfold orientational order.

Glotzer’s team, which recently received a 2017 INCITE allocation, is now applying its leadership-class computing prowess to tackle phase transitions in 3-D. The team is focusing on how fluid particles crystallize into complex colloids—mixtures in which particles are suspended throughout another substance. Common examples of colloids include milk, paper, fog, and stained glass.

“We’re planning on using Titan to study how complexity can arise from these simple interactions, and to do that we’re actually going to look at how the crystals grow and study the kinetics of how that happens,” said Anderson.

There is a paper on arXiv,

Shape and symmetry determine two-dimensional melting transitions of hard regular polygons by Joshua A. Anderson, James Antonaglia, Jaime A. Millan, Michael Engel, Sharon C. Glotzer
(Submitted on 2 Jun 2016 (v1), last revised 23 Dec 2016 (this version, v2)) arXiv:1606.00687 [cond-mat.soft] (or arXiv:1606.00687v2)

This paper is open access and open to public peer review.

Nano-spike catalysts offer one step conversion of carbon dioxide to ethanol

An Oct. 12, 2016 news item on ScienceDaily makes an exciting announcement, if carbon-dioxide-conversion-to-fuel is one of your pet topics,

In a new twist to waste-to-fuel technology, scientists at the Department of Energy’s Oak Ridge National Laboratory [ORNL] have developed an electrochemical process that uses tiny spikes of carbon and copper to turn carbon dioxide, a greenhouse gas, into ethanol. Their finding, which involves nanofabrication and catalysis science, was serendipitous.

An Oct. 12, 2016 ORNL news release, which originated the news item, explains in greater detail,

“We discovered somewhat by accident that this material worked,” said ORNL’s Adam Rondinone, lead author of the team’s study published in ChemistrySelect. “We were trying to study the first step of a proposed reaction when we realized that the catalyst was doing the entire reaction on its own.”

The team used a catalyst made of carbon, copper and nitrogen and applied voltage to trigger a complicated chemical reaction that essentially reverses the combustion process. With the help of the nanotechnology-based catalyst which contains multiple reaction sites, the solution of carbon dioxide dissolved in water turned into ethanol with a yield of 63 percent. Typically, this type of electrochemical reaction results in a mix of several different products in small amounts.
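In electrochemical CO2 reduction, a yield like this is normally reported as a Faradaic efficiency: the fraction of the electrical charge passed that ends up in the desired product, counting 12 electrons per ethanol molecule for the overall reaction 2 CO2 + 12 H+ + 12 e− → C2H5OH + 3 H2O. The short Python sketch below shows the bookkeeping with invented charge and product amounts; the numbers are not taken from the paper.

```python
# Back-of-the-envelope Faradaic efficiency for CO2-to-ethanol electrolysis.
# Overall reaction: 2 CO2 + 12 H+ + 12 e- -> C2H5OH + 3 H2O, i.e. 12 electrons
# are consumed per ethanol molecule. The charge and product amounts below are
# invented purely to show the bookkeeping; they are not data from the paper.

FARADAY = 96485.332            # charge of one mole of electrons, C/mol
ELECTRONS_PER_ETHANOL = 12

def faradaic_efficiency(moles_ethanol, total_charge_coulombs):
    """Fraction of the passed charge that ended up in ethanol."""
    charge_into_ethanol = moles_ethanol * ELECTRONS_PER_ETHANOL * FARADAY
    return charge_into_ethanol / total_charge_coulombs

# Example with assumed numbers: 5.4e-5 mol of ethanol after passing 100 C.
print(f"Faradaic efficiency: {faradaic_efficiency(5.4e-5, 100.0):.0%}")
```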

“We’re taking carbon dioxide, a waste product of combustion, and we’re pushing that combustion reaction backwards with very high selectivity to a useful fuel,” Rondinone said. “Ethanol was a surprise — it’s extremely difficult to go straight from carbon dioxide to ethanol with a single catalyst.”

The catalyst’s novelty lies in its nanoscale structure, consisting of copper nanoparticles embedded in carbon spikes. This nano-texturing approach avoids the use of expensive or rare metals such as platinum that limit the economic viability of many catalysts.

“By using common materials, but arranging them with nanotechnology, we figured out how to limit the side reactions and end up with the one thing that we want,” Rondinone said.

The researchers’ initial analysis suggests that the spiky textured surface of the catalysts provides ample reactive sites to facilitate the carbon dioxide-to-ethanol conversion.

“They are like 50-nanometer lightning rods that concentrate electrochemical reactivity at the tip of the spike,” Rondinone said.

Given the technique’s reliance on low-cost materials and an ability to operate at room temperature in water, the researchers believe the approach could be scaled up for industrially relevant applications. For instance, the process could be used to store excess electricity generated from variable power sources such as wind and solar.

“A process like this would allow you to consume extra electricity when it’s available to make and store as ethanol,” Rondinone said. “This could help to balance a grid supplied by intermittent renewable sources.”

The researchers plan to refine their approach to improve the overall production rate and further study the catalyst’s properties and behavior.

Here’s a link to and a citation for the paper,

High-Selectivity Electrochemical Conversion of CO2 to Ethanol using a Copper Nanoparticle/N-Doped Graphene Electrode by Yang Song, Rui Peng, Dale Hensley, Peter Bonnesen, Liangbo Liang, Zili Wu, Harry Meyer III, Miaofang Chi, Cheng Ma, Bobby Sumpter and Adam Rondinone. ChemistrySelect DOI: 10.1002/slct.201601169 First published: 28 September 2016

This paper is open access.

New elements named (provisionally)

They say it’s provisional, but I suspect it would take an act of god for a change in the proposed names. From a June 8, 2016 blog posting (scroll down about 25% of the way) on the International Union of Pure and Applied Chemistry (IUPAC) website,

IUPAC is naming the four new elements nihonium, moscovium, tennessine, and oganesson

Following earlier reports that the claims for discovery of these elements have been fulfilled [1, 2], the discoverers have been invited to propose names and the following are now disclosed for public review:

  • Nihonium and symbol Nh, for the element 113,
  • Moscovium and symbol Mc, for the element 115,
  • Tennessine and symbol Ts, for the element 117, and
  • Oganesson and symbol Og, for the element 118.

The IUPAC Inorganic Chemistry Division has reviewed and considered these proposals and recommends these for acceptance. A five-month public review is now set, expiring 8 November 2016, prior to the formal approval by the IUPAC Council.

I can’t figure out how someone from the public might offer a comment about the names.

There’s more from the posting about what kinds of names are acceptable and how the names in this set of four were arrived at,

The guidelines for naming the elements were recently revised [3] and shared with the discoverers to assist in their proposals. Keeping with tradition, newly discovered elements can be named after:
(a) a mythological concept or character (including an astronomical object),
(b) a mineral or similar substance,
(c) a place, or geographical region,
(d) a property of the element, or
(e) a scientist.
The names of all new elements in general would have an ending that reflects and maintains historical and chemical consistency. This would be in general “-ium” for elements belonging to groups 1-16, “-ine” for elements of group 17 and “-on” for elements of group 18. Finally, the names for new chemical elements in English should allow proper translation into other major languages.
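That ending rule is simple enough to encode directly; the small Python sketch below (the function name is mine, purely illustrative) just restates the groups-to-suffix mapping quoted above.

```python
def element_name_suffix(group: int) -> str:
    """Name ending recommended for a new element, by periodic-table group,
    following the convention quoted above."""
    if 1 <= group <= 16:
        return "-ium"
    if group == 17:
        return "-ine"
    if group == 18:
        return "-on"
    raise ValueError("group must be between 1 and 18")

# The four newly named elements and their groups:
for name, group in [("nihonium", 13), ("moscovium", 15),
                    ("tennessine", 17), ("oganesson", 18)]:
    print(f"{name}: group {group} -> ends in '{element_name_suffix(group)}'")
```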

For the element with atomic number 113 the discoverers at RIKEN Nishina Center for Accelerator-Based Science (Japan) proposed the name nihonium and the symbol Nh. Nihon is one of the two ways to say “Japan” in Japanese, and literally means “the Land of the Rising Sun”. The name is proposed to make a direct connection to the nation where the element was discovered. Element 113 is the first element to have been discovered in an Asian country. While presenting this proposal, the team headed by Professor Kosuke Morita pays homage to the trailblazing work by Masataka Ogawa done in 1908 surrounding the discovery of element 43. The team also hopes that pride and faith in science will displace the lost trust of those who suffered from the 2011 Fukushima nuclear disaster.

For the element with atomic number 115 the name proposed is moscovium with the symbol Mc and for element with atomic number 117, the name proposed is tennessine with the symbol Ts. These are in line with tradition honoring a place or geographical region and are proposed jointly by the discoverers at the Joint Institute for Nuclear Research, Dubna (Russia), Oak Ridge National Laboratory (USA), Vanderbilt University (USA) and Lawrence Livermore National Laboratory (USA).

Moscovium is in recognition of the Moscow region and honors the ancient Russian land that is the home of the Joint Institute for Nuclear Research, where the discovery experiments were conducted using the Dubna Gas-Filled Recoil Separator in combination with the heavy ion accelerator capabilities of the Flerov Laboratory of Nuclear Reactions.

Tennessine is in recognition of the contribution of the Tennessee region, including Oak Ridge National Laboratory, Vanderbilt University, and the University of Tennessee at Knoxville, to superheavy element research, including the production and chemical separation of unique actinide target materials for superheavy element synthesis at ORNL’s High Flux Isotope Reactor (HFIR) and Radiochemical Engineering Development Center (REDC).

For the element with atomic number 118 the collaborating teams of discoverers at the Joint Institute for Nuclear Research, Dubna (Russia) and Lawrence Livermore National Laboratory (USA) proposed the name oganesson and symbol Og. The proposal is in line with the tradition of honoring a scientist and recognizes Professor Yuri Oganessian (born 1933) for his pioneering contributions to transactinoid elements research. His many achievements include the discovery of superheavy elements and significant advances in the nuclear physics of superheavy nuclei including experimental evidence for the “island of stability”.

“It is a pleasure to see that specific places and names (country, state, city, and scientist) related to the new elements is recognized in these four names. Although these choices may perhaps be viewed by some as slightly self-indulgent, the names are completely in accordance with IUPAC rules”, commented Jan Reedijk, who corresponded with the various laboratories and invited the discoverers to make proposals. “In fact, I see it as thrilling to recognize that international collaborations were at the core of these discoveries and that these new names also make the discoveries somewhat tangible.”

So, let’s welcome Tennessine, Moscovium, Nihonium, and Oganesson to the periodic table of elements. I imagine Tom Lehrer’s ‘The Elements’ song will be updated soon. In the meantime we have this from ASAP Science, which includes the new elements under the placeholder names they were given when the addition was first publicized in January 2016 (all of the placeholder names start with U),

Enjoy!

Corrections: Hybrid Photonic-Nanomechanical Force Microscopy uses vibration for better chemical analysis

*ETA  Nov. 4, 2015: I’m apologizing to anyone wishing to read this posting as it’s a bit of a mess. I deeply regret mishandling the situation. In future, I shall not be taking any corrections from individual researchers to materials such as news releases that have been issued by an institution. Whether or not the individual researchers are happy with how their contributions or how a colleague’s contributions or how their home institutions have been characterized is a matter for them and their home institutions.

The August 10, 2015 ORNL news release with all the correct details has been added to the end of this post.*

A researcher at the University of Central Florida (UCF) has developed a microscope that uses vibrations for better analysis of chemical composition. From an Aug. 10, 2015 news item on Nanowerk,

It’s a discovery that could have promising implications for fields as varied as biofuel production, solar energy, opto-electronic devices, pharmaceuticals and medical research.

“What we’re interested in is the tools that allow us to understand the world at a very small scale,” said UCF professor Laurene Tetard, formerly of the Oak Ridge National Laboratory. “Not just the shape of the object, but its mechanical properties, its composition and how it evolves in time.”

An Aug. 10, 2015 UCF news release (also on EurekAlert), which originated the news item, describes the limitations of atomic force microscopy and gives a few details about the hybrid microscope (Note: A link has been removed),

For more than two decades, scientists have used atomic force microscopy – a probe that acts like an ultra-sensitive needle on a record player – to determine the surface characteristics of samples at the microscopic scale. A “needle” that comes to an atoms-thin point traces a path over a sample, mapping the surface features at a sub-cellular level [nanoscale].

But that technology has its limits. It can determine the topographical characteristics of [a] sample, but it can’t identify its composition. And with the standard tools currently used for chemical mapping, anything smaller than roughly half a micron is going to look like a blurry blob, so researchers are out of luck if they want to study what’s happening at the molecular level.

A team led by Tetard has come up with a hybrid form of that technology that produces a much clearer chemical image. As described Aug. 10 in the journal Nature Nanotechnology, Hybrid Photonic-Nanomechanical Force Microscopy (HPFM) can discern a sample’s topographic characteristics together with the chemical properties at a much finer scale.

The HPFM method is able to identify materials based on differences in the vibration produced when they’re subjected to different wavelengths of light – essentially a material’s unique “fingerprint.”
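To make the “fingerprint” idea concrete, here is a minimal sketch of how a measured response spectrum might be matched against a small library of reference spectra by correlation. The material names, reference spectra, and function names are hypothetical placeholders for illustration only; this is not the authors’ actual analysis pipeline.

import numpy as np

def normalize(spectrum):
    """Zero-mean, unit-norm scaling so only the shape of the spectrum matters."""
    s = np.asarray(spectrum, dtype=float)
    s = s - s.mean()
    norm = np.linalg.norm(s)
    return s / norm if norm > 0 else s

def best_match(measured, library):
    """Return the library entry whose spectrum correlates best with the measurement."""
    m = normalize(measured)
    scores = {name: float(np.dot(m, normalize(ref))) for name, ref in library.items()}
    return max(scores, key=scores.get), scores

# Hypothetical reference "fingerprints" sampled at the same illustrative wavelengths.
wavelengths = np.linspace(2.5, 10.0, 50)
library = {
    "cellulose-like": np.exp(-(wavelengths - 6.0) ** 2),
    "lignin-like": np.exp(-(wavelengths - 8.0) ** 2),
}
measured = np.exp(-(wavelengths - 6.1) ** 2) + 0.05 * np.random.default_rng(0).normal(size=50)

name, scores = best_match(measured, library)
print(name)  # expected to come back as the cellulose-like fingerprint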

“What we are developing is a completely new way of making that detection possible,” said Tetard, who has joint appointments to UCF’s Physics Department, Material Science and Engineering Department and the NanoScience Technology Center.

The researchers proved the effectiveness of HPFM while examining samples from an eastern cottonwood tree, a potential source of biofuel. By examining the plant samples at the nanoscale, the researchers for the first time were able to determine the molecular traits of both untreated and chemically processed cottonwood inside the plant cell walls.

The research team included Tetard; Ali Passian, R.H. Farahi and Brian Davison, all of Oak Ridge National Laboratory; and Thomas Thundat of the University of Alberta.

Long term, the results will help reveal better methods for producing the most biofuel from the cottonwood, a potential boon for industry. Likewise, the new method could be used to examine samples of myriad plants to determine whether they’re good candidates for biofuel production.

Potential uses of the technology go beyond the world of biofuel. Continued research may allow HPFM to be used as a probe to study, for instance, the effect of new treatments being developed to save plants such as citrus trees from the bacterial diseases rapidly decimating the citrus industry, or to study fundamental photonically-induced processes in complex systems such as solar cell materials or opto-electronic devices.

Here’s a link to and a citation for the paper,

Opto-nanomechanical spectroscopic material characterization by L. Tetard, A. Passian, R. H. Farahi, T. Thundat, & B. H. Davison. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.168 Published online 10 August 2015

This paper is behind a paywall.

*ETA August 27, 2015:

August 10, 2015 ORNL news release (Note: Funding information and a link to the paper [previously given] have been removed):

A microscope being developed at the Department of Energy’s Oak Ridge National Laboratory will allow scientists studying biological and synthetic materials to simultaneously observe chemical and physical properties on and beneath the surface.

The Hybrid Photonic Mode-Synthesizing Atomic Force Microscope is unique, according to principal investigator Ali Passian of ORNL’s Quantum Information System group. As a hybrid, the instrument, described in a paper published in Nature Nanotechnology, combines the disciplines of nanospectroscopy and nanomechanical microscopy.

“Our microscope offers a noninvasive rapid method to explore materials simultaneously for their chemical and physical properties,” Passian said. “It allows researchers to study the surface and subsurface of synthetic and biological samples, which is a capability that until now didn’t exist.”

ORNL’s instrument retains all of the advantages of an atomic force microscope while simultaneously offering the potential for discoveries through its high resolution and subsurface spectroscopic capabilities.

“The originality of the instrument and technique lies in its ability to provide information about a material’s chemical composition in the broad infrared spectrum of the chemical composition while showing the morphology of a material’s interior and exterior with nanoscale – a billionth of a meter – resolution,” Passian said.

Researchers will be able to study samples ranging from engineered nanoparticles and nanostructures to naturally occurring biological polymers, tissues and plant cells.

The first application as part of DOE’s BioEnergy Science Center was in the examination of plant cell walls under several treatments to provide submicron characterization. The plant cell wall is a layered nanostructure of biopolymers such as cellulose. Scientists want to convert such biopolymers to free the useful sugars and release energy.

An earlier instrument, also invented at ORNL, provided imaging of poplar cell wall structures that yielded unprecedented topological information, advancing fundamental research in sustainable biofuels.

Because of this new instrument’s impressive capabilities, the research team envisions broad applications.

“An urgent need exists for new platforms that can tackle the challenges of subsurface and chemical characterization at the nanometer scale,” said co-author Rubye Farahi. “Hybrid approaches such as ours bring together multiple capabilities, in this case, spectroscopy and high-resolution microscopy.”

Looking inside, the hybrid microscope consists of a photonic module that is incorporated into a mode-synthesizing atomic force microscope. The modular aspect of the system makes it possible to accommodate various radiation sources such as tunable lasers and non-coherent monochromatic or polychromatic sources.

ETA2 August 27, 2015: I’ve received an email from one of the paper’s authors (R.H. Farahi of the US Oak Ridge National Laboratory [ORNL]) who says there are some inaccuracies in this piece. The news release supplied by the University of Central Florida states that Dr. Tetard led the team, and that is not so. According to Dr. Farahi, she had a postdoctoral position on the team, which she left two years ago. You might also get the impression that some of the work was performed at the University of Central Florida. That is not so, according to Dr. Farahi. As a courtesy, Dr. Tetard was retained as first author of the paper.

*Nov. 4, 2015: I suspect some of the misunderstanding was due to overeagerness and/or time pressures. Whoever wrote the news release may have made some assumptions. It’s very easy to make a mistake when talking to an ebullient scientist who can unintentionally lead you to believe something that’s not so. I worked in a high tech company and believed that there was some new software being developed, which turned out to be a case of high hopes. Luckily, I said something that triggered a rapid rebuttal of the fantasies. Getting back to this situation, other contributing factors could include the writer not having time to get the news release reviewed by the scientist, or the scientist skimming the release and missing a few bits due to time pressure.*

Sealing graphene’s defects to make a better filtration device

Making a graphene filter that allows water to pass through while screening out salt and/or noxious materials has been more challenging than one might think. According to a May 7, 2015 news item on Nanowerk, graphene filters can be ‘leaky’,

For faster, longer-lasting water filters, some scientists are looking to graphene – thin, strong sheets of carbon – to serve as ultrathin membranes, filtering out contaminants to quickly purify high volumes of water.

Graphene’s unique properties make it a potentially ideal membrane for water filtration or desalination. But there’s been one main drawback to its wider use: Making membranes in one-atom-thick layers of graphene is a meticulous process that can tear the thin material — creating defects through which contaminants can leak.

Now engineers at MIT [Massachusetts Institute of Technology], Oak Ridge National Laboratory, and King Fahd University of Petroleum and Minerals (KFUPM) have devised a process to repair these leaks, filling cracks and plugging holes using a combination of chemical deposition and polymerization techniques. The team then used a process it developed previously to create tiny, uniform pores in the material, small enough to allow only water to pass through.

A May 8, 2015 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

Combining these two techniques, the researchers were able to engineer a relatively large defect-free graphene membrane — about the size of a penny. The membrane’s size is significant: To be exploited as a filtration membrane, graphene would have to be manufactured at a scale of centimeters, or larger.

In experiments, the researchers pumped water through a graphene membrane treated with both defect-sealing and pore-producing processes, and found that water flowed through at rates comparable to current desalination membranes. The graphene was able to filter out most large-molecule contaminants, such as magnesium sulfate and dextran.

Rohit Karnik, an associate professor of mechanical engineering at MIT, says the group’s results, published in the journal Nano Letters, represent the first success in plugging graphene’s leaks.

“We’ve been able to seal defects, at least on the lab scale, to realize molecular filtration across a macroscopic area of graphene, which has not been possible before,” Karnik says. “If we have better process control, maybe in the future we don’t even need defect sealing. But I think it’s very unlikely that we’ll ever have perfect graphene — there will always be some need to control leakages. These two [techniques] are examples which enable filtration.”

Sean O’Hern, a former graduate research assistant at MIT, is the paper’s first author. Other contributors include MIT graduate student Doojoon Jang, former graduate student Suman Bose, and Professor Jing Kong.

A delicate transfer

“The current types of membranes that can produce freshwater from saltwater are fairly thick, on the order of 200 nanometers,” O’Hern says. “The benefit of a graphene membrane is, instead of being hundreds of nanometers thick, we’re on the order of three angstroms — 600 times thinner than existing membranes. This enables you to have a higher flow rate over the same area.”
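As a quick sanity check on the numbers in that quote, here is the arithmetic, with the caveat that the inverse-thickness scaling in the final comment is a first-order assumption (real flux also depends on pore size, pore density, and driving pressure):

conventional_nm = 200.0  # typical existing membrane thickness, from the quote above
graphene_nm = 0.3        # roughly three angstroms, from the quote above

ratio = conventional_nm / graphene_nm
print(f"graphene is roughly {ratio:.0f}x thinner")  # ~667x, i.e. "on the order of 600 times"

# If permeance scaled purely as 1/thickness, flow per unit area and pressure would
# rise by about the same factor; an upper bound, not a measured result.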

O’Hern and Karnik have been investigating graphene’s potential as a filtration membrane for the past several years. In 2009, the group began fabricating membranes from graphene grown on copper — a metal that supports the growth of graphene across relatively large areas. However, copper is impermeable, requiring the group to transfer the graphene to a porous substrate following fabrication.

However, O’Hern noticed that this transfer process would create tears in graphene. What’s more, he observed intrinsic defects created during the growth process, resulting perhaps from impurities in the original material.

Plugging graphene’s leaks

To plug graphene’s leaks, the team came up with a technique to first tackle the smaller intrinsic defects, then the larger transfer-induced defects. For the intrinsic defects, the researchers used a process called “atomic layer deposition,” placing the graphene membrane in a vacuum chamber, then pulsing in a hafnium-containing chemical that does not normally interact with graphene. However, if the chemical comes in contact with a small opening in graphene, it will tend to stick to that opening, attracted by the area’s higher surface energy.

The team applied several rounds of atomic layer deposition, finding that the deposited hafnium oxide successfully filled in graphene’s nanometer-scale intrinsic defects. However, O’Hern realized that using the same process to fill in much larger holes and tears — on the order of hundreds of nanometers — would require too much time.

Instead, he and his colleagues came up with a second technique to fill in larger defects, using a process called “interfacial polymerization” that is often employed in membrane synthesis. After they filled in graphene’s intrinsic defects, the researchers submerged the membrane at the interface of two solutions: a water bath and an organic solvent that, like oil, does not mix with water.

In the two solutions, the researchers dissolved two different molecules that can react to form nylon. Once O’Hern placed the graphene membrane at the interface of the two solutions, he observed that nylon plugs formed only in tears and holes — regions where the two molecules could come in contact because of tears in the otherwise impermeable graphene — effectively sealing the remaining defects.
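Taken together, the two-step repair described above amounts to routing each defect to a fix based on its size: atomic layer deposition for the small intrinsic defects, interfacial polymerization for the larger transfer-induced tears. Here is a toy sketch of that routing logic; the size threshold and growth-per-cycle figures are made-up numbers chosen only for illustration, not values from the paper.

ALD_LIMIT_NM = 10.0        # assumed practical upper size for ALD filling (hypothetical)
HAFNIA_PER_CYCLE_NM = 0.1  # assumed oxide growth per ALD cycle (hypothetical)

def seal(defect_size_nm):
    """Pick a sealing step for a defect of the given size, in nanometres."""
    if defect_size_nm <= ALD_LIMIT_NM:
        # Each cycle narrows the opening from both edges, so divide by two.
        cycles = int(defect_size_nm / (2 * HAFNIA_PER_CYCLE_NM)) + 1
        return f"atomic layer deposition: ~{cycles} hafnia cycles"
    return "interfacial polymerization: nylon plug"

for size in [1.5, 8.0, 150.0, 400.0]:  # defect sizes in nanometres
    print(f"{size:6.1f} nm -> {seal(size)}")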

Using a technique they developed last year, the researchers then etched tiny, uniform holes in graphene — small enough to let water molecules through, but not larger contaminants. In experiments, the group tested the membrane with water containing several different molecules, including salt, and found that the membrane rejected up to 90 percent of larger molecules. However, it let salt through at a faster rate than water.
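For anyone wondering what “rejected up to 90 percent” means in practice: membrane rejection is commonly expressed as R = 1 - C_permeate / C_feed. The concentrations below are made-up numbers that show the arithmetic, not data from the paper.

def rejection(c_feed, c_permeate):
    """Fraction of a solute held back by the membrane."""
    return 1.0 - c_permeate / c_feed

print(f"{rejection(1.0, 0.10):.0%}")  # 90% rejection, e.g. a large-molecule contaminant
print(f"{rejection(1.0, 0.75):.0%}")  # 25% rejection, e.g. a solute like salt that mostly passes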

The preliminary tests suggest that graphene may be a viable alternative to existing filtration membranes, although Karnik says techniques to seal its defects and control its permeability will need further improvements.

“Water desalination and nanofiltration are big applications where, if things work out and this technology withstands the different demands of real-world tests, it would have a large impact,” Karnik says. “But one could also imagine applications for fine chemical- or biological-sample processing, where these membranes could be useful. And this is the first report of a centimeter-scale graphene membrane that does any kind of molecular filtration. That’s exciting.”

De-en Jiang, an assistant professor of chemistry at the University of California at Riverside, sees the defect-sealing technique as “a great advance toward making graphene filtration a reality.”

“The two-step technique is very smart: sealing the defects while preserving the desired pores for filtration,” says Jiang, who did not contribute to the research. “This would make the scale-up much easier. One can produce a large graphene membrane first, not worrying about the defects, which can be sealed later.”

I have featured graphene and water desalination work from these MIT researchers before, in a Feb. 27, 2014 posting. Interestingly, there was no mention of problems with defects in the news release highlighting that previous work.

Here’s a link to and a citation for the latest paper,

Nanofiltration across Defect-Sealed Nanoporous Monolayer Graphene by Sean C. O’Hern, Doojoon Jang, Suman Bose, Juan-Carlos Idrobo, Yi Song, Tahar Laoui, Jing Kong, and Rohit Karnik. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.5b00456 Publication Date (Web): April 27, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

Citrus canker, Florida, and Zinkicide

First found in Florida orchards in 2005, citrus greening (a bacterial disease distinct from citrus canker) poses a serious threat to the US state’s fruit industry. An April 2, 2015 news item on phys.org describes a possible solution to the problem,

Since it was discovered in South Florida in 2005, the plague of citrus greening has spread to nearly every grove in the state, stoking fears among growers that the $10.7 billion-a-year industry may someday disappear.

Now the U.S. Department of Agriculture has awarded the University of Florida a $4.6 million grant aimed at testing a potential new weapon in the fight against citrus greening: Zinkicide, a bactericide invented by a nanoparticle researcher at the University of Central Florida.

An April 2, 2015 University of Central Florida news release by Mark Schlueb (also on EurekAlert), which originated the news item, describes the problem and the solution (Zinkicide),

Citrus greening – also known by its Chinese name, Huanglongbing, or HLB – causes orange, grapefruit and other citrus trees to produce small, bitter fruit that drop prematurely and are unsuitable for sale or juice. Eventually, infected trees die. Florida has lost tens of thousands of acres to the disease.

“It’s a hundred-year-old disease, but to date there is no cure. It’s a killer, a true killer for the citrus industry,” said Swadeshmukul Santra, associate professor in the NanoScience Technology Center at UCF.

The bacteria that cause HLB are carried by the Asian citrus psyllid, a tiny insect that feeds on leaves and stems of infected citrus trees, then carries the bacteria to healthy trees.

Zinkicide, developed by Santra, is designed to kill the bacteria.

The $4.6 million grant is the largest of five totaling $23 million that were recently announced by the USDA’s National Institute of Food and Agriculture.

The evaluation of Zinkicide is a multi-institute project involving 13 investigators from six institutions. Evan Johnson of UF’s [University of Florida] Citrus Research and Education Center at Lake Alfred is the project director, and there are a dozen co-principal investigators from UF, UCF, Oak Ridge National Laboratory (ORNL), Auburn University, New Mexico State University and The Ohio State University.

”Managing systemic diseases like HLB is a difficult challenge that has faced plant pathologists for many years,” said Johnson. “It is a privilege to work with an excellent team of researchers from many different disciplines with the goal of developing new tools that are both effective and safe.”

A portion of the grant money, $1.4 million, flows to UCF, where Santra leads a team that also includes Andre Gesquiere, Laurene Tetard and the Oak Ridge National Laboratory collaborator, Loukas Petridis.

HLB control is difficult because current bactericidal sprays, such as copper, simply leave a protective film on the outside of a plant. The insect-transmitted bacteria bypasses that barrier and lives inside a tree’s fruit, stems and roots, in the vascular tissue known as the phloem. There, it deprives the tree of carbohydrate and nutrients, causing root loss and ultimately death. For a bactericide to be effective against HLB, it must be able to move within the plant, too.

Zinkicide is a nanoparticle smaller than a single microscopic cell, and researchers are cautiously optimistic it will be able to move systemically from cell to cell to kill the bacteria that cause HLB.

“The bacteria hide inside the plant in the phloem region,” Santra said. “If you spray and your compound doesn’t travel to the phloem region, then you cannot treat HLB.”

Zinkicide is derived from ingredients which are found in plants, and is designed to break down and be metabolized after its job is done. [emphasis mine]

It’s the first step in a years-long process to bring a treatment to market. UF will lead five years of greenhouse and field trials on grapefruit and sweet orange to determine the effectiveness of Zinkicide and the best method and timing of application.

The project also includes research to study where the nanoparticles travel within the plant, understand how they interact with plant tissue and how long they remain before breaking down. [emphasis mine]

If effective, the bactericide could have a substantial role in combatting HLB in Florida, and in other citrus-producing states and countries. It would also likely be useful for control of other bacterial pathogens infecting other crops.

The Zinkicide project is a spinoff from previous collaborations between Santra and UF’s Jim Graham at the Citrus Research and Education Center to develop alternatives to copper for citrus canker control.

The previous Citrus Research and Development Foundation (CRDF)-funded Zinkicide project has issued three reports, for June 30, 2014, Sept. 30, 2014, and Dec. 31, 2014. This project’s completion date is May 2015. The reports, which are remarkably succinct, each consisting of two paragraphs, can be found here.

Oddly, the UCF news release doesn’t mention that Zinkicide is a zinc particulate (although it can be inferred), as noted on the CRDF project webpage; I’m guessing they mean a zinc nanoparticle. Happily, they are researching what happens after the bactericide has done its work on the infection. It’s good to see a life cycle approach to this research.