Tag Archives: Oak Ridge National Laboratory (ORNL)

Mapping 23,000 atoms in a nanoparticle

Identification of the precise 3-D coordinates of iron (shown in red) and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab

The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,

In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.

The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.

A Feb. 1, 2017 UCLA news release, which originated the news item, provides more detail about the work,

Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.

“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.

Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.

By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.

“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.

The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.

“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”

The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.

“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.

In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.

Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.

That means that radiation-sensitive objects can be imaged with lower doses of radiation.

The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued their own Feb. 1, 2017 news release (also on EurekAlert) about the work with a focus on how their equipment made this breakthrough possible (it repeats a little of the info. from the UCLA news release),

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.

Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …

Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.

“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.

Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.

Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.

“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.

A TEAM approach

The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.

The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.

They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
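
Neither release spells out the algorithm, but GENFIRE-style tomography iterates between two constraints: the estimate's projections must match the measured tilt series, and the reconstruction must stay physical (for example, non-negative). Here is a minimal 2-D toy of that iterate-project-constrain loop, a SIRT-like sketch in Python rather than the actual GENFIRE code; the function names, step size, and phantom are all illustrative:

```python
import numpy as np
from scipy.ndimage import rotate

def project(image, angle_deg):
    # One parallel-beam projection: rotate the image, sum along columns.
    return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

def sirt_reconstruct(projections, angles_deg, size, n_iter=50, step=0.1):
    # Iteratively nudge the estimate so its projections match the data,
    # re-imposing positivity after every pass (the real-space constraint).
    est = np.zeros((size, size))
    for _ in range(n_iter):
        for proj, ang in zip(projections, angles_deg):
            residual = proj - project(est, ang)
            # Back-project: smear the residual evenly back along each ray.
            smear = np.tile(residual / size, (size, 1))
            est += step * rotate(smear, -ang, reshape=False, order=1)
        est = np.clip(est, 0, None)  # density can't be negative
    return est

# Toy usage: recover a square phantom from 36 simulated tilt projections.
size = 64
phantom = np.zeros((size, size))
phantom[24:40, 24:40] = 1.0
angles = np.linspace(0, 180, 36, endpoint=False)
sinogram = [project(phantom, a) for a in angles]
recon = sirt_reconstruct(sinogram, angles, size)
```

Real atomic electron tomography layers Fourier-space gridding of the tilt series and the atom-tracing step described above on top of this basic loop.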

Translating the data into scientific insights

Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, measurements that were not previously possible for complex 3-D boundaries.
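
The Foundry's in-house analysis code isn't shown here, but one standard way to quantify chemical order and disorder from exactly this kind of data (3-D coordinates plus a species label per atom) is a Warren-Cowley short-range order parameter. A hedged sketch, with the neighbor cutoff and the 0 = Fe / 1 = Pt encoding as assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def warren_cowley(positions, species, r_cut=3.0):
    """Per-atom Warren-Cowley short-range order parameter.

    positions: (N, 3) coordinates (same units as r_cut, e.g. angstroms);
    species: length-N array of 0 (Fe) or 1 (Pt).
    alpha < 0 where unlike neighbors are favored (chemical order),
    alpha ~ 0 in a random alloy, alpha > 0 where like atoms cluster.
    """
    positions = np.asarray(positions)
    species = np.asarray(species)
    conc = np.bincount(species, minlength=2) / len(species)
    tree = cKDTree(positions)
    alpha = np.zeros(len(positions))
    for i, nbrs in enumerate(tree.query_ball_point(positions, r_cut)):
        nbrs = [j for j in nbrs if j != i]   # drop self-match
        if not nbrs:
            continue
        other = 1 - species[i]
        f_unlike = np.mean(species[nbrs] == other)
        alpha[i] = 1.0 - f_unlike / conc[other]
    return alpha
```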

“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.

To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL, using the coordinates and chemical type of each atom, to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.

“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.

Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”

The folks at the Berkeley Lab have created a video (notice where the still image from the beginning of this post appears),

The Oak Ridge National Laboratory (ORNL), not wanting to be left out, has been mentioned in a Feb. 3, 2017 news item on ScienceDaily,

… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.

“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

A Feb. 2, 2017 ORNL news release on EurekAlert, which originated the news item, elucidates how their team added to the research,

Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new–simulate magnetism atom by atom in a real nanoparticle.

Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.

Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.

“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are–the coordinates–and the chemical resolution, or what they are — iron or platinum.”

The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,

A Supercomputing Milestone

Magnetism at the atomic level is driven by quantum mechanics — a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.

For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.

“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.
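
Kent's point can be made quantitative with the Néel-Arrhenius estimate, in which a single-domain particle's moment flips on a timescale tau = tau0 * exp(KV / kBT), where K is the anisotropy energy density and V the particle volume. A back-of-envelope sketch; the K and tau0 values are illustrative assumptions, not numbers from the paper:

```python
import numpy as np

K = 5e6         # anisotropy energy density, J/m^3 (illustrative FePt-like value)
kB = 1.381e-23  # Boltzmann constant, J/K
T = 300.0       # room temperature, K
tau0 = 1e-9     # attempt time, s (typical assumed value)

for d_nm in (3, 4, 5, 6):
    V = (4 / 3) * np.pi * (d_nm * 1e-9 / 2) ** 3  # particle volume, m^3
    barrier = K * V / (kB * T)                    # energy barrier in units of kBT
    tau = tau0 * np.exp(barrier)                  # Neel-Arrhenius flip time
    print(f"{d_nm} nm particle: KV/kBT = {barrier:5.1f}, flip time ~ {tau:.1e} s")
```

The exponential makes the point: a few nanometers of extra diameter turns a bit that flips in a fraction of a second into one that is stable for years.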

To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.

To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.

“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.

As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received the Association for Computing Machinery’s Gordon Bell Prize for high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures.

Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.

“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.

Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.

In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible — as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms.

Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.

“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.

Finally, here’s a link to and a citation for the paper,

Deciphering chemical order/disorder and material properties at the single-atom level by Yongsoo Yang, Chien-Chun Chen, M. C. Scott, Colin Ophus, Rui Xu, Alan Pryor, Li Wu, Fan Sun, Wolfgang Theis, Jihan Zhou, Markus Eisenbach, Paul R. C. Kent, Renat F. Sabirianov, Hao Zeng, Peter Ercius, & Jianwei Miao. Nature 542, 75–79 (02 February 2017) doi:10.1038/nature21042 Published online 01 February 2017

This paper is behind a paywall.

The physics of melting in two-dimensional systems

You might want to skip over the reference to snow as it doesn’t have much relevance to this story about ‘melting’, from a Feb. 1, 2017 news item on Nanowerk (Note: A link has been removed),

Snow falls in winter and melts in spring, but what drives the phase change in between?
Although melting is a familiar phenomenon encountered in everyday life, playing a part in many industrial and commercial processes, much remains to be discovered about this transformation at a fundamental level.

In 2015, a team led by the University of Michigan’s Sharon Glotzer used high-performance computing at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory [ORNL] to study melting in two-dimensional (2-D) systems, a problem that could yield insights into surface interactions in materials important to technologies like solar panels, as well as into the mechanism behind three-dimensional melting. The team explored how particle shape affects the physics of a solid-to-fluid melting transition in two dimensions.

Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility, the team’s [latest?] work revealed that the shape and symmetry of particles can dramatically affect the melting process (“Shape and symmetry determine two-dimensional melting transitions of hard regular polygons”). This fundamental finding could help guide researchers in search of nanoparticles with desirable properties for energy applications.

There is a video of the ‘melting’ process, but I have to confess to finding it a bit enigmatic,

A Feb. 1, 2017 ORNL news release (also on EurekAlert), which originated the news item, provides more detail about the research,

To tackle the problem, Glotzer’s team needed a supercomputer capable of simulating systems of up to 1 million hard polygons, simple particles used as stand-ins for atoms, ranging from triangles to 14-sided shapes. Unlike traditional molecular dynamics simulations that attempt to mimic nature, hard polygon simulations give researchers a pared-down environment in which to evaluate shape-influenced physics.

“Within our simulated 2-D environment, we found that the melting transition follows one of three different scenarios depending on the shape of the systems’ polygons,” University of Michigan research scientist Joshua Anderson said. “Notably, we found that systems made up of hexagons perfectly follow a well-known theory for 2-D melting, something that hasn’t been described until now.”

Shifting Shape Scenarios

In 3-D systems such as a thinning icicle, melting takes the form of a first-order phase transition. This means that collections of molecules within these systems exist in either solid or liquid form, with no in-between, in the presence of latent heat, the energy that fuels a solid-to-fluid phase change. In 2-D systems, such as thin-film materials used in batteries and other technologies, melting can be more complex, sometimes exhibiting an intermediate phase known as the hexatic phase.

The hexatic phase, a state characterized as a halfway point between an ordered solid and a disordered liquid, was first theorized in the 1970s by researchers John Kosterlitz, David Thouless, Burt Halperin, David Nelson, and Peter Young. The phase is a principal feature of the KTHNY theory, a 2-D melting theory posited by the researchers (and named based on the first letters of their last names). In 2016 Kosterlitz and Thouless were awarded the Nobel Prize in Physics, along with physicist Duncan Haldane, for their contributions to 2-D materials research.

At the molecular level, solid, hexatic, and liquid systems are defined by the arrangement of their atoms. In a crystalline solid, two types of order are present: translational and orientational. Translational order describes the well-defined paths between atoms over distances, like blocks in a carefully constructed Jenga tower. Orientational order describes the relational and clustered order shared between atoms and groups of atoms over distances. Think of that same Jenga tower turned askew after several rounds of play. The general shape of the tower remains, but its order is now fragmented.

The hexatic phase has no translational order but possesses orientational order. (A liquid has neither translational nor orientational order but exhibits short-range order, meaning any atom will have some average number of neighbors nearby but with no predictable order.)
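
In simulation data, that orientational order is usually measured with the sixfold bond-orientational order parameter ψ6: for each particle, average exp(6iθ) over the angles θ to its neighbors. A minimal sketch; the fixed cutoff radius is an assumption (production analyses often use Voronoi neighbors instead):

```python
import numpy as np
from scipy.spatial import cKDTree

def psi6(points, r_cut):
    """Local hexatic order parameter: psi6_i = <exp(6j*theta)> over the
    bond angles theta from particle i to its neighbors. |psi6| is near 1
    in a hexagonal solid or hexatic phase and near 0 in a liquid."""
    points = np.asarray(points)
    tree = cKDTree(points)
    out = np.zeros(len(points), dtype=complex)
    for i, nbrs in enumerate(tree.query_ball_point(points, r_cut)):
        nbrs = [j for j in nbrs if j != i]   # drop self-match
        if not nbrs:
            continue
        d = points[nbrs] - points[i]
        theta = np.arctan2(d[:, 1], d[:, 0])  # bond angles
        out[i] = np.exp(6j * theta).mean()
    return out
```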

Deducing the presence of a hexatic phase requires a leadership-class computer that can calculate large hard-particle systems. Glotzer’s team gained access to the OLCF’s 27-petaflop Titan through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, running its GPU-accelerated HOOMD-blue code to maximize time on the machine.

On Titan, HOOMD-blue used 64 GPUs for each massively parallel Monte Carlo simulation of up to 1 million particles. Researchers explored 11 different shape systems, applying an external pressure to push the particles together. Each system was simulated at 21 different densities, with the lowest densities representing a fluid state and the highest densities a solid state.
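
HOOMD-blue's hard-particle Monte Carlo (HPMC) module runs this kind of simulation on GPUs with polygon overlap checks; the core logic, though, is the classic accept/reject move. Below is a serial hard-disk toy in plain Python to show that logic, not HOOMD's actual API; the packing fraction, step size, and lattice start are illustrative:

```python
import numpy as np

def mc_hard_disks(n_side=10, phi=0.7, sweeps=200, step=0.1, seed=0):
    """Hard-disk Monte Carlo in a periodic box: propose a random move for
    one particle, reject it if any overlap results. Disks have diameter 1."""
    rng = np.random.default_rng(seed)
    n = n_side ** 2
    L = np.sqrt(n * np.pi / (4 * phi))              # box edge for packing fraction phi
    g = (np.arange(n_side) + 0.5) * (L / n_side)
    pos = np.array([(x, y) for x in g for y in g])  # square-lattice start
    accepted = 0
    for _ in range(sweeps):
        for i in rng.permutation(n):
            trial = (pos[i] + rng.uniform(-step, step, 2)) % L
            d = pos - trial
            d -= L * np.round(d / L)                # minimum-image convention
            r2 = np.einsum('ij,ij->i', d, d)
            r2[i] = np.inf                          # ignore distance to self
            if r2.min() >= 1.0:                     # no overlap: accept the move
                pos[i] = trial
                accepted += 1
    return pos, accepted / (sweeps * n)

pos, acceptance = mc_hard_disks()
print(f"acceptance ratio at phi=0.7: {acceptance:.2f}")
```

Conceptually, swapping the disk overlap test for a polygon intersection test is all that separates this toy from the hard-polygon simulations described above.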

The simulations demonstrated multiple melting scenarios hinging on the polygons’ shape. Systems with polygons of seven sides or more closely followed the melting behavior of hard disks, or circles, exhibiting a continuous phase transition from the solid to the hexatic phase and a first-order phase transition from the hexatic to the liquid phase. A continuous phase transition means a constantly changing area in response to a changing external pressure. A first-order phase transition is characterized by a discontinuity in which the area jumps across the phase transition in response to the changing external pressure. The team found that pentagons and fourfold pentilles, irregular pentagons with two different edge lengths, exhibit a first-order solid-to-liquid phase transition.

The most significant finding, however, emerged from hexagon systems, which perfectly followed the phase transition described by the KTHNY theory. In this scenario, the particles shift from solid to hexatic and hexatic to fluid in a perfect continuous phase transition pattern.

“It was actually sort of surprising that no one else has found that until now,” Anderson said, “because it seems natural that the hexagon, with its six sides, and the honeycomb-like hexagonal arrangement would be a perfect match for this theory” in which the hexatic phase generally contains sixfold orientational order.

Glotzer’s team, which recently received a 2017 INCITE allocation, is now applying its leadership-class computing prowess to tackle phase transitions in 3-D. The team is focusing on how fluid particles crystallize into complex colloids—mixtures in which particles are suspended throughout another substance. Common examples of colloids include milk, paper, fog, and stained glass.

“We’re planning on using Titan to study how complexity can arise from these simple interactions, and to do that we’re actually going to look at how the crystals grow and study the kinetics of how that happens,” said Anderson.

There is a paper on arXiv,

Shape and symmetry determine two-dimensional melting transitions of hard regular polygons by Joshua A. Anderson, James Antonaglia, Jaime A. Millan, Michael Engel, Sharon C. Glotzer.
(Submitted on 2 Jun 2016 (v1), last revised 23 Dec 2016 (this version, v2)) arXiv:1606.00687 [cond-mat.soft] (or arXiv:1606.00687v2)

This paper is open access and open to public peer review.

Nano-spike catalysts offer one step conversion of carbon dioxide to ethanol

An Oct. 12, 2016 news item on ScienceDaily makes an exciting announcement, if carbon-dioxide-conversion-to-fuel is one of your pet topics,

In a new twist to waste-to-fuel technology, scientists at the Department of Energy’s Oak Ridge National Laboratory [ORNL] have developed an electrochemical process that uses tiny spikes of carbon and copper to turn carbon dioxide, a greenhouse gas, into ethanol. Their finding, which involves nanofabrication and catalysis science, was serendipitous.

An Oct. 12, 2016 ORNL news release, which originated the news item, explains in greater detail,

“We discovered somewhat by accident that this material worked,” said ORNL’s Adam Rondinone, lead author of the team’s study published in ChemistrySelect. “We were trying to study the first step of a proposed reaction when we realized that the catalyst was doing the entire reaction on its own.”

The team used a catalyst made of carbon, copper and nitrogen and applied voltage to trigger a complicated chemical reaction that essentially reverses the combustion process. With the help of the nanotechnology-based catalyst which contains multiple reaction sites, the solution of carbon dioxide dissolved in water turned into ethanol with a yield of 63 percent. Typically, this type of electrochemical reaction results in a mix of several different products in small amounts.
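
The 63 percent figure is a faradaic efficiency: the fraction of the electrical charge that ends up in the desired product. Taking the twelve-electron half-reaction 2CO2 + 12H+ + 12e- → C2H5OH + 3H2O, converting current into ethanol mass is straightforward arithmetic; a sketch, with the current and duration as made-up example inputs:

```python
F = 96485.0        # Faraday constant, C/mol
M_ETHANOL = 46.07  # molar mass of ethanol, g/mol
N_ELECTRONS = 12   # electrons per ethanol molecule in the half-reaction
FARADAIC_EFF = 0.63

def ethanol_grams(current_amps, hours):
    """Grams of ethanol produced at 63% faradaic efficiency."""
    charge = current_amps * 3600.0 * hours            # total charge, coulombs
    mol = FARADAIC_EFF * charge / (N_ELECTRONS * F)   # moles of ethanol
    return mol * M_ETHANOL

# Hypothetical example: one ampere sustained for a day yields ~2 grams.
print(f"{ethanol_grams(1.0, 24):.2f} g")
```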

“We’re taking carbon dioxide, a waste product of combustion, and we’re pushing that combustion reaction backwards with very high selectivity to a useful fuel,” Rondinone said. “Ethanol was a surprise — it’s extremely difficult to go straight from carbon dioxide to ethanol with a single catalyst.”

The catalyst’s novelty lies in its nanoscale structure, consisting of copper nanoparticles embedded in carbon spikes. This nano-texturing approach avoids the use of expensive or rare metals such as platinum that limit the economic viability of many catalysts.

“By using common materials, but arranging them with nanotechnology, we figured out how to limit the side reactions and end up with the one thing that we want,” Rondinone said.

The researchers’ initial analysis suggests that the spiky textured surface of the catalysts provides ample reactive sites to facilitate the carbon dioxide-to-ethanol conversion.

“They are like 50-nanometer lightning rods that concentrate electrochemical reactivity at the tip of the spike,” Rondinone said.

Given the technique’s reliance on low-cost materials and an ability to operate at room temperature in water, the researchers believe the approach could be scaled up for industrially relevant applications. For instance, the process could be used to store excess electricity generated from variable power sources such as wind and solar.

“A process like this would allow you to consume extra electricity when it’s available to make and store as ethanol,” Rondinone said. “This could help to balance a grid supplied by intermittent renewable sources.”

The researchers plan to refine their approach to improve the overall production rate and further study the catalyst’s properties and behavior.

Here’s a link to and a citation for the paper,

High-Selectivity Electrochemical Conversion of CO2 to Ethanol using a Copper Nanoparticle/N-Doped Graphene Electrode by Yang Song, Rui Peng, Dale Hensley, Peter Bonnesen, Liangbo Liang, Zili Wu, Harry Meyer III, Miaofang Chi, Cheng Ma, Bobby Sumpter and Adam Rondinone. ChemistrySelect DOI: 10.1002/slct.201601169 First published: 28 September 2016

This paper is open access.

New elements named (provisionally)

They say it’s provisionally but I suspect it would take an act of god for a change in the proposed names. From a June 8, 2016 blog posting (scroll down about 25% of the way) on the International Union of Pure and Applied Chemistry (IUPAC) website,

IUPAC is naming the four new elements nihonium, moscovium, tennessine, and oganesson

Following earlier reports that the claims for discovery of these elements have been fulfilled [1, 2], the discoverers have been invited to propose names and the following are now disclosed for public review:

  • Nihonium and symbol Nh, for the element 113,
  • Moscovium and symbol Mc, for the element 115,
  • Tennessine and symbol Ts, for the element 117, and
  • Oganesson and symbol Og, for the element 118.

The IUPAC Inorganic Chemistry Division has reviewed and considered these proposals and recommends these for acceptance. A five-month public review is now set, expiring 8 November 2016, prior to the formal approval by the IUPAC Council.

I can’t figure out how someone from the public might offer a comment about the names.

There’s more from the posting about what kinds of names are acceptable and how the names in this set of four were arrived at,

The guidelines for naming the elements were recently revised [3] and shared with the discoverers to assist in their proposals. Keeping with tradition, newly discovered elements can be named after:
(a) a mythological concept or character (including an astronomical object),
(b) a mineral or similar substance,
(c) a place, or geographical region,
(d) a property of the element, or
(e) a scientist.
The names of all new elements in general would have an ending that reflects and maintains historical and chemical consistency. This would be in general “-ium” for elements belonging to groups 1-16, “-ine” for elements of group 17 and “-on” for elements of group 18. Finally, the names for new chemical elements in English should allow proper translation into other major languages.
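
Those suffix rules are mechanical enough to write down as code; a trivial sketch of the convention quoted above:

```python
def element_suffix(group):
    """Suffix convention quoted above: '-ium' for groups 1-16,
    '-ine' for group 17, '-on' for group 18."""
    if 1 <= group <= 16:
        return "-ium"
    if group == 17:
        return "-ine"
    if group == 18:
        return "-on"
    raise ValueError("group must be between 1 and 18")

# nihonium (group 13), moscovium (15), tennessine (17), oganesson (18)
print([element_suffix(g) for g in (13, 15, 17, 18)])
```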

For the element with atomic number 113 the discoverers at RIKEN Nishina Center for Accelerator-Based Science (Japan) proposed the name nihonium and the symbol Nh. Nihon is one of the two ways to say “Japan” in Japanese, and literally means “the Land of the Rising Sun”. The name is proposed to make a direct connection to the nation where the element was discovered. Element 113 is the first element to have been discovered in an Asian country. While presenting this proposal, the team headed by Professor Kosuke Morita pays homage to the trailblazing work by Masataka Ogawa done in 1908 surrounding the discovery of element 43. The team also hopes that pride and faith in science will displace the lost trust of those who suffered from the 2011 Fukushima nuclear disaster.

For the element with atomic number 115 the name proposed is moscovium with the symbol Mc and for element with atomic number 117, the name proposed is tennessine with the symbol Ts. These are in line with tradition honoring a place or geographical region and are proposed jointly by the discoverers at the Joint Institute for Nuclear Research, Dubna (Russia), Oak Ridge National Laboratory (USA), Vanderbilt University (USA) and Lawrence Livermore National Laboratory (USA).

Moscovium is in recognition of the Moscow region and honors the ancient Russian land that is the home of the Joint Institute for Nuclear Research, where the discovery experiments were conducted using the Dubna Gas-Filled Recoil Separator in combination with the heavy ion accelerator capabilities of the Flerov Laboratory of Nuclear Reactions.

Tennessine is in recognition of the contribution of the Tennessee region, including Oak Ridge National Laboratory, Vanderbilt University, and the University of Tennessee at Knoxville, to superheavy element research, including the production and chemical separation of unique actinide target materials for superheavy element synthesis at ORNL’s High Flux Isotope Reactor (HFIR) and Radiochemical Engineering Development Center (REDC).

For the element with atomic number 118 the collaborating teams of discoverers at the Joint Institute for Nuclear Research, Dubna (Russia) and Lawrence Livermore National Laboratory (USA) proposed the name oganesson and symbol Og. The proposal is in line with the tradition of honoring a scientist and recognizes Professor Yuri Oganessian (born 1933) for his pioneering contributions to transactinoid elements research. His many achievements include the discovery of superheavy elements and significant advances in the nuclear physics of superheavy nuclei including experimental evidence for the “island of stability”.

“It is a pleasure to see that specific places and names (country, state, city, and scientist) related to the new elements is recognized in these four names. Although these choices may perhaps be viewed by some as slightly self-indulgent, the names are completely in accordance with IUPAC rules”, commented Jan Reedijk, who corresponded with the various laboratories and invited the discoverers to make proposals. “In fact, I see it as thrilling to recognize that international collaborations were at the core of these discoveries and that these new names also make the discoveries somewhat tangible.”

So, let’s welcome Tennessine, Moscovium, Nihonium, and Oganesson to the periodic table of elements. I imagine Tom Lehrer’s Elements Song will be updated soon. In the meantime we have this from ASAP Science, which includes the new elements under the placeholder names used when their addition was first publicized in January 2016 (all of the placeholder names start with U),

Enjoy!

Corrections: Hybrid Photonic-Nanomechanical Force Microscopy uses vibration for better chemical analysis

*ETA  Nov. 4, 2015: I’m apologizing to anyone wishing to read this posting as it’s a bit of a mess. I deeply regret mishandling the situation. In future, I shall not be taking any corrections from individual researchers to materials such as news releases that have been issued by an institution. Whether or not the individual researchers are happy with how their contributions or how a colleague’s contributions or how their home institutions have been characterized is a matter for them and their home institutions.

The August 10, 2015 ORNL news release with all the correct details has been added to the end of this post.*

A researcher at the University of Central Florida (UCF) has developed a microscope that uses vibrations for better analysis of chemical composition. From an Aug. 10, 2015 news item on Nanowerk,

It’s a discovery that could have promising implications for fields as varied as biofuel production, solar energy, opto-electronic devices, pharmaceuticals and medical research.

“What we’re interested in is the tools that allow us to understand the world at a very small scale,” said UCF professor Laurene Tetard, formerly of the Oak Ridge National Laboratory. “Not just the shape of the object, but its mechanical properties, its composition and how it evolves in time.”

An Aug. 10, 2015 UCF news release (also on EurekAlert), which originated the news item, describes the limitations of atomic force microscopy and gives a few details about the hybrid microscope (Note: A link has been removed),

For more than two decades, scientists have used atomic force microscopy – a probe that acts like an ultra-sensitive needle on a record player – to determine the surface characteristics of samples at the microscopic scale. A “needle” that comes to an atoms-thin point traces a path over a sample, mapping the surface features at a sub-cellular level [nanoscale].

But that technology has its limits. It can determine the topographical characteristics of [a] sample, but it can’t identify its composition. And with the standard tools currently used for chemical mapping, anything smaller than roughly half a micron is going to look like a blurry blob, so researchers are out of luck if they want to study what’s happening at the molecular level.

A team led by Tetard has come up with a hybrid form of that technology that produces a much clearer chemical image. As described Aug. 10 in the journal Nature Nanotechnology, Hybrid Photonic-Nanomechanical Force Microscopy (HPFM) can discern a sample’s topographic characteristics together with the chemical properties at a much finer scale.

The HPFM method is able to identify materials based on differences in the vibration produced when they’re subjected to different wavelengths of light – essentially a material’s unique “fingerprint.”
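
The release doesn't describe how the fingerprints are matched, but identifying a material from a response-versus-wavelength curve is, generically, a spectral-matching problem. A hypothetical sketch using cosine similarity against a reference library; this illustrates the idea only and is not the HPFM pipeline:

```python
import numpy as np

def best_match(measured, library):
    """Rank reference fingerprints by cosine similarity to a measured
    response-versus-wavelength curve; higher score = closer match."""
    m = measured / np.linalg.norm(measured)
    scores = {name: float(m @ (ref / np.linalg.norm(ref)))
              for name, ref in library.items()}
    return max(scores, key=scores.get), scores

# Hypothetical usage with made-up four-point spectra:
library = {"cellulose": np.array([0.9, 0.4, 0.1, 0.3]),
           "lignin":    np.array([0.2, 0.8, 0.7, 0.1])}
name, scores = best_match(np.array([0.85, 0.45, 0.15, 0.25]), library)
print(name, scores)
```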

“What we are developing is a completely new way of making that detection possible,” said Tetard, who has joint appointments to UCF’s Physics Department, Material Science and Engineering Department and the NanoScience Technology Center.

The researchers proved the effectiveness of HPFM while examining samples from an eastern cottonwood tree, a potential source of biofuel. By examining the plant samples at the nanoscale, the researchers for the first time were able to determine the molecular traits of both untreated and chemically processed cottonwood inside the plant cell walls.

The research team included Tetard; Ali Passian, R.H. Farahi and Brian Davison, all of Oak Ridge National Laboratory; and Thomas Thundat of the University of Alberta.

Long term, the results will help reveal better methods for producing the most biofuel from the cottonwood, a potential boon for industry. Likewise, the new method could be used to examine samples of myriad plants to determine whether they’re good candidates for biofuel production.

Potential uses of the technology go beyond the world of biofuel. Continued research may allow HPFM to be used as a probe so, for instance, it would be possible to study the effect of new treatments being developed to save plants such as citrus trees from bacterial diseases rapidly decimating the citrus industry, or study fundamental photonically-induced processes in complex systems such as in solar cell materials or opto-electronic devices.

Here’s a link to and a citation for the paper,

Opto-nanomechanical spectroscopic material characterization by L. Tetard, A. Passian, R. H. Farahi, T. Thundat, & B. H. Davison. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.168 Published online 10 August 2015

This paper is behind a paywall.

*ETA August 27, 2015:

August 10, 2015 ORNL news release (Note: Funding information and a link to the paper [previously given] have been removed):

A microscope being developed at the Department of Energy’s Oak Ridge National Laboratory will allow scientists studying biological and synthetic materials to simultaneously observe chemical and physical properties on and beneath the surface.

The Hybrid Photonic Mode-Synthesizing Atomic Force Microscope is unique, according to principal investigator Ali Passian of ORNL’s Quantum Information System group. As a hybrid, the instrument, described in a paper published in Nature Nanotechnology, combines the disciplines of nanospectroscopy and nanomechanical microscopy.

“Our microscope offers a noninvasive rapid method to explore materials simultaneously for their chemical and physical properties,” Passian said. “It allows researchers to study the surface and subsurface of synthetic and biological samples, which is a capability that until now didn’t exist.”

ORNL’s instrument retains all of the advantages of an atomic force microscope while simultaneously offering the potential for discoveries through its high resolution and subsurface spectroscopic capabilities.

“The originality of the instrument and technique lies in its ability to provide information about a material’s chemical composition in the broad infrared spectrum while showing the morphology of a material’s interior and exterior with nanoscale – a billionth of a meter – resolution,” Passian said.

Researchers will be able to study samples ranging from engineered nanoparticles and nanostructures to naturally occurring biological polymers, tissues and plant cells.

The first application as part of DOE’s BioEnergy Science Center was in the examination of plant cell walls under several treatments to provide submicron characterization. The plant cell wall is a layered nanostructure of biopolymers such as cellulose. Scientists want to convert such biopolymers to free the useful sugars and release energy.

An earlier instrument, also invented at ORNL, provided imaging of poplar cell wall structures that yielded unprecedented topological information, advancing fundamental research in sustainable biofuels.

Because of this new instrument’s impressive capabilities, the research team envisions broad applications.

“An urgent need exists for new platforms that can tackle the challenges of subsurface and chemical characterization at the nanometer scale,” said co-author Rubye Farahi. “Hybrid approaches such as ours bring together multiple capabilities, in this case, spectroscopy and high-resolution microscopy.”

Looking inside, the hybrid microscope consists of a photonic module that is incorporated into a mode-synthesizing atomic force microscope. The modular aspect of the system makes it possible to accommodate various radiation sources such as tunable lasers and non-coherent monochromatic or polychromatic sources.

ETA2 August 27, 2015: I’ve received an email from one of the paper’s authors (RH Farahi of the US Oak Ridge National Laboratory [ORNL]) who claims there are some inaccuracies in this piece. The news release supplied by the University of Central Florida states that Dr. Tetard led the team, and that is not so. According to Dr. Farahi, she held a postdoctoral position on the team, which she left two years ago. You might also get the impression that some of the work was performed at the University of Central Florida. That is not so, according to Dr. Farahi. As a courtesy, Dr. Tetard was retained as first author of the paper.

*Nov. 4, 2015: I suspect some of the misunderstanding was due to overeagerness and/or time pressures. Whoever wrote the news release may have made some assumptions. It’s very easy to make a mistake when talking to an ebullient scientist who can unintentionally lead you to believe something that’s not so. I worked in a high tech company and believed that there was some new software being developed, which turned out to be a case of high hopes. Luckily, I said something that triggered a rapid rebuttal to the fantasies. Getting back to this situation, other contributing factors could include the writer not having time to get the news release reviewed by the scientist, or the scientist skimming the release and missing a few bits due to time pressure.*

Sealing graphene’s defects to make a better filtration device

Making a graphene filter that allows water to pass through while screening out salt and/or noxious materials has been more challenging than one might think. According to a May 7, 2015 news item on Nanowerk, graphene filters can be ‘leaky’,

For faster, longer-lasting water filters, some scientists are looking to graphene — thin, strong sheets of carbon — to serve as ultrathin membranes, filtering out contaminants to quickly purify high volumes of water.

Graphene’s unique properties make it a potentially ideal membrane for water filtration or desalination. But there’s been one main drawback to its wider use: Making membranes in one-atom-thick layers of graphene is a meticulous process that can tear the thin material — creating defects through which contaminants can leak.

Now engineers at MIT [Massachusetts Institute of Technology], Oak Ridge National Laboratory, and King Fahd University of Petroleum and Minerals (KFUPM) have devised a process to repair these leaks, filling cracks and plugging holes using a combination of chemical deposition and polymerization techniques. The team then used a process it developed previously to create tiny, uniform pores in the material, small enough to allow only water to pass through.

A May 8, 2015 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

Combining these two techniques, the researchers were able to engineer a relatively large defect-free graphene membrane — about the size of a penny. The membrane’s size is significant: To be exploited as a filtration membrane, graphene would have to be manufactured at a scale of centimeters, or larger.

In experiments, the researchers pumped water through a graphene membrane treated with both defect-sealing and pore-producing processes, and found that water flowed through at rates comparable to current desalination membranes. The graphene was able to filter out most large-molecule contaminants, such as magnesium sulfate and dextran.

Rohit Karnik, an associate professor of mechanical engineering at MIT, says the group’s results, published in the journal Nano Letters, represent the first success in plugging graphene’s leaks.

“We’ve been able to seal defects, at least on the lab scale, to realize molecular filtration across a macroscopic area of graphene, which has not been possible before,” Karnik says. “If we have better process control, maybe in the future we don’t even need defect sealing. But I think it’s very unlikely that we’ll ever have perfect graphene — there will always be some need to control leakages. These two [techniques] are examples which enable filtration.”

Sean O’Hern, a former graduate research assistant at MIT, is the paper’s first author. Other contributors include MIT graduate student Doojoon Jang, former graduate student Suman Bose, and Professor Jing Kong.

A delicate transfer

“The current types of membranes that can produce freshwater from saltwater are fairly thick, on the order of 200 nanometers,” O’Hern says. “The benefit of a graphene membrane is, instead of being hundreds of nanometers thick, we’re on the order of three angstroms — 600 times thinner than existing membranes. This enables you to have a higher flow rate over the same area.”

O’Hern and Karnik have been investigating graphene’s potential as a filtration membrane for the past several years. In 2009, the group began fabricating membranes from graphene grown on copper — a metal that supports the growth of graphene across relatively large areas. However, copper is impermeable, requiring the group to transfer the graphene to a porous substrate following fabrication.

However, O’Hern noticed that this transfer process would create tears in graphene. What’s more, he observed intrinsic defects created during the growth process, resulting perhaps from impurities in the original material.

Plugging graphene’s leaks

To plug graphene’s leaks, the team came up with a technique to first tackle the smaller intrinsic defects, then the larger transfer-induced defects. For the intrinsic defects, the researchers used a process called “atomic layer deposition,” placing the graphene membrane in a vacuum chamber, then pulsing in a hafnium-containing chemical that does not normally interact with graphene. However, if the chemical comes in contact with a small opening in graphene, it will tend to stick to that opening, attracted by the area’s higher surface energy.

The team applied several rounds of atomic layer deposition, finding that the deposited hafnium oxide successfully filled in graphene’s nanometer-scale intrinsic defects. However, O’Hern realized that using the same process to fill in much larger holes and tears — on the order of hundreds of nanometers — would require too much time.

Instead, he and his colleagues came up with a second technique to fill in larger defects, using a process called “interfacial polymerization” that is often employed in membrane synthesis. After they filled in graphene’s intrinsic defects, the researchers submerged the membrane at the interface of two solutions: a water bath and an organic solvent that, like oil, does not mix with water.

In the two solutions, the researchers dissolved two different molecules that can react to form nylon. Once O’Hern placed the graphene membrane at the interface of the two solutions, he observed that nylon plugs formed only in tears and holes — regions where the two molecules could come in contact because of tears in the otherwise impermeable graphene — effectively sealing the remaining defects.

Using a technique they developed last year, the researchers then etched tiny, uniform holes in graphene — small enough to let water molecules through, but not larger contaminants. In experiments, the group tested the membrane with water containing several different molecules, including salt, and found that the membrane rejected up to 90 percent of larger molecules. However, it let salt through at a faster rate than water.

The preliminary tests suggest that graphene may be a viable alternative to existing filtration membranes, although Karnik says techniques to seal its defects and control its permeability will need further improvements.

“Water desalination and nanofiltration are big applications where, if things work out and this technology withstands the different demands of real-world tests, it would have a large impact,” Karnik says. “But one could also imagine applications for fine chemical- or biological-sample processing, where these membranes could be useful. And this is the first report of a centimeter-scale graphene membrane that does any kind of molecular filtration. That’s exciting.”

De-en Jiang, an assistant professor of chemistry at the University of California at Riverside, sees the defect-sealing technique as “a great advance toward making graphene filtration a reality.”

“The two-step technique is very smart: sealing the defects while preserving the desired pores for filtration,” says Jiang, who did not contribute to the research. “This would make the scale-up much easier. One can produce a large graphene membrane first, not worrying about the defects, which can be sealed later.”

I have featured graphene and water desalination work from these researchers at MIT before, in a Feb. 27, 2014 posting. Interestingly, there was no mention of problems with defects in the news release highlighting this previous work.

Here’s a link to and a citation for the latest paper,

Nanofiltration across Defect-Sealed Nanoporous Monolayer Graphene by Sean C. O’Hern, Doojoon Jang, Suman Bose, Juan-Carlos Idrobo, Yi Song, Tahar Laoui, Jing Kong, and Rohit Karnik. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.5b00456 Publication Date (Web): April 27, 2015


This paper is behind a paywall.

Citrus canker, Florida, and Zinkicide

First found in Florida orchards in 2005, citrus greening poses a serious threat to the US state’s fruit industry. An April 2, 2015 news item on phys.org describes a possible solution to the problem,

Since it was discovered in South Florida in 2005, the plague of citrus greening has spread to nearly every grove in the state, stoking fears among growers that the $10.7 billion-a-year industry may someday disappear.

Now the U.S. Department of Agriculture has awarded the University of Florida a $4.6 million grant aimed at testing a potential new weapon in the fight against citrus greening: Zinkicide, a bactericide invented by a nanoparticle researcher at the University of Central Florida.

An April 2, 2015 University of Central Florida news release by Mark Schlueb (also on EurekAlert), which originated the news item, describes the problem and the solution (Zinkicide),

Citrus greening – also known by its Chinese name, Huanglongbing, or HLB – causes orange, grapefruit and other citrus trees to produce small, bitter fruit that drop prematurely and is unsuitable for sale or juice. Eventually, infected trees die. Florida has lost tens of thousands of acres to the disease.

“It’s a hundred-year-old disease, but to date there is no cure. It’s a killer, a true killer for the citrus industry,” said Swadeshmukul Santra, associate professor in the NanoScience Technology Center at UCF.

The bacteria that causes HLB is carried by the Asian citrus psyllid, a tiny insect that feeds on leaves and stems of infected citrus trees, then carries the bacteria to healthy trees.

Zinkicide, developed by Santra, is designed to kill the bacteria.

The $4.6 million grant is the largest of five totaling $23 million that were recently announced by the USDA’s National Institute of Food and Agriculture.

The evaluation of Zinkicide is a multi-institute project involving 13 investigators from six institutions. Evan Johnson of UF’s [University of Florida] Citrus Research and Education Center at Lake Alfred is the project director, and there are a dozen co-principal investigators from UF, UCF, Oak Ridge National Laboratory (ORNL), Auburn University, New Mexico State University and The Ohio State University.

“Managing systemic diseases like HLB is a difficult challenge that has faced plant pathologists for many years,” said Johnson. “It is a privilege to work with an excellent team of researchers from many different disciplines with the goal of developing new tools that are both effective and safe.”

A portion of the grant money, $1.4 million, flows to UCF, where Santra leads a team that also includes Andre Gesquiere, Laurene Tetard and the Oak Ridge National Laboratory collaborator, Loukas Petridis.

HLB control is difficult because current bactericidal sprays, such as copper, simply leave a protective film on the outside of a plant. The insect-transmitted bacteria bypasses that barrier and lives inside a tree’s fruit, stems and roots, in the vascular tissue known as the phloem. There, it deprives the tree of carbohydrate and nutrients, causing root loss and ultimately death. For a bactericide to be effective against HLB, it must be able to move within the plant, too.

Zinkicide is a nanoparticle smaller than a single microscopic cell, and researchers are cautiously optimistic it will be able to move systemically from cell to cell to kill the bacteria that cause HLB.

“The bacteria hide inside the plant in the phloem region,” Santra said. “If you spray and your compound doesn’t travel to the phloem region, then you cannot treat HLB.”

Zinkicide is derived from ingredients which are found in plants, and is designed to break down and be metabolized after its job is done. [emphasis mine]

It’s the first step in a years-long process to bring a treatment to market. UF will lead five years of greenhouse and field trials on grapefruit and sweet orange to determine the effectiveness of Zinkicide and the best method and timing of application.

The project also includes research to study where the nanoparticles travel within the plant, understand how they interact with plant tissue and how long they remain before breaking down. [emphasis mine]

If effective, the bactericide could have a substantial role in combatting HLB in Florida, and in other citrus-producing states and countries. It would also likely be useful for control of other bacterial pathogens infecting other crops.

The Zinkicide project builds on previous collaborations between Santra and UF’s Jim Graham at the Citrus Research and Education Center to develop alternatives to copper for citrus canker control.

The previous Citrus Research and Development Foundation (CRDF)-funded Zinkicide project has issued three reports, for June 30, 2014, Sept. 30, 2014, and Dec. 31, 2014. This project’s completion date is May 2015. The reports, which are remarkably succinct, consisting of two paragraphs each, can be found here.

Oddly, the UCF news release doesn’t mention that Zinkicide is a zinc particulate (although it can be inferred), as noted on the CRDF project webpage; I’m guessing they mean a zinc nanoparticle. Happily, they are researching what happens after the bactericide has done its work on the infection. It’s good to see a life cycle approach to this research.

Water desalination by graphene and water purification by sapwood

I have two items about water. The first concerns a new technique from MIT (Massachusetts Institute of Technology) for desalination using graphene; the second, water purification using sapwood. From a Feb. 25, 2014 news release by David Chandler on EurekAlert,

Researchers have devised a way of making tiny holes of controllable size in sheets of graphene, a development that could lead to ultrathin filters for improved desalination or water purification.

The team of researchers at MIT, Oak Ridge National Laboratory, and in Saudi Arabia succeeded in creating subnanoscale pores in a sheet of the one-atom-thick material, which is one of the strongest materials known. …

The concept of using graphene, perforated by nanoscale pores, as a filter in desalination has been proposed and analyzed by other MIT researchers. The new work, led by graduate student Sean O’Hern and associate professor of mechanical engineering Rohit Karnik, is the first step toward actual production of such a graphene filter.

Making these minuscule holes in graphene — a hexagonal array of carbon atoms, like atomic-scale chicken wire — occurs in a two-stage process. First, the graphene is bombarded with gallium ions, which disrupt the carbon bonds. Then, the graphene is etched with an oxidizing solution that reacts strongly with the disrupted bonds — producing a hole at each spot where the gallium ions struck. By controlling how long the graphene sheet is left in the oxidizing solution, the MIT researchers can control the average size of the pores.
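The key control knob in that two-stage process is the etch time. To spell the relationship out, here’s a minimal sketch in Python; the initial defect size, the etch rate, and the linear-growth assumption are all placeholder values of mine, not numbers from the paper,

```python
# Toy model: mean pore diameter vs. etch time for ion-bombarded graphene.
# Purely illustrative -- initial defect size, linear growth, and etch
# rate are assumed placeholders, not values reported by the researchers.

INITIAL_DEFECT_NM = 0.2      # assumed size of a gallium-ion defect site
ETCH_RATE_NM_PER_MIN = 0.01  # assumed pore-growth rate in the oxidant

def mean_pore_diameter_nm(etch_minutes: float) -> float:
    """Estimated mean pore diameter (nm) after a given etch time."""
    return INITIAL_DEFECT_NM + ETCH_RATE_NM_PER_MIN * etch_minutes

for t in (0, 30, 60, 120):
    print(f"{t:3d} min etch -> ~{mean_pore_diameter_nm(t):.2f} nm pores")
```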

A big limitation in existing nanofiltration and reverse-osmosis desalination plants, which use filters to separate salt from seawater, is their low permeability: Water flows very slowly through them. The graphene filters, being much thinner, yet very strong, can sustain a much higher flow. “We’ve developed the first membrane that consists of a high density of subnanometer-scale pores in an atomically thin, single sheet of graphene,” O’Hern says.

For efficient desalination, a membrane must demonstrate “a high rejection rate of salt, yet a high flow rate of water,” he adds. One way of doing that is decreasing the membrane’s thickness, but this quickly renders conventional polymer-based membranes too weak to sustain the water pressure, or too ineffective at rejecting salt, he explains.

With graphene membranes, it becomes simply a matter of controlling the size of the pores, making them “larger than water molecules, but smaller than everything else,” O’Hern says — whether salt, impurities, or particular kinds of biochemical molecules.

The permeability of such graphene filters, according to computer simulations, could be 50 times greater than that of conventional membranes, as demonstrated earlier by a team of MIT researchers led by graduate student David Cohen-Tanugi of the Department of Materials Science and Engineering. But producing such filters with controlled pore sizes has remained a challenge. The new work, O’Hern says, demonstrates a method for actually producing such material with dense concentrations of nanometer-scale holes over large areas.
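As a back-of-the-envelope check on why thinness matters so much, assume permeance scales inversely with membrane thickness. That is a crude continuum approximation which breaks down at atomic scales (partly why the simulated figure is 50 times rather than hundreds), and both thickness values below are round illustrative numbers, not measurements from either study,

```python
# Naive 1/thickness scaling: single-layer graphene vs. a conventional
# reverse-osmosis active layer. Round illustrative numbers only.

GRAPHENE_THICKNESS_NM = 0.34  # thickness of single-layer graphene
RO_ACTIVE_LAYER_NM = 100.0    # rough order of magnitude, polyamide layer

relative_flux = RO_ACTIVE_LAYER_NM / GRAPHENE_THICKNESS_NM
print(f"Naive scaling suggests ~{relative_flux:.0f}x higher flux")  # ~294x
```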

“We bombard the graphene with gallium ions at high energy,” O’Hern says. “That creates defects in the graphene structure, and these defects are more chemically reactive.” When the material is bathed in a reactive oxidant solution, the oxidant “preferentially attacks the defects,” and etches away many holes of roughly similar size. O’Hern and his co-authors were able to produce a membrane with 5 trillion pores per square centimeter, well suited to use for filtration. “To better understand how small and dense these graphene pores are, if our graphene membrane were to be magnified about a million times, the pores would be less than 1 millimeter in size, spaced about 4 millimeters apart, and span over 38 square miles, an area roughly half the size of Boston,” O’Hern says.
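O’Hern’s magnification analogy is easy to verify. The quick check below assumes the pores sit on a uniform square grid (my simplification); the numbers land where he says they should,

```python
import math

# Sanity-check the magnification analogy: 5 trillion pores per square
# centimeter, blown up a million times.

PORES_PER_CM2 = 5e12
MAG = 1e6

# Center-to-center spacing, assuming a uniform square grid of pores.
spacing_cm = math.sqrt(1 / PORES_PER_CM2)                    # ~4.5e-7 cm
print(f"Magnified spacing: {spacing_cm * MAG * 10:.1f} mm")  # ~4.5 mm

# A subnanometer pore (under 1e-7 cm across), magnified:
print(f"Magnified pore size: under {1e-7 * MAG * 10:.0f} mm")  # under 1 mm

# A 1 cm x 1 cm membrane, magnified a million times on each side:
side_km = 1e6 / 1e5                  # 1,000,000 cm = 10 km per side
area_sq_miles = side_km**2 * 0.3861  # 100 km^2 is about 38.6 sq mi
print(f"Magnified area: ~{area_sq_miles:.1f} square miles")
```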

With this technique, the researchers were able to control the filtration properties of a single, centimeter-sized sheet of graphene: Without etching, no salt flowed through the defects formed by gallium ions. With just a little etching, the membranes started allowing positive salt ions to flow through. With further etching, the membranes allowed both positive and negative salt ions to flow through, but blocked the flow of larger organic molecules. With even more etching, the pores were large enough to allow everything to go through.
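That progression is essentially a size ladder, with a charge effect layered on top. The toy classifier below captures only the size part; the species diameters are illustrative order-of-magnitude values I’ve assumed (hydrated ion sizes, a generic organic molecule), and it ignores the charge selectivity that lets positive ions through before negative ones,

```python
# Toy size-only classifier for what passes through a pore of a given
# diameter. Species sizes are illustrative assumptions, not values
# from the paper, and real membranes also showed charge selectivity.

SPECIES_DIAMETER_NM = {
    "water": 0.28,
    "salt ion (hydrated)": 0.7,
    "organic molecule": 1.5,
}

def passes(pore_diameter_nm: float) -> list[str]:
    """Return the species small enough to fit through the pore."""
    return [name for name, d in SPECIES_DIAMETER_NM.items()
            if d < pore_diameter_nm]

for pore in (0.2, 0.5, 1.0, 2.0):  # no etch through heavy etching
    print(f"{pore:.1f} nm pores pass: {passes(pore)}")
```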

Scaling up the process to produce useful sheets of the permeable graphene, while maintaining control over the pore sizes, will require further research, O’Hern says.

Karnik says that such membranes, depending on their pore size, could find various applications. Desalination and nanofiltration may be the most demanding, since the membranes required for these plants would be very large. But for other purposes, such as selective filtration of molecules — for example, removal of unreacted reagents from DNA — even the very small filters produced so far might be useful.

“For biofiltration, size or cost are not as critical,” Karnik says. “For those applications, the current scale is suitable.”

Dexter Johnson, in a Feb. 26, 2014 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), provides some context for and insight into the work. Note: Links have been removed,

About 18 months ago, I wrote about an MIT project in which computer models demonstrated that graphene could act as a filter in the desalination of water through the reverse osmosis (RO) method. RO is slightly less energy intensive than the predominantly used multi-stage-flash process. The hope was that the nanopores of the graphene material would make the RO method even less energy intensive than current versions by making it easier to push the water through the filter membrane.

The models were promising, but other researchers in the field said at the time it was going to be a long road to translate a computer model to a real product.

It would seem that the MIT researchers agreed it was worth the effort and accepted the challenge to go from computer model to a real device as they announced this week that they had developed a method for creating selective pores in graphene that make it suitable for water desalination.

Here’s a link to and a citation for the paper,

Selective Ionic Transport through Tunable Subnanometer Pores in Single-Layer Graphene Membranes by Sean C. O’Hern, Michael S. H. Boutilier, Juan-Carlos Idrobo, Yi Song, Jing Kong, Tahar Laoui, Muataz Atieh, and Rohit Karnik. Nano Lett., Article ASAP. DOI: 10.1021/nl404118f. Publication Date (Web): February 3, 2014.

Copyright © 2014 American Chemical Society

This article is behind a paywall.

The second item is also from MIT and concerns a low-tech means of purifying water. From a Feb. 27, 2014 news item on Azonano,

If you’ve run out of drinking water during a lakeside camping trip, there’s a simple solution: Break off a branch from the nearest pine tree, peel away the bark, and slowly pour lake water through the stick. The improvised filter should trap any bacteria, producing fresh, uncontaminated water.

In fact, an MIT team has discovered that this low-tech filtration system can produce up to four liters of drinking water a day — enough to quench the thirst of a typical person.

In a paper published this week in the journal PLoS ONE, the researchers demonstrate that a small piece of sapwood can filter out more than 99 percent of the bacteria E. coli from water. They say the size of the pores in sapwood — which contains xylem tissue evolved to transport sap up the length of a tree — also allows water through while blocking most types of bacteria.

Co-author Rohit Karnik, an associate professor of mechanical engineering at MIT, says sapwood is a promising, low-cost, and efficient material for water filtration, particularly for rural communities where more advanced filtration systems are not readily accessible.

“Today’s filtration membranes have nanoscale pores that are not something you can manufacture in a garage very easily,” Karnik says. “The idea here is that we don’t need to fabricate a membrane, because it’s easily available. You can just take a piece of wood and make a filter out of it.”

The Feb. 26, 2014 news release on EurekAlert, which originated the news item, describes current filtration techniques and the advantages associated with this new low-tech approach,

There are a number of water-purification technologies on the market today, although many come with drawbacks: Systems that rely on chlorine treatment work well at large scales, but are expensive. Boiling water to remove contaminants requires a great deal of fuel to heat the water. Membrane-based filters, while able to remove microbes, are expensive, require a pump, and can become easily clogged.

Sapwood may offer a low-cost, small-scale alternative. The wood is composed of xylem, porous tissue that conducts sap from a tree’s roots to its crown through a system of vessels and pores. Each vessel wall is pockmarked with tiny pores called pit membranes, through which sap can essentially hopscotch, flowing from one vessel to another as it feeds structures along a tree’s length. The pores also limit cavitation, a process by which air bubbles can grow and spread in xylem, eventually killing a tree. The xylem’s tiny pores can trap bubbles, preventing them from spreading in the wood.

“Plants have had to figure out how to filter out bubbles but allow easy flow of sap,” Karnik observes. “It’s the same problem with water filtration where we want to filter out microbes but maintain a high flow rate. So it’s a nice coincidence that the problems are similar.”

The news release also describes the experimental procedure the scientists followed,

To study sapwood’s water-filtering potential, the researchers collected branches of white pine and stripped off the outer bark. They cut small sections of sapwood measuring about an inch long and half an inch wide, and mounted each in plastic tubing, sealed with epoxy and secured with clamps.

Before experimenting with contaminated water, the group used water mixed with red ink particles ranging from 70 to 500 nanometers in size. After all the liquid passed through, the researchers sliced the sapwood in half lengthwise, and observed that much of the red dye was contained within the very top layers of the wood, while the filtrate, or filtered water, was clear. This experiment showed that sapwood is naturally able to filter out particles bigger than about 70 nanometers.

However, in another experiment, the team found that sapwood was unable to separate out 20-nanometer particles from water, suggesting that there is a limit to the size of particles coniferous sapwood can filter.
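Taken together, the two ink experiments bracket the filter as a simple size cutoff somewhere between 20 and about 70 nanometers. Here is a minimal model using only that inferred cutoff; the test-particle list and rough sizes are my own illustrations,

```python
# Minimal size-cutoff model of the sapwood filter, inferred from the
# ink experiments: particles of ~70 nm and up are trapped, while
# 20 nm particles pass. Test particles and sizes are rough examples.

CUTOFF_NM = 70  # approximate smallest particle size the wood retained

test_particles_nm = {
    "20 nm ink particle": 20,
    "70 nm ink particle": 70,
    "500 nm ink particle": 500,
    "E. coli cell (~1000 nm)": 1000,
    "small virus (~30 nm)": 30,
}

for name, size in test_particles_nm.items():
    verdict = "trapped" if size >= CUTOFF_NM else "passes through"
    print(f"{name}: {verdict}")
```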

Finally, the team flowed inactivated, E. coli-contaminated water through the wood filter. When they examined the xylem under a fluorescent microscope, they saw that bacteria had accumulated around pit membranes in the first few millimeters of the wood. Counting the bacterial cells in the filtered water, the researchers found that the sapwood was able to filter out more than 99 percent of E. coli from water.
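For anyone used to filtration specifications, 99 percent removal corresponds to a log reduction value (LRV) of 2, the standard figure of merit for filters. A quick check, using made-up round cell counts chosen to illustrate 99 percent removal,

```python
import math

# Express the reported 99% E. coli removal as a log reduction value.
# The influent/effluent counts are illustrative round numbers.

def log_reduction(influent_count: float, effluent_count: float) -> float:
    """LRV = log10(cells in / cells out)."""
    return math.log10(influent_count / effluent_count)

print(f"LRV = {log_reduction(100_000, 1_000):.1f}")  # 2.0, i.e. 99% removal
```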

Karnik says sapwood likely can filter most types of bacteria, the smallest of which measure about 200 nanometers. However, the filter probably cannot trap most viruses, which are much smaller in size.

The researchers have future plans (from the news release),

Karnik says his group now plans to evaluate the filtering potential of other types of sapwood. In general, flowering trees have smaller pores than coniferous trees, suggesting that they may be able to filter out even smaller particles. However, vessels in flowering trees tend to be much longer, which may be less practical for designing a compact water filter.

Designers interested in using sapwood as a filtering material will also have to find ways to keep the wood damp, or to dry it while retaining the xylem function. In other experiments with dried sapwood, Karnik found that water either did not flow through well, or flowed through cracks, but did not filter out contaminants.

“There’s huge variation between plants,” Karnik says. “There could be much better plants out there that are suitable for this process. Ideally, a filter would be a thin slice of wood you could use for a few days, then throw it away and replace at almost no cost. It’s orders of magnitude cheaper than the high-end membranes on the market today.”

Here’s a link to and a citation for the paper,

Water Filtration Using Plant Xylem by Michael S. H. Boutilier, Jongho Lee, Valerie Chambers, Varsha Venkatesh, & Rohit Karnik. PLOS ONE. Published: February 26, 2014. DOI: 10.1371/journal.pone.0089934.

This paper is open access.

One final observation: two of the researchers (Michael S. H. Boutilier & Rohit Karnik) listed as authors on the graphene/water desalination paper are also listed on the low-tech sapwood paper.*

* The first sentence of this post originally stated that both items were graphene-related; it was changed to say ‘… using graphene and sapwood, respectively*’ on May 8, 2015.

The last sentence of this post was changed from

‘One final observation, two of the researchers listed as authors on the graphene/water desalination paper are also listed on the low-tech sapwood paper (Michael S. H. Boutilier & Rohit Karnik).’

to this

‘One final observation: two of the researchers (Michael S. H. Boutilier & Rohit Karnik) listed as authors on the graphene/water desalination paper are also listed on the low-tech sapwood paper.*’ for clarity on May 8, 2015.