Tag Archives: US National Science Foundation

Using only sunlight to desalinate water

The researchers seem to believe that this new desalination technique could be a game changer. From a June 20, 2017 news item on Azonano,

An off-grid technology that uses only the energy from sunlight to transform salt water into fresh drinking water has been developed as the outcome of a federally funded research effort.

The desalination system uses a combination of light-harvesting nanophotonics and membrane distillation technology and is considered to be the first major innovation from the Center for Nanotechnology Enabled Water Treatment (NEWT), which is a multi-institutional engineering research center located at Rice University.

NEWT’s “nanophotonics-enabled solar membrane distillation” technology (NESMD) integrates tried-and-true water treatment methods with cutting-edge nanotechnology capable of transforming sunlight to heat. …

A June 19, 2017 Rice University news release, which originated the news item, expands on the theme,

More than 18,000 desalination plants operate in 150 countries, but NEWT’s desalination technology is unlike any other used today.

“Direct solar desalination could be a game changer for some of the estimated 1 billion people who lack access to clean drinking water,” said Rice scientist and water treatment expert Qilin Li, a corresponding author on the study. “This off-grid technology is capable of providing sufficient clean water for family use in a compact footprint, and it can be scaled up to provide water for larger communities.”

The oldest method for making freshwater from salt water is distillation. Salt water is boiled, and the steam is captured and run through a condensing coil. Distillation has been used for centuries, but it requires complex infrastructure and is energy inefficient due to the amount of heat required to boil water and produce steam. More than half the cost of operating a water distillation plant is for energy.

An emerging technology for desalination is membrane distillation, where hot salt water is flowed across one side of a porous membrane and cold freshwater is flowed across the other. Water vapor is naturally drawn through the membrane from the hot to the cold side, and because the seawater need not be boiled, the energy requirements are less than they would be for traditional distillation. However, the energy costs are still significant because heat is continuously lost from the hot side of the membrane to the cold.

“Unlike traditional membrane distillation, NESMD benefits from increasing efficiency with scale,” said Rice’s Naomi Halas, a corresponding author on the paper and the leader of NEWT’s nanophotonics research efforts. “It requires minimal pumping energy for optimal distillate conversion, and there are a number of ways we can further optimize the technology to make it more productive and efficient.”

NEWT’s new technology builds upon research in Halas’ lab to create engineered nanoparticles that harvest as much as 80 percent of sunlight to generate steam. By adding low-cost, commercially available nanoparticles to a porous membrane, NEWT has essentially turned the membrane itself into a one-sided heating element that alone heats the water to drive membrane distillation.

“The integration of photothermal heating capabilities within a water purification membrane for direct, solar-driven desalination opens new opportunities in water purification,” said Yale University ‘s Menachem “Meny” Elimelech, a co-author of the new study and NEWT’s lead researcher for membrane processes.

In the PNAS study, researchers offered proof-of-concept results based on tests with an NESMD chamber about the size of three postage stamps and just a few millimeters thick. The distillation membrane in the chamber contained a specially designed top layer of carbon black nanoparticles infused into a porous polymer. The light-capturing nanoparticles heated the entire surface of the membrane when exposed to sunlight. A thin half-millimeter-thick layer of salt water flowed atop the carbon-black layer, and a cool freshwater stream flowed below.

Li, the leader of NEWT’s advanced treatment test beds at Rice, said the water production rate increased greatly by concentrating the sunlight. “The intensity got up to 17.5 kilowatts per meter squared when a lens was used to concentrate sunlight by 25 times, and the water production increased to about 6 liters per meter squared per hour.”

Li said NEWT’s research team has already made a much larger system that contains a panel that is about 70 centimeters by 25 centimeters. Ultimately, she said, NEWT hopes to produce a modular system where users could order as many panels as they needed based on their daily water demands.

“You could assemble these together, just as you would the panels in a solar farm,” she said. “Depending on the water production rate you need, you could calculate how much membrane area you would need. For example, if you need 20 liters per hour, and the panels produce 6 liters per hour per square meter, you would order a little over 3 square meters of panels.”
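
Just to make that back-of-the-envelope arithmetic explicit, here is a minimal sketch of my own (the function name and the 6-liter-per-square-meter-per-hour default are illustrative values taken from the figures quoted above):

```python
# Minimal sketch: membrane area needed to meet a freshwater demand,
# using the production rate quoted in the news release (~6 L per m^2 per hour).
def required_panel_area_m2(demand_l_per_hour: float,
                           production_l_per_m2_hour: float = 6.0) -> float:
    """Return the membrane area (in square meters) needed for a given demand."""
    return demand_l_per_hour / production_l_per_m2_hour

# Li's example: 20 L/h at 6 L/(m^2*h) works out to a little over 3 m^2.
print(f"{required_panel_area_m2(20.0):.2f} m^2")  # -> 3.33 m^2
```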

Established by the National Science Foundation in 2015, NEWT aims to develop compact, mobile, off-grid water-treatment systems that can provide clean water to millions of people who lack it and make U.S. energy production more sustainable and cost-effective. NEWT, which is expected to leverage more than $40 million in federal and industrial support over the next decade, is the first NSF Engineering Research Center (ERC) in Houston and only the third in Texas since NSF began the ERC program in 1985. NEWT focuses on applications for humanitarian emergency response, rural water systems and wastewater treatment and reuse at remote sites, including both onshore and offshore drilling platforms for oil and gas exploration.

There is a video but it is focused on the NEWT center rather than any specific water technologies,

For anyone interested in the technology, here’s a link to and a citation for the researchers’ paper,

Nanophotonics-enabled solar membrane distillation for off-grid water purification by Pratiksha D. Dongare, Alessandro Alabastri, Seth Pedersen, Katherine R. Zodrow, Nathaniel J. Hogan, Oara Neumann, Jinjian Wu, Tianxiao Wang, Akshay Deshmukh, Menachem Elimelech, Qilin Li, Peter Nordlander, and Naomi J. Halas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1701835114 Published June 19, 2017

This paper appears to be open access.

Biodegradable nanoparticles to program immune cells for cancer treatments

The Fred Hutchinson Cancer Research Center in Seattle, Washington has announced a proposed cancer treatment using nanoparticle-programmed T cells, according to an April 12, 2017 news release (received via email; also on EurekAlert), Note: A link has been removed,

Researchers at Fred Hutchinson Cancer Research Center have developed biodegradable nanoparticles that can be used to genetically program immune cells to recognize and destroy cancer cells — while the immune cells are still inside the body.

In a proof-of-principle study to be published April 17 [2017] in Nature Nanotechnology, the team showed that nanoparticle-programmed immune cells, known as T cells, can rapidly clear or slow the progression of leukemia in a mouse model.

“Our technology is the first that we know of to quickly program tumor-recognizing capabilities into T cells without extracting them for laboratory manipulation,” said Fred Hutch’s Dr. Matthias Stephan, the study’s senior author. “The reprogrammed cells begin to work within 24 to 48 hours and continue to produce these receptors for weeks. This suggests that our technology has the potential to allow the immune system to quickly mount a strong enough response to destroy cancerous cells before the disease becomes fatal.”

Cellular immunotherapies have shown promise in clinical trials, but challenges remain to making them more widely available and to being able to deploy them quickly. At present, it typically takes a couple of weeks to prepare these treatments: the T cells must be removed from the patient and genetically engineered and grown in special cell processing facilities before they are infused back into the patient. These new nanoparticles could eliminate the need for such expensive and time-consuming steps.

Although his T-cell programming method is still several steps away from the clinic, Stephan imagines a future in which nanoparticles transform cell-based immunotherapies — whether for cancer or infectious disease — into an easily administered, off-the-shelf treatment that’s available anywhere.

“I’ve never had cancer, but if I did get a cancer diagnosis I would want to start treatment right away,” Stephan said. “I want to make cellular immunotherapy a treatment option the day of diagnosis and have it able to be done in an outpatient setting near where people live.”

The body as a genetic engineering lab

Stephan created his T-cell homing nanoparticles as a way to bring the power of cellular cancer immunotherapy to more people.

In his method, the laborious, time-consuming T-cell programming steps all take place within the body, creating a potential army of “serial killers” within days.

As reported in the new study, Stephan and his team developed biodegradable nanoparticles that turned T cells into CAR T cells, a particular type of cellular immunotherapy that has delivered promising results against leukemia in clinical trials.

The researchers designed the nanoparticles to carry genes that encode for chimeric antigen receptors, or CARs, that target and eliminate cancer. They also tagged the nanoparticles with molecules that make them stick like burrs to T cells, which engulf the nanoparticles. The cell’s internal traffic system then directs the nanoparticle to the nucleus, and it dissolves.

The study provides proof-of-principle that the nanoparticles can educate the immune system to target cancer cells. Stephan and his team designed the new CAR genes to integrate into chromosomes housed in the nucleus, making it possible for T cells to begin decoding the new genes and producing CARs within just one or two days.

Once the team determined that their CAR-carrying nanoparticles reprogrammed a noticeable percentage of T cells, they tested their efficacy. Using a preclinical mouse model of leukemia, Stephan and his colleagues compared their nanoparticle-programming strategy against chemotherapy followed by an infusion of T cells programmed in the lab to express CARs, which mimics current CAR-T-cell therapy.

The nanoparticle-programmed CAR-T cells held their own against the infused CAR-T cells. Treatment with nanoparticles or infused CAR-T cells improved survival to an average of 58 days, up from a median survival of about two weeks.

The study was funded by Fred Hutch’s Immunotherapy Initiative, the Leukemia & Lymphoma Society, the Phi Beta Psi Sorority, the National Science Foundation and the National Cancer Institute.

Next steps and other applications

Stephan’s nanoparticles still have to clear several hurdles before they get close to human trials. He’s pursuing new strategies to make the gene-delivery-and-expression system safe in people and working with companies that have the capacity to produce clinical-grade nanoparticles. Additionally, Stephan has turned his sights to treating solid tumors and is collaborating to this end with several research groups at Fred Hutch.

And, he said, immunotherapy may be just the beginning. In theory, nanoparticles could be modified to serve the needs of patients whose immune systems need a boost, but who cannot wait for several months for a conventional vaccine to kick in.

“We hope that this can be used for infectious diseases like hepatitis or HIV,” Stephan said. This method may be a way to “provide patients with receptors they don’t have in their own body,” he explained. “You just need a tiny number of programmed T cells to protect against a virus.”

Here’s a link to and a citation for the paper,

In situ programming of leukaemia-specific T cells using synthetic DNA nanocarriers by Tyrel T. Smith, Sirkka B. Stephan, Howell F. Moffett, Laura E. McKnight, Weihang Ji, Diana Reiman, Emmy Bonagofski, Martin E. Wohlfahrt, Smitha P. S. Pillai, & Matthias T. Stephan. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.57 Published online 17 April 2017

This paper is behind a paywall.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict within 1 percent of uncertainty what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.

Testing a network of artificial synapses

Only one artificial synapse has been produced but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
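
To get a feel for what computing with a finite set of synaptic states means in practice, here is a rough, illustrative sketch (not the Sandia simulation, and the library choices are mine): it trains an ordinary software classifier on a standard handwritten-digit dataset, then snaps its weights onto 500 evenly spaced levels to mimic a network whose synapses can only occupy discrete conductance states.

```python
# Illustrative only: emulate finite, discrete synaptic states by quantizing
# the weights of a small digit classifier to 500 levels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def quantize(weights: np.ndarray, n_states: int = 500) -> np.ndarray:
    """Snap each weight to the nearest of n_states evenly spaced levels."""
    levels = np.linspace(weights.min(), weights.max(), n_states)
    nearest = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[nearest]

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("full-precision accuracy:", clf.score(X_test, y_test))

clf.coef_ = quantize(clf.coef_, n_states=500)  # emulate 500-state synapses
print("500-state accuracy:    ", clf.score(X_test, y_test))
```

In this toy example the 500-level weights track the full-precision ones closely, which hints at why a device with hundreds of stable states is attractive for this kind of workload.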

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1, 2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4, 5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems [6, 7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Mapping 23,000 atoms in a nanoparticle

Identification of the precise 3-D coordinates of iron, shown in red, and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab

The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,

In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.

The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.

A Feb. 1, 2017 UCLA news release, which originated the news item, provides more detail about the work,

Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.

“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.

Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.

By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.

“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.

The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.

“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”

The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.

“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.

In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.

Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.

That means that radiation-sensitive objects can be imaged with lower doses of radiation.

The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued their own Feb. 1, 2017 news release (also on EurekAlert) about the work with a focus on how their equipment made this breakthrough possible (it repeats a little of the info. from the UCLA news release),

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.

Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …

Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.

“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.

Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.

Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.

“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.

A TEAM approach

The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.

The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.

They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
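
The tilt-series idea is easier to see with a toy two-dimensional analogue. The sketch below is my illustration, not GENFIRE or the team's reconstruction code; it uses scikit-image's stock Radon-transform utilities and test image to project an object at several angles and rebuild it by filtered back-projection, showing how fidelity improves with the number of projections.

```python
# A 2-D toy analogue of reconstructing an object from a tilt series:
# more projection angles give a more faithful reconstruction.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))  # stand-in for the "particle"

for n_angles in (10, 30, 90):
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta)   # the simulated "tilt series"
    recon = iradon(sinogram, theta=theta)  # filtered back-projection
    rms_error = np.sqrt(np.mean((recon - image) ** 2))
    print(f"{n_angles:3d} projections -> RMS error {rms_error:.4f}")
```

The real reconstruction is fully three-dimensional and far more sophisticated, but the trade-off between the number of projections and the fidelity of the result is the same one GENFIRE is designed to improve.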

Translating the data into scientific insights

Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, which was not previously possible in complex 3-D boundaries.

“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.

To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL, using the coordinates and chemical type of each atom, to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.

“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.

Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”

The folks at the Berkeley Lab have created a video (notice where the still image from the beginning of this post appears),

The Oak Ridge National Laboratory (ORNL), not wanting to be left out, has been mentioned in a Feb. 3, 2017 news item on ScienceDaily,

… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.

“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

A Feb. 2, 2017 ORNL news release on EurekAlert, which originated the news item, elucidates further on how their team added to the research,

Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new–simulate magnetism atom by atom in a real nanoparticle.

Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.

Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.

“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are (the coordinates) and the chemical resolution, or what they are: iron or platinum.”

The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,

A Supercomputing Milestone

Magnetism at the atomic level is driven by quantum mechanics — a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.

For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.

“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.

To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.

To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.

“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.

As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received an Association for Computing Machinery Gordon Bell Prize in high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures.

Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.

“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.

Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.

In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible — as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms.

Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.

“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.

Finally, here’s a link to and a citation for the paper,

Deciphering chemical order/disorder and material properties at the single-atom level by Yongsoo Yang, Chien-Chun Chen, M. C. Scott, Colin Ophus, Rui Xu, Alan Pryor, Li Wu, Fan Sun, Wolfgang Theis, Jihan Zhou, Markus Eisenbach, Paul R. C. Kent, Renat F. Sabirianov, Hao Zeng, Peter Ercius, & Jianwei Miao. Nature 542, 75–79 (02 February 2017) doi:10.1038/nature21042 Published online 01 February 2017

This paper is behind a paywall.

US report on Women, minorities, and people with disabilities in science and engineering

A Jan. 31, 2017 news item on ScienceDaily announces a new report from the US National Science Foundation’s (NSF) National Center for Science and Engineering Statistics (NCSES),

The National Center for Science and Engineering Statistics (NCSES) today [Jan. 31, 2017] announced the release of the 2017 Women, Minorities, and Persons with Disabilities in Science and Engineering (WMPD) report, the federal government’s most comprehensive look at the participation of these three demographic groups in science and engineering education and employment.

The report shows the degree to which women, people with disabilities and minorities from three racial and ethnic groups — black, Hispanic and American Indian or Alaska Native — are underrepresented in science and engineering (S&E). Women have reached parity with men in educational attainment but not in S&E employment. Underrepresented minorities account for disproportionately smaller percentages in both S&E education and employment.

Congress mandated the biennial report in the Science and Engineering Equal Opportunities Act as part of the National Science Foundation’s (NSF) mission to encourage and strengthen the participation of underrepresented groups in S&E.

A Jan. 31, 2017 NSF news release (also on EurekAlert), which originated the news item, provides information about why the report is issued every two years and provides highlights from the 2017 report,

“An important part of fulfilling our mission to further the progress of science is producing current, accurate information about the U.S. STEM workforce,” said NSF Director France Córdova. “This report is a valuable resource to the science and engineering policy community.”

NSF maintains a portfolio of programs aimed at broadening participation in S&E, including ADVANCE: Increasing the Participation and Advancement of Women in Academic Science and Engineering Careers; LSAMP: the Louis Stokes Alliances for Minority Participation; and NSF INCLUDES, which focuses on building networks that can scale up proven approaches to broadening participation.

The digest provides highlights and analysis in five topic areas: enrollment, field of degree, occupation, employment status and early career doctorate holders. That last topic area includes analysis of pilot study data from the Early Career Doctorates Survey, a new NCSES product. NCSES also maintains expansive WMPD data tables, updated periodically as new data become available, which present the latest S&E education and workforce data available from NCSES and other agencies. The tables provide the public access to detailed, field-by-field information that includes both percentages and the actual numbers of people involved in S&E.

“WMPD is more than just a single report or presentation,” said NCSES Director John Gawalt. “It is a vast and unique information resource, carefully curated and maintained, that allows anyone (from the general public to highly trained researchers) ready access to data that facilitate and support their own exploration and analyses.”

Key findings from the new digest include:

  • The types of schools where students enroll vary among racial and ethnic groups. Hispanics, American Indians or Alaska Natives and Native Hawaiians or Other Pacific Islanders are more likely to enroll in community colleges. Blacks and Native Hawaiians or Other Pacific Islanders are more likely to enroll in private, for-profit schools.
  • Since the late 1990s, women have earned about half of S&E bachelor’s degrees. But their representation varies widely by field, ranging from 70 percent in psychology to 18 percent in computer sciences.
  • At every level — bachelor’s, master’s and doctorate — underrepresented minority women earn a higher proportion of degrees than their male counterparts. White women, in contrast, earn a smaller proportion of degrees than their male counterparts.
  • Despite two decades of progress, a wide gap in educational attainment remains between underrepresented minorities and whites and Asians, two groups that have higher representation in S&E education than they do in the U.S. population.
  • White men constitute about one-third of the overall U.S. population; they comprise half of the S&E workforce. Blacks, Hispanics and people with disabilities are underrepresented in the S&E workforce.
  • Women’s participation in the workforce varies greatly by field of occupation.
  • In 2015, scientists and engineers had a lower unemployment rate compared to the general U.S. population (3.3 percent versus 5.8 percent), although the rate varied among groups. For example, it was 2.8 percent among white women in S&E but 6.0 percent for underrepresented minority women.

For more information, including access to the digest and data tables, see the updated WMPD website.

Caption: In 2015, women and some minority groups were represented less in science and engineering (S&E) occupations than they were in the US general population. Credit: NSF

Wars (such as they are) on science

I hinted in a Jan. 27, 2017 posting (scroll down about 15% of the way) that advice from Canadians with regard to an ‘American war on science’ might not be such a good idea. It seems that John Dupuis (mentioned in the Jan. 27, 2017 posting) has yet more advice for our neighbours to the south in his Feb. 5, 2017 posting (on the Confessions of a Science Librarian blog; Note: A link has been removed),

My advice? Don’t bring a test tube to a Bunsen burner fight. Mobilize, protest, form partnerships, write op-eds and blog posts and books and articles, speak about science at every public event you get a chance, run for office, help out someone who’s a science supporter run for office.

Don’t want your science to be seen as political or for your “objectivity” to be compromised? Too late, the other side made it political while you weren’t looking. And you’re the only one that thinks you’re objective. What difference will it make?

Don’t worry about changing the other side’s mind. Worry about mobilizing and energizing your side so they’ll turn out to protest and vote and send letters and all those other good things.

Worried that you will ruin your reputation and that when the good guys come back into power your “objectivity” will be forever compromised? Worry first about getting the good guys back in power. They will understand what you went through and why you had to mobilize. And they never thought you were “objective” to begin with.

Proof? The Canadian experience. After all, even the Guardian wants to talk about How science helped to swing the Canadian election? Two or four years from now, you want them to be writing articles about how science swung the US mid-term or presidential elections.

Dupuis goes on to offer a good set of links to articles about the Canadian experience written for media outlets from across the world.

The thing is, Stephen Harper is not Donald Trump. So, all this Canadian experience may not be as helpful as we or our neighbours to the south might like.

This Feb. 6, 2017 article by Daniel Engber for Slate.com gives a perspective that I think has been missed in this ‘Canadian’ discussion about the latest US ‘war on science’ (Note: Links have been removed),

An army of advocates for science will march on Washington, D.C. on April 22, according to a press release out last Thursday. The show of force aims to “draw attention to dangerous trends in the politicization of science,” the organizers say, citing “threats to the scientific community” and the need to “safeguard” researchers from a menacing regime. If Donald Trump plans to escalate his apparent assault on scientific values, then let him be on notice: Science will fight back.

We’ve been through this before. Casting opposition to a sitting president as resistance to a “war on science” likely helped progressives 10 or 15 years ago, when George W. Bush alienated voters with his apparent disrespect for climate science and embryonic stem-cell research (among other fields of study). The Bush administration’s meddling in research and disregard for expertise turned out to be a weakness, as the historian Daniel Sarewitz described in an insightful essay from 2009. Who could really argue with the free pursuit of knowledge? Democratic challengers made a weapon of their support for scientific progress: “Americans deserve a president who believes in science,” said John Kerry during the 2004 campaign. “We will end the Bush administration’s war on science, restore scientific integrity and return to evidence-based decision-making,” the Democratic Party platform stated four years later.

But what had been a sharp-edged political strategy may now have lost its edge. I don’t mean to say that the broad appeal of science has been on the wane; overall, Americans are about as sanguine on the value of our scientific institutions as they were before. Rather, the electorate has reorganized itself, or has been reorganized by Trump, in such a way that fighting on behalf of science no longer cuts across party lines, and it doesn’t influence as many votes beyond the Democratic base.

The War on Science works for Trump because it’s always had more to do with social class than politics. A glance at data from the National Science Foundation shows how support for science tracks reliably with socioeconomic status. As of 2014, 50 percent of Americans in the highest income quartile and more than 55 percent of those with college degrees reported having great confidence in the nation’s scientific leaders. Among those in the lowest income bracket or with very little education, that support drops to 33 percent or less. Meanwhile, about five-sixths of rich or college-educated people—compared to less than half of poor people or those who never finished high school—say they believe that the benefits of science outweigh the potential harms. To put this in crude, horse-race terms, the institution of scientific research consistently polls about 30 points higher among the elites than it does among the uneducated working class.

Ten years ago, that distinction didn’t matter quite so much for politics. …

… with the battle lines redrawn, the same approach to activism now seems as though it could have the opposite effect. In the same way that fighting the War on Journalism delegitimizes the press by making it seem partisan and petty, so might the present fight against the War on Science sap scientific credibility. By confronting it directly, science activists may end up helping to consolidate Trump’s support among his most ardent, science-skeptical constituency. If they’re not careful where and how they step, the science march could turn into an ambush.

I think Engber is making an important point and the strategies and tactics being employed need to be carefully reviewed.

As for the Canadian situation, things are indeed better now but my experience is that while we rarely duplicate the situation in the US, we often find ourselves echoing their cries, albeit years later and more faintly. The current leadership race for the Conservative party has at least one Trump admirer (Kelly Leitch; see the section titled ‘Controversy’) fashioning her campaign in light of his perceived successes. Our next so-called ‘war on science’ could echo in some ways the current situation in the US and we’d best keep that in mind.

Powering up your graphene implants so you don’t get fried in the process

A Sept. 23, 2016 news item on phys.org describes a way of making graphene-based medical implants safer,

In the future, our health may be monitored and maintained by tiny sensors and drug dispensers, deployed within the body and made from graphene—one of the strongest, lightest materials in the world. Graphene is composed of a single sheet of carbon atoms, linked together like razor-thin chicken wire, and its properties may be tuned in countless ways, making it a versatile material for tiny, next-generation implants.

But graphene is incredibly stiff, whereas biological tissue is soft. Because of this, any power applied to operate a graphene implant could precipitously heat up and fry surrounding cells.

Now, engineers from MIT [Massachusetts Institute of Technology] and Tsinghua University in Beijing have precisely simulated how electrical power may generate heat between a single layer of graphene and a simple cell membrane. While direct contact between the two layers inevitably overheats and kills the cell, the researchers found they could prevent this effect with a very thin, in-between layer of water.

A Sept. 23, 2016 MIT news release by Emily Chu, which originated the news item, provides more technical details,

By tuning the thickness of this intermediate water layer, the researchers could carefully control the amount of heat transferred between graphene and biological tissue. They also identified the critical power to apply to the graphene layer, without frying the cell membrane. …

Co-author Zhao Qin, a research scientist in MIT’s Department of Civil and Environmental Engineering (CEE), says the team’s simulations may help guide the development of graphene implants and their optimal power requirements.

“We’ve provided a lot of insight, like what’s the critical power we can accept that will not fry the cell,” Qin says. “But sometimes we might want to intentionally increase the temperature, because for some biomedical applications, we want to kill cells like cancer cells. This work can also be used as guidance [for those efforts.]”

Sandwich model

Typically, heat travels between two materials via vibrations in each material’s atoms. These atoms are always vibrating, at frequencies that depend on the properties of their materials. As a surface heats up, its atoms vibrate even more, causing collisions with other atoms and transferring heat in the process.

The researchers sought to accurately characterize the way heat travels, at the level of individual atoms, between graphene and biological tissue. To do this, they considered the simplest interface, comprising a small, 500-nanometer-square sheet of graphene and a simple cell membrane, separated by a thin layer of water.

“In the body, water is everywhere, and the outer surface of membranes will always like to interact with water, so you cannot totally remove it,” Qin says. “So we came up with a sandwich model for graphene, water, and membrane, that is a crystal clear system for seeing the thermal conductance between these two materials.”

Qin’s colleagues at Tsinghua University had previously developed a model to precisely simulate the interactions between atoms in graphene and water, using density functional theory — a computational modeling technique that considers the structure of an atom’s electrons in determining how that atom will interact with other atoms.

However, to apply this modeling technique to the group’s sandwich model, which comprised about half a million atoms, would have required an incredible amount of computational power. Instead, Qin and his colleagues used classical molecular dynamics — a mathematical technique based on a “force field” potential function, or a simplified version of the interactions between atoms — that enabled them to efficiently calculate interactions within larger atomic systems.
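
To make “classical molecular dynamics with a force-field potential” concrete, here is a bare-bones sketch of my own (a three-particle Lennard-Jones toy in reduced units, nothing like the half-million-atom graphene/water/membrane system): it shows the basic loop of computing forces from a potential and integrating the equations of motion with velocity Verlet.

```python
# Toy classical molecular dynamics: Lennard-Jones forces + velocity Verlet,
# in reduced units. Illustrative only.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small 2-D particle set."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # F(r) = -dU/dr for U = 4*eps*((sigma/r)**12 - (sigma/r)**6)
            f_mag = 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            forces[i] += f_mag * r_vec / r
            forces[j] -= f_mag * r_vec / r
    return forces

pos = np.array([[0.0, 0.0], [1.1, 0.0], [0.55, 1.0]])  # three particles
vel = np.zeros_like(pos)
dt, mass = 0.002, 1.0

f = lj_forces(pos)
for step in range(2000):                        # velocity Verlet integration
    pos += vel * dt + 0.5 * (f / mass) * dt ** 2
    f_new = lj_forces(pos)
    vel += 0.5 * (f + f_new) / mass * dt
    f = f_new

kinetic = 0.5 * mass * (vel ** 2).sum()
print(f"kinetic energy after 2000 steps: {kinetic:.4f} (reduced units)")
```

Production MD codes add neighbour lists, thermostats and far richer force fields, but the structure is the same: evaluate forces from a potential, advance positions and velocities, repeat.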

The researchers then built an atom-level sandwich model of graphene, water, and a cell membrane, based on the group’s simplified force field. They carried out molecular dynamics simulations in which they changed the amount of power applied to the graphene, as well as the thickness of the intermediate water layer, and observed the amount of heat that carried over from the graphene to the cell membrane.

Watery crystals

Because the stiffness of graphene and biological tissue is so different, Qin and his colleagues expected that heat would conduct rather poorly between the two materials, building up steeply in the graphene before flooding and overheating the cell membrane. However, the intermediate water layer helped dissipate this heat, easing its conduction and preventing a temperature spike in the cell membrane.

Looking more closely at the interactions within this interface, the researchers made a surprising discovery: Within the sandwich model, the water, pressed against graphene’s chicken-wire pattern, morphed into a similar crystal-like structure.

“Graphene’s lattice acts like a template to guide the water to form network structures,” Qin explains. “The water acts more like a solid material and makes the stiffness transition from graphene and membrane less abrupt. We think this helps heat to conduct from graphene to the membrane side.”

The group varied the thickness of the intermediate water layer in simulations, and found that a 1-nanometer-wide layer of water helped to dissipate heat very effectively. In terms of the power applied to the system, they calculated that about a megawatt of power per meter squared, applied in tiny, microsecond bursts, was the most power that could be applied to the interface without overheating the cell membrane.
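
As a back-of-the-envelope check (my numbers: I assume a single one-microsecond burst over the 500-nanometer-square sheet described earlier), the energy deposited per burst at that critical power density is tiny:

```python
# Energy deposited by one assumed 1-microsecond burst at the reported
# critical power density, over the simulated 500 nm x 500 nm graphene sheet.
power_density = 1e6       # W/m^2, critical power density from the study
pulse_duration = 1e-6     # s, assumed "microsecond burst"
side = 500e-9             # m, edge length of the simulated graphene sheet
area = side ** 2          # m^2

energy_per_burst = power_density * area * pulse_duration  # joules
print(f"energy per burst: {energy_per_burst:.2e} J "
      f"(about {energy_per_burst * 1e12:.2f} pJ)")
```

That works out to roughly a quarter of a picojoule per burst for a patch that size, which gives a sense of how gently such an implant would have to be driven.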

Qin says going forward, implant designers can use the group’s model and simulations to determine the critical power requirements for graphene devices of different dimensions. As for how they might practically control the thickness of the intermediate water layer, he says graphene’s surface may be modified to attract a particular number of water molecules.

“I think graphene provides a very promising candidate for implantable devices,” Qin says. “Our calculations can provide knowledge for designing these devices in the future, for specific applications, like sensors, monitors, and other biomedical applications.”

This research was supported in part by the MIT International Science and Technology Initiative (MISTI): MIT-China Seed Fund, the National Natural Science Foundation of China, DARPA [US Defense Advanced Research Projects Agency], the Department of Defense (DoD) Office of Naval Research, the DoD Multidisciplinary Research Initiatives program, the MIT Energy Initiative, and the National Science Foundation.

Here’s a link to and a citation for the paper,

Intercalated water layers promote thermal dissipation at bio–nano interfaces by Yanlei Wang, Zhao Qin, Markus J. Buehler, & Zhiping Xu. Nature Communications 7, Article number: 12854 doi:10.1038/ncomms12854 Published 23 September 2016

This paper is open access.

Nanotechnology and water sustainability webinar, Oct. 19, 2016

An upcoming (Oct. 19, 2016) webinar from the US National Nanotechnology Initiative (NNI) is the first of a new series (from an Oct. 7, 2016 news item on Nanowerk),

“Water Sustainability through Nanotechnology: A Federal Perspective” – This webinar is the first in a series exploring the confluence of nanotechnology and water. This event will introduce the Nanotechnology Signature Initiative (NSI): Water Sustainability through Nanotechnology and highlight the activities of several participating Federal agencies. …

The NNI event page for the Water Sustainability through Nanotechnology webinar provides more detail,

Panelists include Nora Savage (National Science Foundation), Daniel Barta (National Aeronautics and Space Administration), Paul Shapiro (U.S. Environmental Protection Agency), Jim Dobrowolski (USDA National Institute of Food and Agriculture), and Hongda Chen (USDA National Institute of Food and Agriculture).

Webinar viewers will be able to submit questions for the panelists to answer during the Q&A period. Submitted questions will be considered in the order received and may be posted on the NNI website. A moderator will identify relevant questions and pose them to the speakers. Due to time constraints, not all questions may be addressed during the webinar. The moderator reserves the right to group similar questions and to skip questions, as appropriate.

According to the webinar event page, there will be more webinars in this series addressing how to:

  • Increase water availability.
  • Improve the efficiency of water delivery and use.
  • Enable next-generation water monitoring systems.

You can register here to participate.

The NNI has a webpage dedicated to Water Sustainability through Nanotechnology: Nanoscale solutions for a Global-Scale Challenge, which explains their perspective on the matter,

Water is essential to all life, and its significance bridges many critical areas for society: food, energy, security, and the environment. Projected population growth in the coming decades and associated increases in demands for water exacerbate the mounting pressure to address water sustainability. Yet, only 2.5% of the world’s water is fresh water, and some of the most severe impacts of climate change are on our country’s water resources. For example, in 2012, droughts affected about two-thirds of the continental United States, impacting water supplies, tourism, transportation, energy, and fisheries – costing the agricultural sector alone $30 billion. In addition, the ground water in many of the Nation’s aquifers is being depleted at unsustainable rates, which necessitates drilling ever deeper to tap groundwater resources. Finally, water infrastructure is a critically important but sometimes overlooked aspect of water treatment and distribution. Both technological and sociopolitical solutions are required to address these problems.

The text goes on to describe how nanotechnology could assist with this challenge.