Tag Archives: US National Science Foundation

New path to viable memristor/neuristor?

I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.

A January 22, 2018 news item on phys.org describes the latest work,

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

A January 22, 2018 MIT news release by Jennifer Chu (also on EurekAlert), which originated the news item, provides more detail about the research,

The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.
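For a rough sense of the mismatch involved, the sketch below estimates the Si/SiGe lattice mismatch using Vegard's law. The silicon and germanium lattice constants are standard textbook values; the germanium fractions are illustrative assumptions, not the composition reported in the paper,

```python
# Back-of-envelope estimate of the Si/SiGe lattice mismatch that drives
# dislocation formation. Lattice constants are standard textbook values;
# the germanium fraction x is an illustrative assumption, not from the paper.
A_SI = 5.431  # silicon lattice constant, angstroms
A_GE = 5.658  # germanium lattice constant, angstroms

def sige_lattice_constant(x: float) -> float:
    """Vegard's law: linear interpolation between Si and Ge lattice constants."""
    return (1 - x) * A_SI + x * A_GE

def misfit(x: float) -> float:
    """Fractional lattice mismatch of Si(1-x)Ge(x) relative to pure Si."""
    return (sige_lattice_constant(x) - A_SI) / A_SI

for x in (0.1, 0.3, 0.5):
    print(f"Ge fraction {x:.1f}: mismatch = {misfit(x) * 100:.2f}%")
# Even a ~1-2% mismatch is enough to relax through misfit dislocations
# once the SiGe layer exceeds a critical thickness.
```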

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
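For context, uniformity figures like these are typically expressed as a coefficient of variation (standard deviation divided by mean). Here's a minimal sketch using synthetic currents as stand-ins for the real measurements; the 4 percent and 1 percent spreads come from the article, while the absolute current scale is my assumption,

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the measurements described above: currents from
# many synapses at a fixed read voltage, and one synapse read over 700
# cycles. The spreads (4% and 1%) are the article's figures; the absolute
# current scale is an arbitrary assumption.
device_currents = rng.normal(loc=1.0, scale=0.04, size=1000)  # across devices
cycle_currents = rng.normal(loc=1.0, scale=0.01, size=700)    # one device, 700 cycles

def coefficient_of_variation(samples: np.ndarray) -> float:
    """Relative spread: standard deviation as a fraction of the mean."""
    return samples.std() / samples.mean()

print(f"device-to-device variation: {coefficient_of_variation(device_currents):.1%}")
print(f"cycle-to-cycle variation:   {coefficient_of_variation(cycle_currents):.1%}")
```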

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting-recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
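Here's a toy version of that kind of simulation: a three-layer network whose two weight matrices stand in for the two layers of artificial synapses, with roughly 4 percent multiplicative noise applied to mimic device-to-device variation. The weights and inputs are random placeholders rather than chip measurements or real handwriting data, so the output is purely illustrative,

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of the simulation described above: a 784-100-10 network
# ("three sheets of neural layers connected via two layers of artificial
# synapses"), with each weight perturbed by ~4% multiplicative noise to
# mimic device-to-device variation. Weights and inputs are random
# placeholders; in the actual study the synapse behaviour came from
# measurements of the fabricated chip.
W1 = rng.normal(size=(784, 100)) / np.sqrt(784)  # first synapse layer
W2 = rng.normal(size=(100, 10)) / np.sqrt(100)   # second synapse layer

def forward(x, w1, w2):
    hidden = np.maximum(0, x @ w1)  # input layer -> hidden layer (ReLU)
    return hidden @ w2              # hidden layer -> output layer

def with_device_variation(w, spread=0.04):
    """Each 'synapse' deviates from its target weight by ~4% (one sigma)."""
    return w * rng.normal(loc=1.0, scale=spread, size=w.shape)

x = rng.normal(size=(32, 784))  # a batch of stand-in "handwriting" inputs
ideal = forward(x, W1, W2).argmax(axis=1)
noisy = forward(x, with_device_variation(W1), with_device_variation(W2)).argmax(axis=1)
print(f"predictions unchanged under 4% variation: {(ideal == noisy).mean():.0%}")
```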

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Here’s a link to and a citation for the paper,

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations by Shinhyun Choi, Scott H. Tan, Zefan Li, Yunjo Kim, Chanyeol Choi, Pai-Yu Chen, Hanwool Yeon, Shimeng Yu, & Jeehwan Kim. Nature Materials (2018) doi:10.1038/s41563-017-0001-5 Published online: 22 January 2018

This paper is behind a paywall.

For the curious I have included a number of links to recent ‘memristor’ postings here,

January 22, 2018: Memristors at Masdar

January 3, 2018: Mott memristor

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

University of Washington (state) is accelerating nanoscale research with Institute for Nano-Engineered Systems

A December 5, 2017 news item on Nanowerk announced a new research institute at the University of Washington (state),

The University of Washington [UW] has launched a new institute aimed at accelerating research at the nanoscale: the Institute for Nano-Engineered Systems, or NanoES. Housed in a new, multimillion-dollar facility on the UW’s Seattle campus, the institute will pursue impactful advancements in a variety of disciplines — including energy, materials science, computation and medicine. Yet these advancements will be at a technological scale a thousand times smaller than the width of a human hair.

The institute was launched at a reception Dec. 4 [2017] at its headquarters in the $87.8-million Nano Engineering and Sciences Building. During the event, speakers including UW officials and NanoES partners celebrated the NanoES mission to capitalize on the university’s strong record of research at the nanoscale and engage partners in industry at the onset of new projects.

A December 5, 2017 UW news release, which originated the news item, somewhat clarifies the declarations in the two paragraphs excerpted above,

The vision of the NanoES, which is part of the UW’s College of Engineering, is to act as a magnet for researchers in nanoscale science and engineering, with a focus on enabling industry partnership and entrepreneurship at the earliest stages of research projects. According to Karl Böhringer, director of the NanoES and a UW professor of electrical engineering and bioengineering, this unique approach will hasten the development of solutions to the field’s most pressing challenges: the manufacturing of scalable, high-yield nano-engineered systems for applications in information processing, energy, health and interconnected life.

“The University of Washington is well known for its expertise in nanoscale materials, processing, physics and biology — as well as its cutting-edge nanofabrication, characterization and testing facilities,” said Böhringer, who stepped down as director of the UW-based Washington Nanofabrication Facility to lead the NanoES. “NanoES will build on these strengths, bringing together people, tools and opportunities to develop nanoscale devices and systems.”

The centerpiece of the NanoES is its headquarters, the Nano Engineering and Sciences Building. The building houses 90,300 square feet of research and learning space, and was funded largely by the College of Engineering and Sound Transit. It contains an active learning classroom, a teaching laboratory and a 3,000-square-foot common area designed expressly to promote the sharing and exchanging of ideas. The remainder includes “incubator-style” office space and more than 40,000 square feet of flexible multipurpose laboratory and instrumentation space. The building’s location and design elements are intended to limit vibrations and electromagnetic interference so it can house sensitive experiments.

NanoES will house research in nanotechnology fields that hold promise for high impact, such as:

  • Augmented humanity, which includes technology to both aid and replace human capability in a way that joins user and machine as one – and foresees portable, wearable, implantable and networked technology for applications such as personalized medical care, among others.
  • Integrated photonics, which ranges from single-photon sensors for health care diagnostic tests to large-scale, integrated networks of photonic devices.
  • Scalable nanomanufacturing, which aims to develop low-cost, high-volume manufacturing processes. These would translate device prototypes constructed in research laboratories into system- and network-level nanomanufacturing methods for applications ranging from the 3-D printing of cell and tissue scaffolds to ultrathin solar cells.

Cutting the ribbon for the NanoES on Dec. 4 [2017]. Left to right: Karl Böhringer, director of the NanoES and a UW professor of electrical engineering and bioengineering; Nena Golubovic, physical sciences director for IP Group; Mike Bragg, dean of the UW College of Engineering; Jevne Micheau-Cunningham, deputy director of the NanoES. Photo: Kathryn Sauber/University of Washington

Collaborations with other UW-based institutions will provide additional resources for the NanoES. Endeavors in scalable nanomanufacturing, for example, will rely on the roll-to-roll processing facility at the UW Clean Energy Institute’s Washington Clean Energy Testbeds or on advanced surface characterization capabilities at the Molecular Analysis Facility. In addition, the Washington Nanofabrication Facility recently completed a three-year, $37 million upgrade to raise it to an ISO Class 5 nanofabrication facility.

UW faculty and outside collaborators will build new research programs in the Nano Engineering and Sciences Building. Eric Klavins, a UW professor of electrical engineering, recently moved part of his synthetic biology research team to the building, adjacent to his collaborators in the Molecular Engineering & Sciences Institute and the Institute for Protein Design.

“We are extremely excited about the interdisciplinary and collaborative potential of the new space,” said Klavins.

The NanoES also has already produced its first spin-out company, Tunoptix, which was co-founded by Böhringer and recently received startup funding from IP Group, a U.K.-based venture capital firm.

“IP Group is very excited to work with the University of Washington,” said Nena Golubovic, physical sciences director for IP Group. “We are looking forward to the new collaborations and developments in science and technology that will grow from this new partnership.”

Nena Golubovic, physical sciences director for IP Group, delivering remarks at the Dec. 4 [2017] opening of NanoES. Photo: Kathryn Sauber/University of Washington

“We are eager to work with our partners at the IP Group to bring our technology to the market, and we appreciate their vision and investment in the NanoES Integrated Photonics Initiative,” said Tunoptix entrepreneurial lead Mike Robinson. “NanoES was the ideal environment in which to start our company.”

The NanoES leaders hope to forge similar partnerships with researchers, investors and industry leaders to develop technologies for portable, wearable, implantable and networked nanotechnologies for personalized medical care, a more efficient interconnected life and interconnected mobility. In addition to expertise, personnel and state-of-the-art research space and equipment, the NanoES will provide training, research support and key connections to capital and corporate partners.

“We believe this unique approach is the best way to drive innovations from idea to fabrication to scale-up and testing,” said Böhringer. “Some of the most promising solutions to these huge challenges are rooted in nanotechnology.”

The NanoES is supported by funds from the College of Engineering and the National Science Foundation, as well as capital investments from investors and industry partners.

You can find out more about NanoES here.

Colliding organic nanoparticles caught on camera for the first time

There is high excitement about this development in a November 17, 2017 news item on Nanowerk,

A Northwestern University research team is the first to capture on video organic nanoparticles colliding and fusing together. This unprecedented view of “chemistry in motion” will aid Northwestern nanoscientists developing new drug delivery methods as well as demonstrate to researchers around the globe how an emerging imaging technique opens a new window on a very tiny world.

A November 17, 2017 Northwestern University news release (also on EurekAlert) by Megan Fellman, which originated the news item, further illuminates the matter,

This is a rare example of particles in motion. The dynamics are reminiscent of two bubbles coming together and merging into one: first they join and have a membrane between them, but then they fuse and become one larger bubble.

“I had an image in my mind, but the first time I saw these fusing nanoparticles in black and white was amazing,” said professor Nathan C. Gianneschi, who led the interdisciplinary study and works at the intersection of nanotechnology and biomedicine.

“To me, it’s literally a window opening up to this world you have always known was there, but now you’ve finally got an image of it. I liken it to the first time I saw Jupiter’s moons through a telescope. Nothing compares to actually seeing,” he said.

Gianneschi is the Jacob and Rosaline Cohn Professor in the department of chemistry in the Weinberg College of Arts and Sciences and in the departments of materials science and engineering and of biomedical engineering in the McCormick School of Engineering.

The study, which includes videos of different nanoparticle fusion events, was published today (Nov. 17, 2017) by the Journal of the American Chemical Society.

The research team used liquid-cell transmission electron microscopy to directly image how polymer-based nanoparticles, or micelles, that Gianneschi’s lab is developing for treating cancer and heart attacks change over time. The powerful new technique enabled the scientists to directly observe the particles’ transformation and characterize their dynamics.

“We can see on the molecular level how the polymeric matter rearranges when the particles fuse into one object,” said Lucas R. Parent, first author of the paper and a National Institutes of Health Postdoctoral Fellow in Gianneschi’s research group. “This is the first study of many to come in which researchers will use this method to look at all kinds of dynamic phenomena in organic materials systems on the nanoscale.”

In the Northwestern study, organic particles in water bounce off each other, and some collide and merge, undergoing a physical transformation. The researchers capture the action by shining an electron beam through the sample. The tiny particles — the largest are only approximately 200 nanometers in diameter — cast shadows that are captured directly by a camera below.

“We’ve observed classical fusion behavior on the nanoscale,” said Gianneschi, a member of Northwestern’s International Institute for Nanotechnology. “Capturing the fundamental growth and evolution processes of these particles in motion will help us immensely in our work with synthetic materials and their interactions with biological systems.”

The National Institutes of Health, the National Science Foundation, the Air Force Office of Scientific Research and the Army Research Office supported the research.

Here’s a link to and a citation for the paper,

Directly Observing Micelle Fusion and Growth in Solution by Liquid-Cell Transmission Electron Microscopy by Lucas R. Parent, Evangelos Bakalis, Abelardo Ramírez-Hernández, Jacquelin K. Kammeyer, Chiwoo Park, Juan de Pablo, Francesco Zerbetto, Joseph P. Patterson, and Nathan C. Gianneschi. J. Am. Chem. Soc., Article ASAP DOI: 10.1021/jacs.7b09060 Publication Date (Web): November 17, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3-D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued a news release (news item) that didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. This one is written more in the style of a magazine article, so the details take a while to emerge. From a July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.
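For the curious, here's a hypothetical sketch of that sensing-to-memory-to-logic flow: read a large bank of sensors in one shot, store the response vector, then classify it. The gas signatures and the nearest-centroid rule are my own illustrative assumptions, not the chip's actual circuitry,

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the demo described above: a large bank of
# nanotube sensors is read in one shot, the response vector is stored, and
# a simple classifier assigns a gas label. The response patterns and the
# nearest-centroid rule are illustrative assumptions, not the chip's design.
N_SENSORS = 1_000_000
GASES = ["nitrogen", "ethanol vapour", "ambient air"]

# Each gas produces a characteristic (noisy) response pattern across sensors.
signatures = {gas: rng.normal(size=N_SENSORS) for gas in GASES}

def read_sensors(true_gas: str) -> np.ndarray:
    """One parallel read of every sensor (all values arrive at once)."""
    return signatures[true_gas] + rng.normal(scale=0.5, size=N_SENSORS)

def classify(measurement: np.ndarray) -> str:
    """Nearest-centroid: pick the gas whose signature best matches."""
    return min(GASES, key=lambda g: np.linalg.norm(measurement - signatures[g]))

memory = read_sensors("ethanol vapour")    # sensing layer -> memory layer
print("classified as:", classify(memory))  # memory layer -> logic layer
```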

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

Using only sunlight to desalinate water

The researchers seem to believe that this new desalination technique could be a game changer. From a June 20, 2017 news item on Azonano,

An off-grid technology that uses only energy from sunlight to transform salt water into fresh drinking water has been developed through federally funded research.

The desalination system uses a combination of light-harvesting nanophotonics and membrane distillation technology and is considered to be the first major innovation from the Center for Nanotechnology Enabled Water Treatment (NEWT), which is a multi-institutional engineering research center located at Rice University.

NEWT’s “nanophotonics-enabled solar membrane distillation” technology (NESMD) integrates tried-and-true water treatment methods with cutting-edge nanotechnology capable of transforming sunlight to heat. …

A June 19, 2017 Rice University news release, which originated the news item, expands on the theme,

More than 18,000 desalination plants operate in 150 countries, but NEWT’s desalination technology is unlike any other used today.

“Direct solar desalination could be a game changer for some of the estimated 1 billion people who lack access to clean drinking water,” said Rice scientist and water treatment expert Qilin Li, a corresponding author on the study. “This off-grid technology is capable of providing sufficient clean water for family use in a compact footprint, and it can be scaled up to provide water for larger communities.”

The oldest method for making freshwater from salt water is distillation. Salt water is boiled, and the steam is captured and run through a condensing coil. Distillation has been used for centuries, but it requires complex infrastructure and is energy inefficient due to the amount of heat required to boil water and produce steam. More than half the cost of operating a water distillation plant is for energy.

An emerging technology for desalination is membrane distillation, where hot salt water is flowed across one side of a porous membrane and cold freshwater is flowed across the other. Water vapor is naturally drawn through the membrane from the hot to the cold side, and because the seawater need not be boiled, the energy requirements are less than they would be for traditional distillation. However, the energy costs are still significant because heat is continuously lost from the hot side of the membrane to the cold.

“Unlike traditional membrane distillation, NESMD benefits from increasing efficiency with scale,” said Rice’s Naomi Halas, a corresponding author on the paper and the leader of NEWT’s nanophotonics research efforts. “It requires minimal pumping energy for optimal distillate conversion, and there are a number of ways we can further optimize the technology to make it more productive and efficient.”

NEWT’s new technology builds upon research in Halas’ lab to create engineered nanoparticles that harvest as much as 80 percent of sunlight to generate steam. By adding low-cost, commercially available nanoparticles to a porous membrane, NEWT has essentially turned the membrane itself into a one-sided heating element that alone heats the water to drive membrane distillation.

“The integration of photothermal heating capabilities within a water purification membrane for direct, solar-driven desalination opens new opportunities in water purification,” said Yale University’s Menachem “Meny” Elimelech, a co-author of the new study and NEWT’s lead researcher for membrane processes.

In the PNAS study, researchers offered proof-of-concept results based on tests with an NESMD chamber about the size of three postage stamps and just a few millimeters thick. The distillation membrane in the chamber contained a specially designed top layer of carbon black nanoparticles infused into a porous polymer. The light-capturing nanoparticles heated the entire surface of the membrane when exposed to sunlight. A thin half-millimeter-thick layer of salt water flowed atop the carbon-black layer, and a cool freshwater stream flowed below.

Li, the leader of NEWT’s advanced treatment test beds at Rice, said the water production rate increased greatly by concentrating the sunlight. “The intensity got up to 17.5 kilowatts per meter squared when a lens was used to concentrate sunlight by 25 times, and the water production increased to about 6 liters per meter squared per hour.”

Li said NEWT’s research team has already made a much larger system that contains a panel that is about 70 centimeters by 25 centimeters. Ultimately, she said, NEWT hopes to produce a modular system where users could order as many panels as they needed based on their daily water demands.

“You could assemble these together, just as you would the panels in a solar farm,” she said. “Depending on the water production rate you need, you could calculate how much membrane area you would need. For example, if you need 20 liters per hour, and the panels produce 6 liters per hour per square meter, you would order a little over 3 square meters of panels.”
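Her sizing arithmetic is easy to check. Here's a minimal helper using the 6 liters per hour per square meter production rate quoted in the news release (the demand figures are example inputs),

```python
# The sizing arithmetic from the quote above, as a small helper. The
# production rate (6 liters per hour per square meter) is the figure given
# in the article; the demand values are example inputs.
PRODUCTION_RATE_L_PER_H_PER_M2 = 6.0

def panel_area_m2(demand_l_per_h: float) -> float:
    """Membrane area needed to meet a given freshwater demand."""
    return demand_l_per_h / PRODUCTION_RATE_L_PER_H_PER_M2

for demand in (20, 50, 100):  # liters per hour
    print(f"{demand:>3} L/h -> {panel_area_m2(demand):.2f} m2 of panels")
# 20 L/h -> 3.33 m2, matching the "little over 3 square meters" in the quote.
```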

Established by the National Science Foundation in 2015, NEWT aims to develop compact, mobile, off-grid water-treatment systems that can provide clean water to millions of people who lack it and make U.S. energy production more sustainable and cost-effective. NEWT, which is expected to leverage more than $40 million in federal and industrial support over the next decade, is the first NSF Engineering Research Center (ERC) in Houston and only the third in Texas since NSF began the ERC program in 1985. NEWT focuses on applications for humanitarian emergency response, rural water systems and wastewater treatment and reuse at remote sites, including both onshore and offshore drilling platforms for oil and gas exploration.

There is a video but it is focused on the NEWT center rather than any specific water technologies.

For anyone interested in the technology, here’s a link to and a citation for the researchers’ paper,

Nanophotonics-enabled solar membrane distillation for off-grid water purification by Pratiksha D. Dongare, Alessandro Alabastri, Seth Pedersen, Katherine R. Zodrow, Nathaniel J. Hogan, Oara Neumann, Jinjian Wu, Tianxiao Wang, Akshay Deshmukh, Menachem Elimelech, Qilin Li, Peter Nordlander, and Naomi J. Halas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1701835114 June 19, 2017

This paper appears to be open access.

Biodegradable nanoparticles to program immune cells for cancer treatments

The Fred Hutchinson Cancer Research Center in Seattle, Washington has announced a proposed cancer treatment using nanoparticle-programmed T cells, according to an April 12, 2017 news release (received via email; also on EurekAlert), Note: A link has been removed,

Researchers at Fred Hutchinson Cancer Research Center have developed biodegradable nanoparticles that can be used to genetically program immune cells to recognize and destroy cancer cells — while the immune cells are still inside the body.

In a proof-of-principle study to be published April 17 [2017] in Nature Nanotechnology, the team showed that nanoparticle-programmed immune cells, known as T cells, can rapidly clear or slow the progression of leukemia in a mouse model.

“Our technology is the first that we know of to quickly program tumor-recognizing capabilities into T cells without extracting them for laboratory manipulation,” said Fred Hutch’s Dr. Matthias Stephan, the study’s senior author. “The reprogrammed cells begin to work within 24 to 48 hours and continue to produce these receptors for weeks. This suggests that our technology has the potential to allow the immune system to quickly mount a strong enough response to destroy cancerous cells before the disease becomes fatal.”

Cellular immunotherapies have shown promise in clinical trials, but challenges remain to making them more widely available and to being able to deploy them quickly. At present, it typically takes a couple of weeks to prepare these treatments: the T cells must be removed from the patient and genetically engineered and grown in special cell processing facilities before they are infused back into the patient. These new nanoparticles could eliminate the need for such expensive and time-consuming steps.

Although his T-cell programming method is still several steps away from the clinic, Stephan imagines a future in which nanoparticles transform cell-based immunotherapies — whether for cancer or infectious disease — into an easily administered, off-the-shelf treatment that’s available anywhere.

“I’ve never had cancer, but if I did get a cancer diagnosis I would want to start treatment right away,” Stephan said. “I want to make cellular immunotherapy a treatment option the day of diagnosis and have it able to be done in an outpatient setting near where people live.”

The body as a genetic engineering lab

Stephan created his T-cell homing nanoparticles as a way to bring the power of cellular cancer immunotherapy to more people.

In his method, the laborious, time-consuming T-cell programming steps all take place within the body, creating a potential army of “serial killers” within days.

As reported in the new study, Stephan and his team developed biodegradable nanoparticles that turned T cells into CAR T cells, a particular type of cellular immunotherapy that has delivered promising results against leukemia in clinical trials.

The researchers designed the nanoparticles to carry genes that encode for chimeric antigen receptors, or CARs, that target and eliminate cancer. They also tagged the nanoparticles with molecules that make them stick like burrs to T cells, which engulf the nanoparticles. The cell’s internal traffic system then directs the nanoparticle to the nucleus, and it dissolves.

The study provides proof-of-principle that the nanoparticles can educate the immune system to target cancer cells. Stephan and his team designed the new CAR genes to integrate into chromosomes housed in the nucleus, making it possible for T cells to begin decoding the new genes and producing CARs within just one or two days.

Once the team determined that their CAR-carrying nanoparticles reprogrammed a noticeable percentage of T cells, they tested their efficacy. Using a preclinical mouse model of leukemia, Stephan and his colleagues compared their nanoparticle-programming strategy against chemotherapy followed by an infusion of T cells programmed in the lab to express CARs, which mimics current CAR-T-cell therapy.

The nanoparticle-programmed CAR-T cells held their own against the infused CAR-T cells. Treatment with nanoparticles or infused CAR-T cells improved median survival to 58 days, up from about two weeks.

The study was funded by Fred Hutch’s Immunotherapy Initiative, the Leukemia & Lymphoma Society, the Phi Beta Psi Sorority, the National Science Foundation and the National Cancer Institute.

Next steps and other applications

Stephan’s nanoparticles still have to clear several hurdles before they get close to human trials. He’s pursuing new strategies to make the gene-delivery-and-expression system safe in people and working with companies that have the capacity to produce clinical-grade nanoparticles. Additionally, Stephan has turned his sights to treating solid tumors and is collaborating to this end with several research groups at Fred Hutch.

And, he said, immunotherapy may be just the beginning. In theory, nanoparticles could be modified to serve the needs of patients whose immune systems need a boost, but who cannot wait for several months for a conventional vaccine to kick in.

“We hope that this can be used for infectious diseases like hepatitis or HIV,” Stephan said. This method may be a way to “provide patients with receptors they don’t have in their own body,” he explained. “You just need a tiny number of programmed T cells to protect against a virus.”

Here’s a link to and a citation for the paper,

In situ programming of leukaemia-specific T cells using synthetic DNA nanocarriers by Tyrel T. Smith, Sirkka B. Stephan, Howell F. Moffett, Laura E. McKnight, Weihang Ji, Diana Reiman, Emmy Bonagofski, Martin E. Wohlfahrt, Smitha P. S. Pillai, & Matthias T. Stephan. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.57 Published online 17 April 2017

This paper is behind a paywall.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, but that doesn’t become clear until you read the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.

Testing a network of artificial synapses

Only one artificial synapse has been produced but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy of between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
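To make the 500-state figure concrete, here's a minimal sketch of what programming trained weights onto such a device might involve: snapping each ideal floating-point weight to the nearest of 500 evenly spaced conductance levels. The ±1 weight range is my assumption for illustration; only the 500-state count comes from the reported work,

```python
import numpy as np

rng = np.random.default_rng(3)

# What "500 distinct states" buys you in practice: trained floating-point
# weights get snapped to the nearest of 500 evenly spaced conductance
# levels. The +/-1 weight range is an illustrative assumption; the
# 500-state count is the figure reported for the device.
N_STATES = 500
levels = np.linspace(-1.0, 1.0, N_STATES)

def program(weights: np.ndarray) -> np.ndarray:
    """Map each ideal weight to the nearest programmable device state."""
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

w = rng.uniform(-1, 1, size=(100, 10))  # ideal trained weights
w_device = program(w)                   # weights as stored on devices
print(f"worst-case quantization error: {np.abs(w - w_device).max():.4f}")
# With 500 levels spanning a range of 2, the error is bounded by half a
# step, about 0.002, which is why so many states help neuron-type models.
```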

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands; Scott T. Keene; and Grégorio C. Faria, also of the Universidade de São Paulo in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates, enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Mapping 23,000 atoms in a nanoparticle

Identification of the precise 3-D coordinates of iron, shown in red, and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab

The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,

In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.

The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.

A Feb. 1, 2017 UCLA news release, which originated the news item, provides more detail about the work,

Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.

“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.

Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.

By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.

“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.

The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.
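The bookkeeping implied by that kind of analysis is straightforward once every atom carries an element label and a grain assignment. Here's a minimal sketch with synthetic labels standing in for the real reconstruction (the overall Fe and Pt counts follow the article; the per-grain assignments are invented),

```python
import numpy as np

rng = np.random.default_rng(4)

# Once every atom carries an element label and a grain assignment, per-grain
# composition is a simple tally. The labels below are synthetic stand-ins:
# the overall counts (~6,500 Fe, ~16,600 Pt, nine grains) follow the
# article, but the per-grain assignments are invented for illustration.
n_fe, n_pt, n_grains = 6500, 16600, 9
element = np.array(["Fe"] * n_fe + ["Pt"] * n_pt)
grain = rng.integers(0, n_grains, size=element.size)  # grain label per atom

for g in range(n_grains):
    in_grain = grain == g
    fe = np.count_nonzero(in_grain & (element == "Fe"))
    pt = np.count_nonzero(in_grain & (element == "Pt"))
    print(f"grain {g}: {fe:5d} Fe / {pt:5d} Pt  (Fe fraction {fe / (fe + pt):.2f})")
```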

“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”

The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.

“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.

In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.

Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.

That means that radiation-sensitive objects can be imaged with lower doses of radiation.
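For a flavour of how iterative reconstruction schemes of this general family work, here's a 2-D toy: alternate between enforcing the Fourier components you 'measured' and enforcing positivity in real space. This is the textbook alternating-projection idea, not the published GENFIRE algorithm, and the object and sampling mask are synthetic,

```python
import numpy as np

rng = np.random.default_rng(5)

# A 2-D toy of the generic alternating-projection idea behind iterative
# Fourier reconstruction methods. The "object" is a synthetic disc and only
# ~30% of Fourier space is "measured"; the loop alternates between keeping
# the measured Fourier data and enforcing positivity in real space. This
# illustrates the general family of methods, not the published GENFIRE code.
n = 64
yy, xx = np.mgrid[:n, :n]
obj = (((xx - 32) ** 2 + (yy - 32) ** 2) < 12 ** 2).astype(float)  # a disc

F_true = np.fft.fft2(obj)
measured = rng.random((n, n)) < 0.3  # sparse sampling mask
measured[0, 0] = True                # always keep the DC (mean) component

estimate = np.zeros((n, n))
for _ in range(200):
    F = np.fft.fft2(estimate)
    F[measured] = F_true[measured]        # constraint 1: measured data
    estimate = np.real(np.fft.ifft2(F))
    estimate = np.maximum(estimate, 0.0)  # constraint 2: positivity

print(f"mean absolute error: {np.abs(estimate - obj).mean():.4f}")
# Fewer measured samples -> slower, poorer convergence, which is why a
# method that converges with fewer images also means lower radiation doses.
```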

The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued its own Feb. 1, 2017 news release (also on EurekAlert) about the work, focusing on how its equipment made this breakthrough possible (it repeats a little of the info. from the UCLA news release),

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.

Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …

Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.

“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.

Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.

Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.

“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.

A TEAM approach

The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.

The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.
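As a rough sketch of what that acquisition loop amounts to computationally, one can simulate a tilt series by rotating a model 3-D density a few degrees at a time and summing along the beam direction. This is a toy under a simple linear-projection assumption; the function, tilt range, and step are illustrative, not the actual TEAM I parameters.

```python
import numpy as np
from scipy.ndimage import rotate

def simulate_tilt_series(volume, tilt_angles_deg):
    """Rotate a model 3-D density and record a 2-D projection
    (a sum along the beam axis) at each tilt angle."""
    projections = []
    for angle in tilt_angles_deg:
        tilted = rotate(volume, angle, axes=(0, 2), reshape=False, order=1)
        projections.append(tilted.sum(axis=0))  # beam integrates the density
    return np.stack(projections)

# e.g. a projection every 2 degrees across a +/-60 degree tilt range:
# series = simulate_tilt_series(density, np.arange(-60, 61, 2))
```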

They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
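A toy version of that tracing step: find local peaks in the reconstructed density and classify each one by intensity, since the heavier platinum atoms scatter electrons more strongly than iron and so appear as brighter peaks. The real pipeline fits the 3-D density far more carefully; both thresholds below are placeholders that would have to be calibrated against the data.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def trace_atoms(density, peak_threshold, species_threshold):
    """Locate local maxima in a 3-D density and label each peak Fe or Pt
    by its intensity (Pt, being heavier, shows up brighter)."""
    local_max = density == maximum_filter(density, size=3)
    peaks = np.argwhere(local_max & (density > peak_threshold))
    atoms = []
    for z, y, x in peaks:
        element = "Pt" if density[z, y, x] > species_threshold else "Fe"
        atoms.append((element, (z, y, x)))
    return atoms
```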

Translating the data into scientific insights

Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and for its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, measurements that had not previously been possible for complex 3-D grain boundaries.

“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.
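To make the order/disorder language concrete: in the fully ordered L1_0 phase of FePt, iron and platinum occupy alternating atomic layers along one crystal axis. A toy long-range order parameter can be computed from a traced atom list by checking how often each atom’s species matches its layer parity. This is a minimal sketch assuming flat layers along one axis; the published analysis is considerably more sophisticated.

```python
def l10_order_parameter(atoms, layer_spacing):
    """Fraction-based order parameter for Fe/Pt layer alternation:
    1.0 means perfectly ordered L1_0, 0.0 means chemically random.
    `atoms` is a list of (element, (z, y, x)) tuples, as in the tracing
    sketch above; `layer_spacing` is the interlayer distance (assumed known)."""
    correct = 0
    for element, (z, y, x) in atoms:
        layer = int(round(z / layer_spacing))
        expected = "Fe" if layer % 2 == 0 else "Pt"
        correct += element == expected
    p = correct / len(atoms)
    return abs(2 * p - 1)  # random occupancy (p = 0.5) maps to 0
```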

To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL, using the coordinates and chemical type of each atom, to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.

“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.

Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”

The folks at Berkeley Lab have created a video (notice where the still image from the beginning of this post appears),

Oak Ridge National Laboratory (ORNL), not wanting to be left out, is mentioned in a Feb. 3, 2017 news item on ScienceDaily,

… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.

“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

A Feb. 2, 2017 ORNL news release on EurekAlert, which originated the news item, expands on how the ORNL team contributed to the research,

Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new: simulate magnetism atom by atom in a real nanoparticle.

Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.

Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or X-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.

“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are (the coordinates) and the chemical resolution, or what they are: iron or platinum.”

The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,

A Supercomputing Milestone

Magnetism at the atomic level is driven by quantum mechanics, a fact that has shaken up classical physics calculations and called for increasingly complex first-principles calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.

For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.

“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.
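The standard back-of-the-envelope behind that worry is the Néel-Arrhenius relaxation time, which grows exponentially with the ratio of the anisotropy energy barrier (anisotropy constant times grain volume) to the thermal energy. A quick sketch follows; the attempt time and material numbers are typical literature values for ordered FePt, not figures reported in this study.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def neel_relaxation_time(k_u, volume, temperature, tau_0=1e-9):
    """Néel-Arrhenius estimate of how long a single-domain grain holds its
    magnetization: tau = tau_0 * exp(k_u * volume / (K_B * temperature)).
    tau_0 ~ 1 ns is a typical attempt time (an assumption)."""
    return tau_0 * np.exp(k_u * volume / (K_B * temperature))

# Illustrative numbers: ordered L1_0 FePt has k_u ~ 7e6 J/m^3.
V = (4 / 3) * np.pi * (2e-9) ** 3  # a 4 nm diameter spherical grain, in m^3

print(neel_relaxation_time(7e6, V, 300))  # ~4e15 s: the bit is stable
print(neel_relaxation_time(7e4, V, 300))  # ~2e-9 s: flips at room temperature
```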

To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.

To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.

“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.
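As a trivial illustration of what carving out such a supercell involves, one might select every traced atom inside a box around a region of interest. The release does not detail how the “strongly magnetic regions” were chosen, so the criterion below, a simple bounding box, is purely hypothetical.

```python
import numpy as np

def extract_supercell(atoms, center, half_width):
    """Keep the atoms whose coordinates fall within a cube of the given
    half-width around `center`; `atoms` is a list of (element, position)
    tuples as in the earlier tracing sketch."""
    center = np.asarray(center, dtype=float)
    kept = []
    for element, position in atoms:
        if np.all(np.abs(np.asarray(position) - center) <= half_width):
            kept.append((element, position))
    return kept
```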

As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received the Association for Computing Machinery’s Gordon Bell Prize for high-performance computing achievement in 1998 and 2009, and development continues to enhance the code for new architectures.

Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.

“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.

Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.

In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible, as first-principles calculations are currently too computationally intensive to solve atomic-scale magnetism for regions larger than a few thousand atoms.

Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.

“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.

Finally, here’s a link to and a citation for the paper,

Deciphering chemical order/disorder and material properties at the single-atom level by Yongsoo Yang, Chien-Chun Chen, M. C. Scott, Colin Ophus, Rui Xu, Alan Pryor, Li Wu, Fan Sun, Wolfgang Theis, Jihan Zhou, Markus Eisenbach, Paul R. C. Kent, Renat F. Sabirianov, Hao Zeng, Peter Ercius, & Jianwei Miao. Nature 542, 75–79 (02 February 2017). doi:10.1038/nature21042. Published online 01 February 2017.

This paper is behind a paywall.