Monthly Archives: February 2017

Figuring out how stars are born by using neutrons to watch hydrogen ‘quantum tunnelling’ on graphene

A Feb. 3, 2017 news item on Nanowerk announces research that could help us better understand how stars are ‘born’,

Graphene is known as the world’s thinnest material due to its 2D structure, where each sheet is only one carbon atom thick, allowing each atom to engage in a chemical reaction from two sides. Graphene flakes can have a very large proportion of edge atoms, all of which have a particular chemical reactivity.

In addition, chemically active voids created by missing atoms are a surface defect of graphene sheets. These structural defects and edges play a vital role in carbon chemistry and physics, as they alter the chemical reactivity of graphene. In fact, chemical reactions have repeatedly been shown to be favoured at these defect sites.

Interstellar molecular clouds are predominantly composed of hydrogen in molecular form (H2), but also contain a small percentage of dust particles, mostly in the form of carbon nanostructures called polycyclic aromatic hydrocarbons (PAHs). These clouds are often referred to as ‘star nurseries’ as their low temperature and high density allow gravity to locally condense matter in such a way that it initiates H fusion, the nuclear reaction at the heart of each star.

Graphene-based materials, prepared from the exfoliation of graphite oxide, are used as a model of interstellar carbon dust as they contain a relatively large amount of atomic defects, either at their edges or on their surface. These defects are thought to sustain the Eley-Rideal chemical reaction, which recombines two H atoms into one H2 molecule. The observation of interstellar clouds in inhospitable regions of space, including in the direct proximity of giant stars, poses the question of the origin of the stability of hydrogen in the molecular form (H2).

This question stands because the clouds are constantly being washed out by intense radiation, which cracks the hydrogen molecules into atoms. Astrochemists suggest that the chemical mechanism responsible for the recombination of atomic H into molecular H2 is catalysed by carbon flakes in interstellar clouds.

A Feb. 2, 2017 Institut Laue-Langevin press release, which originated the news item, provides more insight into the research,

Their [astrochemists’] theories are challenged by the need for a very efficient surface chemistry scenario to explain the observed equilibrium between dissociation and recombination. They had to introduce highly reactive sites into their models so that the capture of an atomic H nearby occurs without fail. These sites, in the form of atomic defects at the surface or edge of the carbon flakes, should be such that the C-H bond formed thereafter allows the H atom to be released easily to recombine with another H atom flying nearby.

A collaboration between the Institut Laue-Langevin (ILL), France, the University of Parma, Italy, and the ISIS Neutron and Muon Source, UK, combined neutron spectroscopy with density functional theory (DFT) molecular dynamics simulations in order to characterise the local environment and vibrations of hydrogen atoms chemically bonded at the surface of substantially defected graphene flakes. Additional analyses were carried out using muon spectroscopy (muSR) and nuclear magnetic resonance (NMR). As the samples were available only in very small quantities, these highly specific techniques were necessary to study them; neutron spectroscopy is highly sensitive to hydrogen and allowed accurate data to be gathered at small concentrations.

For the first time ever, this study showed ‘quantum tunnelling’ in these systems, allowing the H atoms bound to C atoms to explore relatively long distances at temperatures as low as those in interstellar clouds. The process involves hydrogen ‘quantum hopping’ from one carbon atom to another in its direct vicinity, tunnelling through energy barriers which could not be overcome given the lack of heat in the interstellar cloud environment. This movement is sustained by the fluctuations of the graphene structure, which bring the H atom into unstable regions and catalyse the recombination process by allowing the release of the chemically bonded H atom. Therefore, it is believed that quantum tunnelling facilitates the reaction for the formation of molecular H2.
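
[A quick aside from me, not the researchers: the reason tunnelling matters at these temperatures is that an ordinary thermal hop over an energy barrier of height $E_a$ freezes out exponentially as the temperature drops, while the tunnelling rate through that same barrier depends on its width and height but not on temperature. In the textbook square-barrier estimate,

$$ k_{\mathrm{thermal}} \propto e^{-E_a/k_B T}, \qquad k_{\mathrm{tunnel}} \propto e^{-\frac{2d}{\hbar}\sqrt{2 m_H (V_0 - E)}}, $$

where $d$ is the barrier width, $V_0$ its height and $m_H$ the mass of the hydrogen atom. At the roughly 10 K of a molecular cloud the thermal channel is effectively shut, so a light atom like hydrogen can only keep moving by tunnelling.]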

ILL scientist and carbon nanostructure specialist, Stéphane Rols says: “The question of how molecular hydrogen forms at the low temperatures in interstellar clouds has always been a driver in astrochemistry research. We’re proud to have combined spectroscopy expertise with the sensitivity of neutrons to identify the intriguing quantum tunnelling phenomenon as a possible mechanism behind the formation of H2; these observations are significant in furthering our understanding of the universe.”

Here’s a link to and a citation for the paper (which dates from Aug. 2016),

Hydrogen motions in defective graphene: the role of surface defects by Chiara Cavallari, Daniele Pontiroli, Mónica Jiménez-Ruiz, Mark Johnson, Matteo Aramini, Mattia Gaboardi, Stewart F. Parker, Mauro Riccò, and Stéphane Rols. Phys. Chem. Chem. Phys., 2016, 18, 24820-24824 (Issue 36). DOI: 10.1039/C6CP04727K. First published online 22 Aug 2016

This paper is behind a paywall.

Mapping 23,000 atoms in a nanoparticle

Identification of the precise 3-D coordinates of iron, shown in red, and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab

The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,

In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.

The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.

A Feb. 1, 2017 UCLA news release, which originated the news item, provides more detail about the work,

Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.

“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.

Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.

By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.

“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.

The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.

“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”

The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.

“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.

In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.

Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.

That means that radiation-sensitive objects can be imaged with lower doses of radiation.

The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued their own Feb. 1, 2017 news release (also on EurekAlert) about the work with a focus on how their equipment made this breakthrough possible (it repeats a little of the info. from the UCLA news release),

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.

Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …

Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.

“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.

Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.

Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.

“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.

A TEAM approach

The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.

The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.

They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
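
To give a flavour of how an algorithm turns projections into a 3-D map, here is a toy 2-D analogue of the alternating-projection idea (my illustration only; GENFIRE works on 3-D data with far more sophisticated gridding and constraints, and every name in this sketch is mine, not from the GENFIRE code). By the projection-slice theorem, the Fourier transform of each projection fixes the data along one line through Fourier space; iterating between those measured values and real-space constraints (here, positivity) fills in the rest:

```python
import numpy as np

def fourier_iterative_reconstruct(f_measured, mask, n_iter=200):
    """Recover an image from Fourier data known only where mask is True,
    alternating between (a) re-imposing the measured Fourier values and
    (b) real-space constraints (real-valued, non-negative)."""
    img = np.zeros(mask.shape)
    for _ in range(n_iter):
        F = np.fft.fft2(img)
        F[mask] = f_measured[mask]   # (a) Fourier-space constraint
        img = np.fft.ifft2(F).real
        img[img < 0] = 0.0           # (b) positivity constraint
    return img

# Toy 'experiment': a disc phantom 'measured' along 24 central lines of
# Fourier space, one line per simulated tilt angle.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
phantom = ((x**2 + y**2) < 15**2).astype(float)

F_true = np.fft.fft2(phantom)
mask = np.zeros((n, n), dtype=bool)
r = np.arange(-n // 2, n // 2)
for theta in np.linspace(0, np.pi, 24, endpoint=False):
    u = np.round(r * np.cos(theta)).astype(int) % n   # wraps onto the
    v = np.round(r * np.sin(theta)).astype(int) % n   # unshifted FFT grid
    mask[v, u] = True

recovered = fourier_iterative_reconstruct(F_true, mask)
print(np.abs(recovered - phantom).mean())   # residual shrinks with n_iter
```

Loosely, this is also why such schemes can get away with fewer images than conventional methods: constraints imposed in real space substitute for some of the missing measurements.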

Translating the data into scientific insights

Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, measurements that had not previously been possible for complex 3-D grain boundaries.

“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.

To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL–using the coordinates and chemical type of each atom–to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.

“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.

Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”

The folks at the Berkeley Lab have created a video (notice where the still image from the beginning of this post appears),

The Oak Ridge National Laboratory (ORNL), not wanting to be left out, has been mentioned in a Feb. 3, 2017 news item on ScienceDaily,

… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.

“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

A Feb. 2, 2017 ORNL news release on EurekAlert, which originated the news item, explains how their team added to the research,

Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new–simulate magnetism atom by atom in a real nanoparticle.

Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.

Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.

“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are–the coordinates–and the chemical resolution, or what they are — iron or platinum.”

The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,

A Supercomputing Milestone

Magnetism at the atomic level is driven by quantum mechanics — a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.

For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.

“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.
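
[A note from me for readers unfamiliar with the term: uniaxial magnetic anisotropy is usually written as an energy that depends on the angle $\theta$ between a grain’s magnetization and its easy axis,

$$ E(\theta) = K V \sin^2\theta, $$

where $K$ is the anisotropy constant and $V$ the grain volume, so the barrier to flipping a stored bit is $KV$. The standard rule of thumb in magnetic recording is that $KV/k_B T$ should be roughly 40 to 60 for a bit to survive thermal agitation at room temperature for years; that is what Kent means by a weak anisotropy letting bits flip. This is textbook material, not something from the ORNL release.]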

To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.

To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.

“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.

As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received an Association for Computing Machinery Gordon Bell Prize in high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures.

Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.

“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.

Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.

In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible — as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms.

Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.

“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.

Finally, here’s a link to and a citation for the paper,

Deciphering chemical order/disorder and material properties at the single-atom level by Yongsoo Yang, Chien-Chun Chen, M. C. Scott, Colin Ophus, Rui Xu, Alan Pryor, Li Wu, Fan Sun, Wolfgang Theis, Jihan Zhou, Markus Eisenbach, Paul R. C. Kent, Renat F. Sabirianov, Hao Zeng, Peter Ercius, & Jianwei Miao. Nature 542, 75–79 (02 February 2017) doi:10.1038/nature21042 Published online 01 February 2017

This paper is behind a paywall.

New principles for AI (artificial intelligence) research along with some history and a plea for a democratic discussion

For almost a month I’ve been meaning to get to this Feb. 1, 2017 essay by Andrew Maynard (director of Risk Innovation Lab at Arizona State University) and Jack Stilgoe (science policy lecturer at University College London [UCL]) on the topic of artificial intelligence and principles (Note: Links have been removed). First, a walk down memory lane,

Today [Feb. 1, 2017] in Washington DC, leading US and UK scientists are meeting to share dispatches from the frontiers of machine learning – an area of research that is creating new breakthroughs in artificial intelligence (AI). Their meeting follows the publication of a set of principles for beneficial AI that emerged from a conference earlier this year at a place with an important history.

In February 1975, 140 people – mostly scientists, with a few assorted lawyers, journalists and others – gathered at a conference centre on the California coast. A magazine article from the time by Michael Rogers, one of the few journalists allowed in, reported that most of the four days’ discussion was about the scientific possibilities of genetic modification. Two years earlier, scientists had begun using recombinant DNA to genetically modify viruses. The Promethean nature of this new tool prompted scientists to impose a moratorium on such experiments until they had worked out the risks. By the time of the Asilomar conference, the pent-up excitement was ready to burst. It was only towards the end of the conference when a lawyer stood up to raise the possibility of a multimillion-dollar lawsuit that the scientists focussed on the task at hand – creating a set of principles to govern their experiments.

The 1975 Asilomar meeting is still held up as a beacon of scientific responsibility. However, the story told by Rogers, and subsequently by historians, is of scientists motivated by a desire to head-off top down regulation with a promise of self-governance. Geneticist Stanley Cohen said at the time, ‘If the collected wisdom of this group doesn’t result in recommendations, the recommendations may come from other groups less well qualified’. The mayor of Cambridge, Massachusetts was a prominent critic of the biotechnology experiments then taking place in his city. He said, ‘I don’t think these scientists are thinking about mankind at all. I think that they’re getting the thrills and the excitement and the passion to dig in and keep digging to see what the hell they can do’.

The concern in 1975 was with safety and containment in research, not with the futures that biotechnology might bring about. A year after Asilomar, Cohen’s colleague Herbert Boyer founded Genentech, one of the first biotechnology companies. Corporate interests barely figured in the conversations of the mainly university scientists.

Fast-forward 42 years and it is clear that machine learning, natural language processing and other technologies that come under the AI umbrella are becoming big business. The cast list of the 2017 Asilomar meeting included corporate wunderkinds from Google, Facebook and Tesla as well as researchers, philosophers, and other academics. The group was more intellectually diverse than their 1975 equivalents, but there were some notable absences – no public and their concerns, no journalists, and few experts in the responsible development of new technologies.

Maynard and Stilgoe offer a critique of the latest principles,

The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and not to be developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.

This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable absences, including the need to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.

I encourage you to read this thoughtful essay in its entirety although I do have one nit to pick: Why only US and UK scientists? I imagine the answer may lie in funding and logistics issues but I find it surprising that the critique makes no mention of the international community as a nod to inclusion.

For anyone interested in the Asilomar AI principles (2017), you can find them here. You can also find videos of the two-day workshop titled The Frontiers of Machine Learning (Jan. 31 – Feb. 1, 2017; a Raymond and Beverly Sackler USA-UK Scientific Forum [US National Academy of Sciences]) here (videos for each session are available on YouTube).

The physics of melting in two-dimensional systems

You might want to skip over the reference to snow as it doesn’t have much relevance to this story about ‘melting’, from a Feb. 1, 2017 news item on Nanowerk (Note: A link has been removed),

Snow falls in winter and melts in spring, but what drives the phase change in between?
Although melting is a familiar phenomenon encountered in everyday life, playing a part in many industrial and commercial processes, much remains to be discovered about this transformation at a fundamental level.

In 2015, a team led by the University of Michigan’s Sharon Glotzer used high-performance computing at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory [ORNL] to study melting in two-dimensional (2-D) systems, a problem that could yield insights into surface interactions in materials important to technologies like solar panels, as well as into the mechanism behind three-dimensional melting. The team explored how particle shape affects the physics of a solid-to-fluid melting transition in two dimensions.

Using the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility, the team’s [latest?] work revealed that the shape and symmetry of particles can dramatically affect the melting process (“Shape and symmetry determine two-dimensional melting transitions of hard regular polygons”). This fundamental finding could help guide researchers in search of nanoparticles with desirable properties for energy applications.

There is a video of the ‘melting’ process but I have to confess to finding it a bit enigmatic,

A Feb. 1, 2017 ORNL news release (also on EurekAlert), which originated the news item, provides more detail about the research,

To tackle the problem, Glotzer’s team needed a supercomputer capable of simulating systems of up to 1 million hard polygons, simple particles used as stand-ins for atoms, ranging from triangles to 14-sided shapes. Unlike traditional molecular dynamics simulations that attempt to mimic nature, hard polygon simulations give researchers a pared-down environment in which to evaluate shape-influenced physics.

“Within our simulated 2-D environment, we found that the melting transition follows one of three different scenarios depending on the shape of the systems’ polygons,” University of Michigan research scientist Joshua Anderson said. “Notably, we found that systems made up of hexagons perfectly follow a well-known theory for 2-D melting, something that hasn’t been described until now.”

Shifting Shape Scenarios

In 3-D systems such as a thinning icicle, melting takes the form of a first-order phase transition. This means that collections of molecules within these systems exist in either solid or liquid form, with no in-between, in the presence of latent heat, the energy that fuels a solid-to-fluid phase change. In 2-D systems, such as thin-film materials used in batteries and other technologies, melting can be more complex, sometimes exhibiting an intermediate phase known as the hexatic phase.

The hexatic phase, a state characterized as a halfway point between an ordered solid and a disordered liquid, was first theorized in the 1970s by researchers John Kosterlitz, David Thouless, Burt Halperin, David Nelson, and Peter Young. The phase is a principal feature of the KTHNY theory, a 2-D melting theory posited by the researchers (and named based on the first letters of their last names). In 2016 Kosterlitz and Thouless were awarded the Nobel Prize in Physics, along with physicist Duncan Haldane, for their contributions to 2-D materials research.

At the molecular level, solid, hexatic, and liquid systems are defined by the arrangement of their atoms. In a crystalline solid, two types of order are present: translational and orientational. Translational order describes the well-defined paths between atoms over distances, like blocks in a carefully constructed Jenga tower. Orientational order describes the relational and clustered order shared between atoms and groups of atoms over distances. Think of that same Jenga tower turned askew after several rounds of play. The general shape of the tower remains, but its order is now fragmented.

The hexatic phase has no translational order but possesses orientational order. (A liquid has neither translational nor orientational order but exhibits short-range order, meaning any atom will have some average number of neighbors nearby but with no predictable order.)
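
[If you’d like to see what orientational order looks like as a number, the standard handle is the sixfold bond-orientational (‘hexatic’) order parameter ψ6. Here’s a minimal sketch of the calculation in Python (my illustration, not the team’s analysis code):

```python
import numpy as np
from scipy.spatial import cKDTree

def psi6(points):
    """Per-particle sixfold bond-orientational order parameter.
    |psi6| is near 1 in a triangular crystal or hexatic phase and
    near 0 in a liquid."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=7)                  # self + 6 neighbours
    bonds = points[idx[:, 1:]] - points[:, None, :]   # (N, 6, 2) vectors
    theta = np.arctan2(bonds[..., 1], bonds[..., 0])  # bond angles
    return np.exp(6j * theta).mean(axis=1)

# A perfect triangular lattice: |mean psi6| close to 1 (edges drag it down)
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
crystal = np.array([i * a1 + j * a2 for i in range(20) for j in range(20)])
print(abs(psi6(crystal).mean()))

# Uncorrelated random points (a caricature of a liquid): much closer to 0
rng = np.random.default_rng(0)
print(abs(psi6(rng.uniform(0.0, 20.0, size=(400, 2))).mean()))
```

In the hexatic phase the phase of ψ6 stays correlated over long distances even though the particle positions themselves do not.]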

Deducing the presence of a hexatic phase requires a leadership-class computer that can calculate large hard-particle systems. Glotzer’s team gained access to the OLCF’s 27-petaflop Titan through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, running its GPU-accelerated HOOMD-blue code to maximize time on the machine.

On Titan, HOOMD-blue used 64 GPUs for each massively parallel Monte Carlo simulation of up to 1 million particles. Researchers explored 11 different shape systems, applying an external pressure to push the particles together. Each system was simulated at 21 different densities, with the lowest densities representing a fluid state and the highest densities a solid state.
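
[For the curious, the Monte Carlo recipe behind hard-particle simulations is conceptually simple: propose a small random move, reject it if shapes overlap, accept it otherwise. Here is a minimal hard-disk version (disks rather than polygons, since polygon overlap tests need extra machinery; my sketch, not HOOMD-blue code):

```python
import numpy as np

def mc_hard_disks(pos, box, sigma=1.0, sweeps=100, dr=0.1, seed=1):
    """Metropolis Monte Carlo for hard disks of diameter sigma in a
    periodic square box. All the 'physics' of a hard-particle model
    lives in the yes/no overlap check. Assumes the starting
    configuration is overlap-free."""
    rng = np.random.default_rng(seed)
    n = len(pos)
    for _ in range(sweeps * n):
        i = rng.integers(n)
        trial = (pos[i] + rng.uniform(-dr, dr, 2)) % box
        d = pos - trial
        d -= box * np.round(d / box)         # minimum-image convention
        r2 = np.einsum('ij,ij->i', d, d)
        r2[i] = np.inf                       # ignore the particle itself
        if r2.min() >= sigma**2:             # no overlap: accept the move
            pos[i] = trial
    return pos

# Start from a dilute square lattice (guaranteed overlap-free).
box = 20.0
g = np.arange(16) * (box / 16)
config = np.array([[i, j] for i in g for j in g], dtype=float)
config = mc_hard_disks(config, box, sweeps=200)
```

Density, the control parameter the team scanned, is set here simply by the particle count relative to the box area.]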

The simulations demonstrated multiple melting scenarios hinging on the polygons’ shape. Systems with polygons of seven sides or more closely followed the melting behavior of hard disks, or circles, exhibiting a continuous phase transition from the solid to the hexatic phase and a first-order phase transition from the hexatic to the liquid phase. A continuous phase transition means a constantly changing area in response to a changing external pressure. A first-order phase transition is characterized by a discontinuity in which the volume jumps across the phase transition in response to the changing external pressure. The team found pentagons and fourfold pentilles, irregular pentagons with two different edge lengths, exhibit a first-order solid-to-liquid phase transition.

The most significant finding, however, emerged from hexagon systems, which perfectly followed the phase transition described by the KTHNY theory. In this scenario, the particles shift from solid to hexatic and from hexatic to fluid in a perfect continuous phase transition pattern.

“It was actually sort of surprising that no one else has found that until now,” Anderson said, “because it seems natural that the hexagon, with its six sides, and the honeycomb-like hexagonal arrangement would be a perfect match for this theory” in which the hexatic phase generally contains sixfold orientational order.

Glotzer’s team, which recently received a 2017 INCITE allocation, is now applying its leadership-class computing prowess to tackle phase transitions in 3-D. The team is focusing on how fluid particles crystallize into complex colloids—mixtures in which particles are suspended throughout another substance. Common examples of colloids include milk, paper, fog, and stained glass.

“We’re planning on using Titan to study how complexity can arise from these simple interactions, and to do that we’re actually going to look at how the crystals grow and study the kinetics of how that happens,” said Anderson.

There is a paper on arXiv,

Shape and symmetry determine two-dimensional melting transitions of hard regular polygons by Joshua A. Anderson, James Antonaglia, Jaime A. Millan, Michael Engel, Sharon C. Glotzer
(Submitted on 2 Jun 2016 (v1), last revised 23 Dec 2016 (this version, v2)) arXiv:1606.00687 [cond-mat.soft] (or arXiv:1606.00687v2)

This paper is open access and open to public peer review.

US report on Women, minorities, and people with disabilities in science and engineering

A Jan. 31, 2017 news item on ScienceDaily announces a new report from the US National Science Foundation’s (NSF) National Center for Science and Engineering Statistics (NCSES),

The National Center for Science and Engineering Statistics (NCSES) today [Jan. 31, 2017] announced the release of the 2017 Women, Minorities, and Persons with Disabilities in Science and Engineering (WMPD) report, the federal government’s most comprehensive look at the participation of these three demographic groups in science and engineering education and employment.

The report shows the degree to which women, people with disabilities and minorities from three racial and ethnic groups — black, Hispanic and American Indian or Alaska Native — are underrepresented in science and engineering (S&E). Women have reached parity with men in educational attainment but not in S&E employment. Underrepresented minorities account for disproportionately smaller percentages in both S&E education and employment.

Congress mandated the biennial report in the Science and Engineering Equal Opportunities Act as part of the National Science Foundation’s (NSF) mission to encourage and strengthen the participation of underrepresented groups in S&E.

A Jan. 31, 2017 NSF news release (also on EurekAlert), which originated the news item, provides information about why the report is issued every two years and provides highlights from the 2017 report,

“An important part of fulfilling our mission to further the progress of science is producing current, accurate information about the U.S. STEM workforce,” said NSF Director France Córdova. “This report is a valuable resource to the science and engineering policy community.”

NSF maintains a portfolio of programs aimed at broadening participation in S&E, including ADVANCE: Increasing the Participation and Advancement of Women in Academic Science and Engineering Careers; LSAMP: the Louis Stokes Alliances for Minority Participation; and NSF INCLUDES, which focuses on building networks that can scale up proven approaches to broadening participation.

The digest provides highlights and analysis in five topic areas: enrollment, field of degree, occupation, employment status and early career doctorate holders. That last topic area includes analysis of pilot study data from the Early Career Doctorates Survey, a new NCSES product. NCSES also maintains expansive WMPD data tables, updated periodically as new data become available, which present the latest S&E education and workforce data available from NCSES and other agencies. The tables provide the public access to detailed, field-by-field information that includes both percentages and the actual numbers of people involved in S&E.

“WMPD is more than just a single report or presentation,” said NCSES Director John Gawalt. “It is a vast and unique information resource, carefully curated and maintained, that allows anyone (from the general public to highly trained researchers) ready access to data that facilitate and support their own exploration and analyses.”

Key findings from the new digest include:

  • The types of schools where students enroll vary among racial and ethnic groups. Hispanics, American Indians or Alaska Natives and Native Hawaiians or Other Pacific Islanders are more likely to enroll in community colleges. Blacks and Native Hawaiians or Other Pacific Islanders are more likely to enroll in private, for-profit schools.
  • Since the late 1990s, women have earned about half of S&E bachelor’s degrees. But their representation varies widely by field, ranging from 70 percent in psychology to 18 percent in computer sciences.
  • At every level — bachelor’s, master’s and doctorate — underrepresented minority women earn a higher proportion of degrees than their male counterparts. White women, in contrast, earn a smaller proportion of degrees than their male counterparts.
  • Despite two decades of progress, a wide gap in educational attainment remains between underrepresented minorities and whites and Asians, two groups that have higher representation in S&E education than they do in the U.S. population.
  • White men constitute about one-third of the overall U.S. population; they comprise half of the S&E workforce. Blacks, Hispanics and people with disabilities are underrepresented in the S&E workforce.
  • Women’s participation in the workforce varies greatly by field of occupation.
  • In 2015, scientists and engineers had a lower unemployment rate compared to the general U.S. population (3.3 percent versus 5.8 percent), although the rate varied among groups. For example, it was 2.8 percent among white women in S&E but 6.0 percent for underrepresented minority women.

For more information, including access to the digest and data tables, see the updated WMPD website.

Caption: In 2015, women and some minority groups were represented less in science and engineering (S&E) occupations than they were in the US general population. Credit: NSF

R.I.P. Mildred Dresselhaus, Queen of Carbon

I’ve been hearing about Mildred Dresselhaus, professor emerita (retired professor) at the Massachusetts Institute of Technology (MIT), for just about as long as I’ve been researching and writing about nanotechnology (about 10 years, including the work for my master’s project and the almost eight years on this blog).

She died on Monday, Feb. 20, 2017 at the age of 86 having broken through barriers for those of her gender, barriers for her subject area, and barriers for her age.

Mark Anderson in his Feb. 22, 2017 obituary for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum website provides a brief overview of her extraordinary life and accomplishments,

Called the “Queen of Carbon Science,” Dresselhaus pioneered the study of carbon nanostructures at a time when studying physical and material properties of commonplace atoms like carbon was out of favor. Her visionary perspectives on the sixth element in the periodic table—including exploring individual layers of carbon atoms (precursors to graphene), developing carbon fibers stronger than steel, and revealing new carbon structures that were ultimately developed into buckyballs and nanotubes—invigorated the field.

“Millie Dresselhaus began life as the child of poor Polish immigrants in the Bronx; by the end, she was Institute Professor Emerita, the highest distinction awarded by the MIT faculty. A physicist, materials scientist, and electrical engineer, she was known as the ‘Queen of Carbon’ because her work paved the way for much of today’s carbon-based nanotechnology,” MIT president Rafael Reif said in a prepared statement.

Friends and colleagues describe Dresselhaus as a gifted instructor as well as a tireless and inspired researcher. And her boundless generosity toward colleagues, students, and girls and women pursuing careers in science is legendary.

In 1963, Dresselhaus began her own career studying carbon by publishing a paper on graphite in the IBM Journal of Research and Development, a foundational work in the history of nanotechnology. To this day, her studies of the electronic structure of this material serve as a reference point for explorations of the electronic structure of fullerenes and carbon nanotubes. Coauthor, with her husband Gene Dresselhaus, of a leading book on carbon fibers, she began studying the laser vaporization of carbon and the “carbon clusters” that resulted. Researchers who followed her lead discovered a 60-carbon structure that was soon identified as the icosahedral “soccer ball” molecular configuration known as buckminsterfullerene, or buckyball. In 1991, Dresselhaus further suggested that fullerene could be elongated as a tube, and she outlined these imagined objects’ symmetries. Not long after, researchers announced the discovery of carbon nanotubes.

When she began her nearly half-century career at MIT, as a visiting professor, women made up just 4 percent of the undergraduate student population. So Dresselhaus began working toward the improvement of living conditions for women students at the university. Through her leadership, MIT adopted an equal and joint admission process for women and men. (Previously, MIT had propounded the self-fulfilling prophecy of harboring more stringent requirements for women based on less dormitory space and perceived poorer performance.) And so promoting women in STEM—before it was ever called STEM—became one of her passions. Serving as president of the American Physical Society, she spearheaded and launched initiatives like the Committee on the Status of Women in Physics and the society’s more informal committees of visiting women physicists on campuses around the United States, which have increased the female faculty and student populations on the campuses they visit.

If you have the time, please read Anderson’s piece in its entirety.

One fact that has impressed me greatly is that Dresselhaus kept working into her eighties. I featured a paper she published in an April 27, 2012 posting at the age of 82 and she was described in the MIT write up at the time as a professor, not a professor emerita. I later featured Dresselhaus in a May 31, 2012 posting when she was awarded the Kavli Prize for Nanoscience.

It seems she worked almost to the end. Recently, GE (General Electric) posted a video “What If Scientists Were Celebrities?” starring Mildred Dresselhaus,

H/t Mark Anderson’s Feb. 22, 2017 obituary. The video was posted on Feb. 8, 2017.

Goodbye to the Queen of Carbon!

University of Alberta scientists use ultra fast (terahertz) microscopy to see ultra small (electron dynamics)

This is exciting news for Canadian science and the second time there has been a breakthrough development from the province of Alberta within the last five months (see Sept. 21, 2016 posting on quantum teleportation). From a Feb. 21, 2017 news item on ScienceDaily,

For the first time ever, scientists have captured images of terahertz electron dynamics of a semiconductor surface on the atomic scale. The successful experiment indicates a bright future for the new and quickly growing sub-field called terahertz scanning tunneling microscopy (THz-STM), pioneered by the University of Alberta in Canada. THz-STM allows researchers to image electron behaviour at extremely fast timescales and explore how that behaviour changes between different atoms.

A Feb. 21, 2017 University of Alberta news release on EurekAlert, which originated the news item, expands on the theme,

“We can essentially zoom in to observe very fast processes with atomic precision and over super fast time scales,” says Vedran Jelic, PhD student at the University of Alberta and lead author on the new study. “THz-STM provides us with a new window into the nanoworld, allowing us to explore ultrafast processes on the atomic scale. We’re talking a picosecond, or a millionth of a millionth of a second. It’s something that’s never been done before.”

Jelic and his collaborators used their scanning tunneling microscope (STM) to capture images of silicon atoms by raster scanning a very sharp tip across the surface and recording the tip height as it follows the atomic corrugations of the surface. While the original STM can measure and manipulate single atoms–for which its creators earned a Nobel Prize in 1986–it does so using wired electronics and is ultimately limited in speed and thus time resolution.
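
[Another aside from me: the reason an STM resolves single atoms at all is that the tunnel current falls off exponentially with the tip-surface gap $z$,

$$ I(z) \approx I_0\, e^{-2\kappa z}, \qquad \kappa = \frac{\sqrt{2 m_e \phi}}{\hbar} \approx 1\ \mathrm{\AA}^{-1} $$

for a typical work function $\phi$ of 4 to 5 eV. A gap change of a single ångström therefore changes the current by nearly an order of magnitude, which is what lets the tip height trace out individual atomic corrugations. Textbook STM physics, not from the news release.]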

Modern lasers produce very short light pulses that can measure a whole range of ultra-fast processes, but typically over length scales limited by the wavelength of light at hundreds of nanometers. Much effort has been expended to overcome the challenges of combining ultra-fast lasers with ultra-small microscopy. The University of Alberta scientists addressed these challenges by working in a unique terahertz frequency range of the electromagnetic spectrum that allows wireless implementation. Normally the STM needs an applied voltage in order to operate, but Jelic and his collaborators are able to drive their microscope using pulses of light instead. These pulses occur over really fast timescales, which means the microscope is able to see really fast events.

By incorporating the THz-STM into an ultrahigh vacuum chamber, free from any external contamination or vibration, they are able to accurately position their tip and maintain a perfectly clean surface while imaging ultrafast dynamics of atoms on surfaces. Their next step is to collaborate with fellow material scientists and image a variety of new surfaces on the nanoscale that may one day revolutionize the speed and efficiency of current technology, ranging from solar cells to computer processing.

“Terahertz scanning tunneling microscopy is opening the door to an unexplored regime in physics,” concludes Jelic, who is studying in the Ultrafast Nanotools Lab with University of Alberta professor Frank Hegmann, a world expert in ultra-fast terahertz science and nanophysics.

Here are links to and citations for the team’s 2013 paper and their latest,

An ultrafast terahertz scanning tunnelling microscope by Tyler L. Cocker, Vedran Jelic, Manisha Gupta, Sean J. Molesky, Jacob A. J. Burgess, Glenda De Los Reyes, Lyubov V. Titova, Ying Y. Tsui, Mark R. Freeman, & Frank A. Hegmann. Nature Photonics 7, 620–625 (2013) doi:10.1038/nphoton.2013.151 Published online 07 July 2013

Ultrafast terahertz control of extreme tunnel currents through single atoms on a silicon surface by Vedran Jelic, Krzysztof Iwaszczuk, Peter H. Nguyen, Christopher Rathje, Graham J. Hornig, Haille M. Sharum, James R. Hoffman, Mark R. Freeman, & Frank A. Hegmann. Nature Physics (2017) doi:10.1038/nphys4047 Published online 20 February 2017

Both papers are behind a paywall.

Quantum Shorts & Quantum Applications event at Vancouver’s (Canada) Science World

This is very short notice but if you do have some free time on Thursday, Feb. 23, 2017 from 6 – 8:30 pm, you can check out Science World’s Quantum: The Exhibition for free and watch a series of short films. Here’s more from the Quantum Shorts & Quantum Applications event page,

Join us for an evening of quantum art and science. Visit Quantum: The Exhibition and view a series of short films inspired by the science, history, and philosophy of quantum. Find some answers to your Quantum questions at this mind-expanding panel discussion.

Thursday, February 23: 

6pm                      Check out Quantum: The Exhibition
7pm                      Quantum Shorts Screening
7:45pm                 Panel Discussion/Presentation
8:30pm                 Q & A

Light refreshments will be available.

There are still spaces as of Weds., Feb. 22, 2017; you can register for the event here.

This will be one of the last chances you’ll have to see Quantum: The Exhibition as the show’s last day here is scheduled for Feb. 26, 2017.

Nominations open for Kabiller Prizes in Nanoscience and Nanomedicine ($250,000 for visionary researcher and $10,000 for young investigator)

For a change I can publish something that doesn’t have a deadline in three days or less! Without further ado (from a Feb. 20, 2017 Northwestern University news release by Megan Fellman [h/t Nanowerk’s Feb. 20, 2017 news item]),

Northwestern University’s International Institute for Nanotechnology (IIN) is now accepting nominations for two prestigious international prizes: the $250,000 Kabiller Prize in Nanoscience and Nanomedicine and the $10,000 Kabiller Young Investigator Award in Nanoscience and Nanomedicine.

The deadline for nominations is May 15, 2017. Details are available on the IIN website.

“Our goal is to recognize the outstanding accomplishments in nanoscience and nanomedicine that have the potential to benefit all humankind,” said David G. Kabiller, a Northwestern trustee and alumnus. He is a co-founder of AQR Capital Management, a global investment management firm in Greenwich, Connecticut.

The two prizes, awarded every other year, were established in 2015 through a generous gift from Kabiller. Current Northwestern-affiliated researchers are not eligible for nomination until 2018 for the 2019 prizes.

The Kabiller Prize — the largest monetary award in the world for outstanding achievement in the field of nanomedicine — celebrates researchers who have made the most significant contributions to the field of nanotechnology and its application to medicine and biology.

The Kabiller Young Investigator Award recognizes young emerging researchers who have made recent groundbreaking discoveries with the potential to make a lasting impact in nanoscience and nanomedicine.

“The IIN at Northwestern University is a hub of excellence in the field of nanotechnology,” said Kabiller, chair of the IIN executive council and a graduate of Northwestern’s Weinberg College of Arts and Sciences and Kellogg School of Management. “As such, it is the ideal organization from which to launch these awards recognizing outstanding achievements that have the potential to substantially benefit society.”

Nanoparticles for medical use are typically no larger than 100 nanometers — comparable in size to the molecules in the body. At this scale, the essential properties of structures (color, melting point, conductivity, etc.) behave uniquely. Researchers are capitalizing on these unique properties in their quest to realize life-changing advances in the diagnosis, treatment and prevention of disease.

“Nanotechnology is one of the key areas of distinction at Northwestern,” said Chad A. Mirkin, IIN director and George B. Rathmann Professor of Chemistry in Weinberg. “We are very grateful for David’s ongoing support and are honored to be stewards of these prestigious awards.”

An international committee of experts in the field will select the winners of the 2017 Kabiller Prize and the 2017 Kabiller Young Investigator Award and announce them in September.

The recipients will be honored at an awards banquet Sept. 27 in Chicago. They also will be recognized at the 2017 IIN Symposium, which will include talks from prestigious speakers, including 2016 Nobel Laureate in Chemistry Ben Feringa, from the University of Groningen, the Netherlands.

2015 recipient of the Kabiller Prize

The winner of the inaugural Kabiller Prize, in 2015, was Joseph DeSimone, the Chancellor’s Eminent Professor of Chemistry at the University of North Carolina at Chapel Hill and the William R. Kenan Jr. Distinguished Professor of Chemical Engineering at North Carolina State University and of Chemistry at UNC-Chapel Hill.

DeSimone was honored for his invention of particle replication in non-wetting templates (PRINT) technology that enables the fabrication of precisely defined, shape-specific nanoparticles for advances in disease treatment and prevention. Nanoparticles made with PRINT technology are being used to develop new cancer treatments, inhalable therapeutics for treating pulmonary diseases, such as cystic fibrosis and asthma, and next-generation vaccines for malaria, pneumonia and dengue.

2015 recipient of the Kabiller Young Investigator Award

Warren Chan, professor at the Institute of Biomaterials and Biomedical Engineering at the University of Toronto, was the recipient of the inaugural Kabiller Young Investigator Award, also in 2015. Chan and his research group have developed an infectious disease diagnostic device for point-of-care use that can differentiate symptoms.

BTW, Warren Chan, winner of the ‘Young Investigator Award’, and/or his work have been featured here a few times, most recently in a Nov. 1, 2016 posting, which is mostly about another award he won but also includes links to some of his work including my April 27, 2016 post about the discovery that fewer than 1% of nanoparticle-based drugs reach their destination.

Aliens wreak havoc on our personal electronics

The aliens in question are subatomic particles and the havoc they wreak is low-grade according to the scientist who was presenting on the topic at the AAAS (American Association for the Advancement of Science) 2017 Annual Meeting (Feb. 16 – 20, 2017) in Boston, Massachusetts. From a Feb. 17, 2017 news item on ScienceDaily,

You may not realize it but alien subatomic particles raining down from outer space are wreaking low-grade havoc on your smartphones, computers and other personal electronic devices.

When your computer crashes and you get the dreaded blue screen or your smartphone freezes and you have to go through the time-consuming process of a reset, most likely you blame the manufacturer: Microsoft or Apple or Samsung. In many instances, however, these operational failures may be caused by the impact of electrically charged particles generated by cosmic rays that originate outside the solar system.

“This is a really big problem, but it is mostly invisible to the public,” said Bharat Bhuva, professor of electrical engineering at Vanderbilt University, in a presentation on Friday, Feb. 17 at a session titled “Cloudy with a Chance of Solar Flares: Quantifying the Risk of Space Weather” at the annual meeting of the American Association for the Advancement of Science in Boston.

A Feb. 17, 2017 Vanderbilt University news release (also on EurekAlert), which originated the news item, expands on the theme,

When cosmic rays traveling at fractions of the speed of light strike the Earth’s atmosphere they create cascades of secondary particles including energetic neutrons, muons, pions and alpha particles. Millions of these particles strike your body each second. Despite their numbers, this subatomic torrent is imperceptible and has no known harmful effects on living organisms. However, a fraction of these particles carry enough energy to interfere with the operation of microelectronic circuitry. When they interact with integrated circuits, they may alter individual bits of data stored in memory. This is called a single-event upset or SEU.

Since it is difficult to know when and where these particles will strike and they do not do any physical damage, the malfunctions they cause are very difficult to characterize. As a result, determining the prevalence of SEUs is not easy or straightforward. “When you have a single bit flip, it could have any number of causes. It could be a software bug or a hardware flaw, for example. The only way you can determine that it is a single-event upset is by eliminating all the other possible causes,” Bhuva explained.

There have been a number of incidents that illustrate how serious the problem can be, Bhuva reported. For example, in 2003 in the town of Schaerbeek, Belgium, a bit flip in an electronic voting machine added 4,096 extra votes to one candidate. The error was detected only because it gave the candidate more votes than were possible, and it was traced to a single bit flip in the machine’s register (4,096 is exactly 2^12 – the signature of one bit changing state). In 2008, the avionics system of a Qantas passenger jet flying from Singapore to Perth appeared to suffer a single-event upset that caused the autopilot to disengage. As a result, the aircraft dove 690 feet in only 23 seconds, injuring about a third of the passengers seriously enough that the aircraft diverted to the nearest airstrip. In addition, there have been a number of unexplained glitches in airline computers – some of which experts believe must have been caused by SEUs – that have forced the cancellation of hundreds of flights, at significant economic cost.

An analysis of SEU failure rates in consumer electronic devices, performed by Ritesh Mastipuram and Edwin Wee at Cypress Semiconductor on a previous generation of technology, shows how prevalent the problem may be. Their results were published in 2004 in Electronic Design News and provided the following estimates (I run a quick sanity check on these numbers after the list):

  • A simple cell phone with 500 kilobytes of memory should experience only one potential error every 28 years.
  • A router farm like those used by Internet providers, with only 25 gigabytes of memory, may experience one potential networking error that interrupts its operation every 17 hours.
  • A person flying in an airplane at 35,000 feet (where radiation levels are considerably higher than they are at sea level) who is working on a laptop with 500 kilobytes of memory may experience one potential error every five hours.
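Here’s that sanity check: a back-of-envelope Python sketch of my own (not from the Cypress study) that assumes upsets scale linearly with the number of memory bits at a fixed altitude, takes the cell-phone figure as the baseline, and scales it up to the router farm:

  HOURS_PER_YEAR = 24 * 365

  # The cell-phone estimate: one error per 28 years in 500 KB of memory.
  phone_bits = 500 * 1024 * 8
  per_bit_per_hour = 1 / (28 * HOURS_PER_YEAR) / phone_bits

  # Scale the same per-bit rate up to 25 GB of router-farm memory.
  router_bits = 25 * 1024**3 * 8
  hours_between_errors = 1 / (per_bit_per_hour * router_bits)
  print(f"one upset every {hours_between_errors:.1f} hours")  # ~4.7 hours

That lands within a factor of four of the published 17-hour figure – a reasonable match, given that presumably only a fraction of upsets actually interrupt networking and the study’s exact assumptions aren’t spelled out.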

Bhuva is a member of Vanderbilt’s Radiation Effects Research Group, which was established in 1987 and is the largest academic program in the United States that studies the effects of radiation on electronic systems. The group’s primary focus was initially on military and space applications. Since 2001, the group has also been analyzing radiation effects on consumer electronics in the terrestrial environment. They have studied this phenomenon in the last eight generations of computer chip technology, including the current generation that uses 3D transistors (known as FinFET) that are only 16 nanometers in size. The 16-nanometer study was funded by a group of top microelectronics companies, including Altera, ARM, AMD, Broadcom, Cisco Systems, Marvell, MediaTek, Renesas, Qualcomm, Synopsys, and TSMC.

“The semiconductor manufacturers are very concerned about this problem because it is getting more serious as the size of the transistors in computer chips shrinks and the power and capacity of our digital systems increase,” Bhuva said. “In addition, microelectronic circuits are everywhere and our society is becoming increasingly dependent on them.”

To determine the rate of SEUs in 16-nanometer chips, the Vanderbilt researchers took samples of the integrated circuits to the Irradiation of Chips and Electronics (ICE) House at Los Alamos National Laboratory. There they exposed them to a neutron beam and analyzed how many SEUs the chips experienced. Experts measure the failure rate of microelectronic circuits in a unit called a FIT, which stands for failures in time. One FIT is one failure per transistor in one billion hours of operation. That may seem infinitesimal, but it adds up extremely quickly with billions of transistors in many of our devices and billions of electronic systems in use today (the number of smartphones alone is in the billions). Most electronic components have failure rates measured in the hundreds or thousands of FITs.
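To make that arithmetic concrete, here’s a small sketch of my own – the component FIT rate and fleet size are hypothetical, chosen only to show how “infinitesimal” rates add up:

  # One FIT = one failure per billion (1e9) hours of operation.
  component_fit = 1000            # high end of "hundreds or thousands of FITs"
  failures_per_hour = component_fit / 1e9

  years_between = 1 / failures_per_hour / (24 * 365)
  print(f"one component: a failure every {years_between:.0f} years")  # ~114 years

  fleet = 2_000_000_000           # assumed: a couple of billion devices in use
  print(f"across the fleet: {failures_per_hour * fleet:.0f} failures per hour")  # ~2000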

[Chart] Trends in single event upset failure rates at the individual transistor, integrated circuit and system or device level for the three most recent manufacturing technologies. (Bharat Bhuva, Radiation Effects Research Group, Vanderbilt University)

“Our study confirms that this is a serious and growing problem,” said Bhuva. “This did not come as a surprise. Through our research on radiation effects on electronic circuits developed for military and space applications, we have been anticipating such effects on electronic systems operating in the terrestrial environment.”

Although the details of the Vanderbilt studies are proprietary, Bhuva described the general trend that they have found in the last three generations of integrated circuit technology: 28-nanometer, 20-nanometer and 16-nanometer.

As transistor sizes have shrunk, they have required less and less electrical charge to represent a logical bit. So the likelihood that one bit will “flip” from 0 to 1 (or 1 to 0) when struck by an energetic particle has been increasing. This has been partially offset by the fact that, as transistors have gotten smaller, they have also become smaller targets, so the rate at which they are struck has decreased.

More significantly, the current generation of 16-nanometer circuits uses a 3D (FinFET) architecture that replaced the previous planar 2D design and has proven to be significantly less susceptible to SEUs. Although this improvement has been partially offset by the increase in the number of transistors in each chip, the failure rate at the chip level has still dropped slightly. However, the increase in the total number of transistors being used in new electronic systems has meant that the SEU failure rate at the device level has continued to rise.
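The interplay of those three trends is easier to see with numbers. Here’s a toy model of my own – the rates and counts below are made up, since the Vanderbilt data are proprietary, but they reproduce the qualitative pattern Bhuva describes:

  # Hypothetical numbers: (per-transistor SEU rate in arbitrary units,
  # transistors per chip, chips' worth of logic per end device).
  generations = {
      "28 nm": (1.00, 1.0e9, 1.0),
      "20 nm": (0.60, 2.0e9, 1.5),
      "16 nm": (0.20, 5.0e9, 2.5),  # FinFET cuts the per-transistor rate
  }

  for name, (rate, transistors, chips) in generations.items():
      chip_rate = rate * transistors
      device_rate = chip_rate * chips
      print(f"{name}: chip {chip_rate:.2e}, device {device_rate:.2e}")
  # chip rate:   1.0e9 -> 1.2e9 -> 1.0e9  (dips slightly at 16 nm)
  # device rate: 1.0e9 -> 1.8e9 -> 2.5e9  (still climbing)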

Unfortunately, it is not practical to simply shield microelectronics from these energetic particles. For example, it would take more than 10 feet of concrete to keep a circuit from being zapped by energetic neutrons. However, there are ways to design computer chips to dramatically reduce their vulnerability.

For cases where reliability is absolutely critical, you can simply design the processors in triplicate and have them vote – an approach known as triple modular redundancy. Bhuva pointed out: “The probability that SEUs will occur in two of the circuits at the same time is vanishingly small. So if two circuits produce the same result it should be correct.” This is the approach that NASA used to maximize the reliability of spacecraft computer systems.
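A minimal sketch of that voting scheme (my own illustration, not NASA’s implementation):

  from collections import Counter

  def tmr_vote(replicas):
      """Return the majority result of three independent replicas.

      A single SEU-corrupted replica is simply outvoted; if no two
      replicas agree, the fault is detected but cannot be corrected.
      """
      value, count = Counter(replicas).most_common(1)[0]
      if count >= 2:
          return value
      raise RuntimeError("no majority: more than one replica faulted")

  # The same computation run three times; one copy suffers a simulated bit flip.
  results = [42, 42 ^ (1 << 3), 42]
  print(tmr_vote(results))  # -> 42

The design rationale: a single upset can corrupt at most one replica, so two matching results almost certainly reflect the uncorrupted computation.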

The good news, Bhuva said, is that the aviation, medical equipment, IT, transportation, communications, financial and power industries are all aware of the problem and are taking steps to address it. “It is only the consumer electronics sector that has been lagging behind in addressing this problem.”

The engineer’s bottom line: “This is a major problem for industry and engineers, but it isn’t something that members of the general public need to worry much about.”

That’s fascinating and I hope the consumer electronics industry catches up with this ‘alien invasion’ issue. Finally, the ‘bit flips’ made me think of the 1956 movie ‘Invasion of the Body Snatchers’.