Tag Archives: NIST

A solution to the problem of measuring nanoparticles

As you might expect from the US National Institute of Standards and Technology (NIST), this research concerns measurement techniques. From an August 15, 2019 news item on Nanowerk (Note: Links have been removed),

Tiny nanoparticles play a gargantuan role in modern life, even if most consumers are unaware of their presence. They provide essential ingredients in sunscreen lotions, prevent athlete’s foot fungus in socks, and fight microbes on bandages. They enhance the colors of popular candies and keep the powdered sugar on doughnuts powdery. They are even used in advanced drugs that target specific types of cells in cancer treatments.

When chemists analyze a sample, however, it is challenging to measure the sizes and quantities of these particles — which are often 100,000 times smaller than the thickness of a piece of paper. Technology offers many options for assessing nanoparticles, but experts have not reached a consensus on which technique is best.

In a new paper from the National Institute of Standards and Technology (NIST) and collaborating institutions, researchers have concluded that measuring the range of sizes in nanoparticles — instead of just the average particle size — is optimal for most applications.

An August 14, 2019 NIST news release (also received via email and on EurekAlert), which originated the news item, delves further into the research,

“It seems like a simple choice,” said NIST’s Elijah Petersen, the lead author of the paper, which was published today in Environmental Science: Nano. “But it can have a big impact on the outcome of your assessment.”

As with many measurement questions, precision is key. Exposure to a certain amount of some nanoparticles could have adverse effects. Pharmaceutical researchers often need exactitude to maximize a drug’s efficacy. And environmental scientists need to know, for example, how many nanoparticles of gold, silver or titanium could potentially cause a risk to organisms in soil or water.

Using more nanoparticles than needed in a product because of inconsistent measurements could also waste money for manufacturers.

Although they might sound ultramodern, nanoparticles are neither new nor based solely on high-tech manufacturing processes. A nanoparticle is really just a submicroscopic particle that measures less than 100 nanometers on at least one of its dimensions. It would be possible to place hundreds of thousands of them onto the head of a pin. They are exciting to researchers because many materials act differently at the nanometer scale than they do at larger scales, and nanoparticles can be made to do lots of useful things.

Nanoparticles have been in use since the days of ancient Mesopotamia [emphasis mine], when ceramic artists used extremely small bits of metal to decorate vases and other vessels. In fourth-century Rome, glass artisans ground metal into tiny particles to change the color of their wares under different lighting. These techniques were forgotten for a while but rediscovered in the 1600s by resourceful manufacturers for glassmaking [emphasis mine] again. Then, in the 1850s, scientist Michael Faraday extensively researched ways to use various kinds of wash mixes to change the performance of gold particles.

Modern nanoparticle research advanced quickly in the mid-20th century due to technological innovations in optics. Being able to see the individual particles and study their behavior expanded the possibilities for experimentation. The largest advances came, however, after experimental nanotechnology took off in the 1990s. Suddenly, the behavior of single particles of gold and many other substances could be closely examined and manipulated. Discoveries about the ways that small amounts of a substance would reflect light, absorb light, or change in behavior were numerous, leading to the incorporation of nanoparticles into many more products.

Debates have since followed about their measurement. When assessing the response of cells or organisms to nanoparticles, some researchers prefer measuring particle number concentrations (sometimes called PNCs by scientists). Many find PNCs challenging since extra formulas must be employed when determining the final measurement. Others prefer measuring mass or surface area concentrations.

PNCs are often used for characterizing metals in chemistry. The situation for nanoparticles is inherently more complex, however, than it is for dissolved organic or inorganic substances because unlike dissolved chemicals, nanoparticles can come in a wide variety of sizes and sometimes stick together when added to testing materials.

“If you have a dissolved chemical, it’s always going to have the same molecular formula, by definition,” Petersen says. “Nanoparticles don’t just have a certain number of atoms, however. Some will be 9 nanometers, some will be 11, some might be 18, and some might be 3.”

The problem is that each of those particles may be fulfilling an important role. While a simple estimate of particle number is perfectly fine for some industrial applications, therapeutic applications require much more robust measurement. In the case of cancer therapies, for example, each particle, no matter how big or small, may be delivering a needed antidote. And just as with any other kind of dosage, nanoparticle dosage must be exact in order to be safe and effective.

Using the range of particle sizes to calculate the PNC will be the most helpful approach in most cases, said Petersen. The size distribution doesn’t use a mean or an average but records the complete distribution of particle sizes, so that formulas can be used to determine how many particles are in a sample.
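
For readers who want to see the arithmetic, here is a minimal sketch of that calculation, assuming spherical particles of a known density; the diameters, density and mass concentration below are made-up illustrative values, not data from the paper.

```python
# Illustrative sketch only: estimate a particle number concentration (PNC)
# from a mass concentration using the full measured size distribution,
# assuming spherical particles of known density. All numbers are made up.
import numpy as np

def number_concentration(mass_conc_g_per_L, diameters_nm, density_g_per_cm3):
    """Particles per litre, given a mass concentration and a list of
    measured particle diameters (nanometres)."""
    radii_cm = np.asarray(diameters_nm, dtype=float) / 2 * 1e-7  # nm -> cm
    volumes_cm3 = (4.0 / 3.0) * np.pi * radii_cm**3
    mean_particle_mass_g = density_g_per_cm3 * volumes_cm3.mean()
    return mass_conc_g_per_L / mean_particle_mass_g

# Hypothetical gold nanoparticle sample with diameters spread from 3 to 18 nm.
measured_diameters_nm = [3, 9, 9, 11, 11, 11, 18]
full_distribution = number_concentration(1e-6, measured_diameters_nm, 19.3)
mean_only = number_concentration(1e-6, [np.mean(measured_diameters_nm)], 19.3)

print(f"PNC from the full size distribution: {full_distribution:.2e} per litre")
print(f"PNC from the mean diameter alone:    {mean_only:.2e} per litre")
# The two estimates differ because averaging diameters before cubing them
# understates the contribution of the larger particles -- which is why the
# paper favours using the whole distribution rather than a single average.
```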

But no matter which approach is used, researchers need to make note of it in their papers, for the sake of comparability with other studies. “Don’t assume that different approaches will give you the same result,” he said.

Petersen adds that he and his colleagues were surprised by how much the coatings on nanoparticles could impact measurement. Some coatings, he noted, can have a positive electrical charge, causing clumping.

Petersen worked in collaboration with researchers from federal laboratories in Switzerland, and with scientists from 3M who have previously made many nanoparticle measurements for use in industrial settings. Researchers from Switzerland, like those in much of the rest of Europe, are keen to learn more about measuring nanoparticles because PNCs are required in many regulatory situations. There hasn’t been much information on which techniques are best or more likely to yield the most precise results across many applications.

“Until now we didn’t even know if we could find agreement among labs about particle number concentrations,” Petersen says. “They are complex. But now we are beginning to see it can be done.”

I love the reference to glassmaking and ancient Mesopotamia. Getting back to current times, here’s a link to and a citation for the paper,

Determining what really counts: modeling and measuring nanoparticle number concentrations by Elijah J. Petersen, Antonio R. Montoro Bustos, Blaza Toman, Monique E. Johnson, Mark Ellefson, George C. Caceres, Anna Lena Neuer, Qilin Chan, Jonathan W. Kemling, Brian Mader, Karen Murphy and Matthias Roesslein. Environmental Science: Nano. Published August 14, 2019. DOI: 10.1039/c9en00462a

This paper is behind a paywall.

Analyzing a buckyball’s (buckminsterfullerene) quantum structure

The work was done jointly by the US National Institute of Standards and Technology (NIST) and JILA (Joint Institute for Laboratory Astrophysics), which is operated ‘jointly’ by NIST and the University of Colorado. On to buckyballs, a nickname for buckminsterfullerenes or C60.

From a January 28, 2019 news item on ScienceDaily,

JILA researchers have measured hundreds of individual quantum energy levels in the buckyball, a spherical cage of 60 carbon atoms. It’s the largest molecule that has ever been analyzed at this level of experimental detail in the history of quantum mechanics. Fully understanding and controlling this molecule’s quantum details could lead to new scientific fields and applications, such as an entire quantum computer contained in a single buckyball.

Caption: JILA researchers used frequency combs, or “rulers of light,” to observe individual quantum energy transitions in buckyballs. Credit: Steven Burrows/JILA

There are two types of spherical objects in the image: the smooth blue ones, which are not buckyballs, and the ridged ones, which are.

A January 28, 2019 NIST news release (also on EurekAlert), which originated the news item, describes the buckyball molecule and the research in more detail,

The buckyball, formally known as buckminsterfullerene, is extremely complex. Due to its enormous 60-atom size, the overall molecule has a staggeringly high number of ways to vibrate–at least 100,000,000,000,000,000,000,000,000 vibrational quantum states when the molecule is warm. That’s in addition to the many different energy states for the buckyball’s rotation and other properties.

As described in the January 4 [2019] issue of Science, the JILA team used an updated version of their frequency comb spectroscopy and cryogenic buffer gas cooling system to observe isolated, individual energy transitions among rotational and vibrational states in cold, gaseous buckyballs. This is the first time anyone has been able to prepare buckyballs in this form to analyze their rotations and vibrations at the quantum level.

JILA is jointly operated by the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder.

Buckyballs, first discovered in 1985, have created great scientific excitement. But high-resolution spectroscopy, which can reveal the details of the molecule’s rotational and vibrational properties, didn’t work at ordinary room temperatures because the signals were too congested, NIST/JILA Fellow Jun Ye said. Low temperatures (about -138 degrees Celsius, which is -216 degrees Fahrenheit) enabled researchers to concentrate the molecules into a single rotational-vibrational quantum state at the lowest energy level and probe them with high-resolution spectroscopy.

The buckyball is the most symmetric molecule known, with a soccer-ball-like shape known as a truncated icosahedron. It is small enough to be fully understood with basic quantum mechanics principles. Yet it is large enough to reveal insights into the extreme quantum complexity that emerges in huge systems.

As an example of practical applications, buckyballs could act as a pristine network of 60 atoms. The core of each atom possesses an identical property known as “nuclear spin,” which enables it to interact magnetically with its environment. Therefore, each spin could act as a magnetically controlled quantum bit or “qubit” in a quantum computer.

“If we had a buckyball made of pure isotopic carbon-13, each atom would have a nuclear spin of 1/2, and each buckyball could serve as a 60-qubit quantum computer,” Ye said. “Of course, we don’t have such capabilities yet; we would need to first capture these buckyballs in traps.”

A key part of the new quantum revolution, a quantum computer using qubits made of atoms or other materials could potentially solve important problems that are intractable using today’s machines. NIST has a major stake in quantum science.

“There are also a lot of astrophysics connections,” Ye continued. “There are abundant buckyball signals coming from remote carbon stars,” so the new data will enable scientists to better understand the universe.

After they measured the quantum energy levels, the JILA researchers collected statistics on buckyballs’ nuclear spin values. They confirmed that all 60 atoms were indistinguishable, or virtually identical. Precise measurements of the buckyball’s transition energies between individual quantum states revealed its atoms interacted strongly with one another, providing insights into the complexities of its molecular structure and the forces between atoms.

For the experiments, an oven converted a solid sample of material into gaseous buckyballs. These hot molecules flowed into a cell (container) anchored to a cryogenic cold apparatus, such that the molecules were cooled by collisions with cold argon gas atoms. Then laser light at precise frequencies was aimed at the cold gas molecules, and researchers measured how much light was absorbed. The observed structure in the infrared spectrum encoded details of the quantum-mechanical energy-level structure.

The laser light was produced by an optical frequency comb, or “ruler of light,” and aimed into an optical cavity surrounding the cold cell to enhance the absorption signals. The comb contained about 1000 “teeth” at optical frequencies spanning the full band of buckyball vibrations. The comb light was generated from a single fiber laser.

Here’s a link to and a citation for the paper,

Rovibrational quantum state resolution of the C60 fullerene by P. Bryan Changala, Marissa L. Weichman, Kevin F. Lee, Martin E. Fermann, Jun Ye. Science 04 Jan 2019: Vol. 363, Issue 6422, pp. 49-54. DOI: 10.1126/science.aav2616

This paper appears to be open access.

Quantum guitar music

The sound quality the physicists at the US National Institute of Standards and Technology (NIST) have achieved is quite good compared to carbon nanotube radio. If you’re curious, the audio file is embedded in both the American Institute of Physics (AIP) June 18, 2019 news release and the copy on EurekAlert,

It sounds like an old-school vinyl record, but the distinctive crackle in the music streamed into Chris Holloway’s laboratory is atomic in origin. The group at the National Institute of Standards and Technology, Boulder, Colorado, spent a long six years finding a way to directly measure electric fields using atoms, so who can blame them for then having a little fun with their new technology?

“My vision is to cut a CD in the lab — our studio — at some point and have the first CD recorded with Rydberg atoms,” said Holloway. While he doesn’t expect the atomic-recording’s lower sound quality to replace digital music recordings, the team of research scientists is considering how this “entertaining” example of atomic sensing could be applied in communication devices of the future.

“Atom-based antennas might give us a better way of picking up audio data in the presence of noise, potentially even the very weak signals transmitted in deep space communications,” said Holloway, who describes his atomic receiver in AIP Advances, from AIP Publishing.

The atoms in question — Rydberg atoms — are atoms excited by lasers into a high energy state that responds in a measurable way to radio waves (an electric field). After figuring out how to measure electric field strength using the Rydberg atoms, Holloway said it was a relatively simple step to apply the same atoms to record and play back music — starting with Holloway’s own guitar improvisations in A minor.

They encoded the music onto radio waves in much the same way cellphone conversations are encoded onto radio waves for transmission. The atoms respond to these radio waves, and in turn, the laser beams shined through the Rydberg atoms are affected. These changes are picked up on a photodetector, which feeds an electric signal into the speaker or computer — and voila! The atomic radio was born.
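
As a rough analogy for that encode-and-recover chain, and emphatically not a model of the Rydberg-atom physics, here is a small sketch of ordinary amplitude modulation followed by a crude envelope detector; all of the frequencies and values are arbitrary stand-ins.

```python
# Toy illustration of encoding audio onto a carrier and recovering it again.
# This is plain amplitude modulation plus a crude envelope detector, not the
# electromagnetically induced transparency physics used in the paper.
import numpy as np

fs = 200_000                                  # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)                # 10 ms of signal
audio = np.sin(2 * np.pi * 440 * t)           # a 440 Hz "guitar" tone
carrier = np.cos(2 * np.pi * 20_000 * t)      # stand-in radio-frequency carrier

transmitted = (1 + 0.5 * audio) * carrier     # audio encoded as the envelope

# Receiver: rectify the incoming wave, smooth away the carrier, drop the DC.
rectified = np.abs(transmitted)
window = 40                                   # about 4 carrier cycles
recovered = np.convolve(rectified, np.ones(window) / window, mode="same")
recovered -= recovered.mean()

# The recovered waveform tracks the original tone closely.
corr = np.corrcoef(audio[window:-window], recovered[window:-window])[0, 1]
print(f"correlation between original and recovered audio: {corr:.2f}")
```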

The team used their quantum system to pick up stereo — with one atomic species recording the instrumental and another the vocal at two different sets of laser frequencies. They selected a Queen track — “Under Pressure” — to test if their system could handle Freddie Mercury’s extensive vocal range.

“One of the reasons for cutting stereo was to show that this one receiver can pick up two channels simultaneously, which is difficult with conventional receivers,” said Holloway, who explained that although it is the early days for atomic communications, there is potential to use this to improve the security of communications.

For now, Holloway’s team are staying tuned into atomic radio as they try to determine how weak a signal the Rydberg atoms can detect, and what data transfer speeds can be achieved.

They are not forgetting the atomic record they want to produce, with which they hope to inspire the next generation of quantum scientists.

Here’s a link to and a citation for the paper,

A “real-time” guitar recording using Rydberg atoms and electromagnetically induced transparency: Quantum physics meets music by Christopher L. Holloway, Matthew T. Simons, Abdulaziz H. Haddab, Carl J. Williams, and Maxwell W. Holloway. AIP Advances volume 9 (6), 065110 (2019). DOI: 10.1063/1.5099036 Published online: 18 June 2019

This paper is open access and, if you want to hear the guitar music, click on the AIP news release or EurekAlert links at the top of this posting.

Researchers, manufacturers, and administrators need to consider shared quality control challenges to advance the nanoparticle manufacturing industry

Manufacturing remains a bit of an issue where nanotechnology is concerned due to the difficulties of producing nanoparticles of a consistent size and type,


Electron micrograph showing gallium arsenide nanoparticles of varying shapes and sizes. Such heterogeneity [variation]  can increase costs and limit profits when making nanoparticles into products. A new NIST study recommends that researchers, manufacturers and administrators work together to solve this, and other common problems, in nanoparticle manufacturing. Credit: A. Demotiere, E. Shevchenko/Argonne National Laboratory

The US National Institute of Standards and Technology (NIST) has produced a paper focusing on how nanoparticle manufacturing might become more effective, from an August 22, 2018 news item on ScienceDaily,

Nanoparticle manufacturing, the production of material units less than 100 nanometers in size (100,000 times smaller than a marble), is proving the adage that “good things come in small packages.” Today’s engineered nanoparticles are integral components of everything from the quantum dot nanocrystals coloring the brilliant displays of state-of-the-art televisions to the miniscule bits of silver helping bandages protect against infection. However, commercial ventures seeking to profit from these tiny building blocks face quality control issues that, if unaddressed, can reduce efficiency, increase production costs and limit commercial impact of the products that incorporate them.

To help overcome these obstacles, the National Institute of Standards and Technology (NIST) and the nonprofit World Technology Evaluation Center (WTEC) advocate that nanoparticle researchers, manufacturers and administrators “connect the dots” by considering their shared challenges broadly and tackling them collectively rather than individually. This includes transferring knowledge across disciplines, coordinating actions between organizations and sharing resources to facilitate solutions.

The recommendations are presented in a new paper in the journal ACS Applied Nano Materials.

An August 22, 2018 NIST news release, which originated the news item, describes how the authors of the ACS [American Chemical Society] Applied Nano Materials paper developed their recommendations,

“We looked at the big picture of nanoparticle manufacturing to identify problems that are common for different materials, processes and applications,” said NIST physical scientist Samuel Stavis, lead author of the paper. “Solving these problems could advance the entire enterprise.”

The new paper provides a framework to better understand these issues. It is the culmination of a study initiated by a workshop organized by NIST that focused on the fundamental challenge of reducing or mitigating heterogeneity, the inadvertent variations in nanoparticle size, shape and other characteristics that occur during their manufacture.

“Heterogeneity can have significant consequences in nanoparticle manufacturing,” said NIST chemical engineer and co-author Jeffrey Fagan.

In their paper, the authors noted that the most profitable innovations in nanoparticle manufacturing minimize heterogeneity during the early stages of the operation, reducing the need for subsequent processing. This decreases waste, simplifies characterization and improves the integration of nanoparticles into products, all of which save money.

The authors illustrated the point by comparing the production of gold nanoparticles and carbon nanotubes. For gold, they stated, the initial synthesis costs can be high, but the similarity of the nanoparticles produced requires less purification and characterization. Therefore, they can be made into a variety of products, such as sensors, at relatively low costs.

In contrast, the more heterogeneous carbon nanotubes are less expensive to synthesize but require more processing to yield those with desired properties. The added costs during manufacturing currently make nanotubes only practical for high-value applications such as digital logic devices.

“Although these nanoparticles and their end products are very different, the stakeholders in their manufacture can learn much from each other’s best practices,” said NIST materials scientist and co-author J. Alexander Liddle. “By sharing knowledge, they might be able to improve both seemingly disparate operations.”

Finding ways like this to connect the dots, the authors said, is critically important for new ventures seeking to transfer nanoparticle technologies from laboratory to market.

“Nanoparticle manufacturing can become so costly that funding expires before the end product can be commercialized,” said WTEC nanotechnology consultant and co-author Michael Stopa. “In our paper, we outlined several opportunities for improving the odds that new ventures will survive their journeys through this technology transfer ‘valley of death.’”

Finally, the authors considered how manufacturing challenges and innovations are affecting the ever-growing number of applications for nanoparticles, including those in the areas of electronics, energy, health care and materials.

Here’s a link to and a citation for the paper,

Nanoparticle Manufacturing – Heterogeneity through Processes to Products by Samuel M. Stavis, Jeffrey A. Fagan, Michael Stopa, and J. Alexander Liddle. ACS Appl. Nano Mater., Article ASAP DOI: 10.1021/acsanm.8b01239 Publication Date (Web): August 16, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

I looked at this paper briefly and found it to give a good overview. The focus is on manufacturing and making money. I imagine any discussion about the life cycle of the materials and possible environmental and health risks would have been considered ‘scope creep’.

I have two postings that provide additional information about manufacturing concerns, my February 10, 2014 posting:  ‘Valley of Death’, ‘Manufacturing Middle’, and other concerns in new government report about the future of nanomanufacturing in the US and my September 5, 2016 posting: An examination of nanomanufacturing and nanofabrication.

Announcing the ‘memtransistor’

Yet another advance toward ‘brainlike’ computing (how many times have I written this or a variation thereof in the last 10 years? See: Dexter Johnson’s take on the situation at the end of this post): Northwestern University announced their latest memristor research in a February 21, 2018 news item on Nanowerk,

Computer algorithms might be performing brain-like functions, such as facial recognition and language translation, but the computers themselves have yet to operate like brains.

“Computers have separate processing and memory storage units, whereas the brain uses neurons to perform both functions,” said Northwestern University’s Mark C. Hersam. “Neural networks can achieve complicated computation with significantly lower energy consumption compared to a digital computer.”

A February 21, 2018 Northwestern University news release (also on EurekAlert), which originated the news item, provides more information about the latest work from this team,

In recent years, researchers have searched for ways to make computers more neuromorphic, or brain-like, in order to perform increasingly complicated tasks with high efficiency. Now Hersam, a Walter P. Murphy Professor of Materials Science and Engineering in Northwestern’s McCormick School of Engineering, and his team are bringing the world closer to realizing this goal.

The research team has developed a novel device called a “memtransistor,” which operates much like a neuron by performing both memory and information processing. With combined characteristics of a memristor and transistor, the memtransistor also encompasses multiple terminals that operate more similarly to a neural network.

Supported by the National Institute of Standards and Technology and the National Science Foundation, the research was published online today, February 22 [2018], in Nature. Vinod K. Sangwan and Hong-Sub Lee, postdoctoral fellows advised by Hersam, served as the paper’s co-first authors.

The memtransistor builds upon work published in 2015, in which Hersam, Sangwan, and their collaborators used single-layer molybdenum disulfide (MoS2) to create a three-terminal, gate-tunable memristor for fast, reliable digital memory storage. Memristors, short for “memory resistors,” are resistors that “remember” the voltage previously applied to them. Typical memristors are two-terminal electronic devices, which can only control one voltage channel. By transforming the memristor into a three-terminal device, Hersam paved the way for memristors to be used in more complex electronic circuits and systems, such as neuromorphic computing.
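
To make the “memory” idea concrete, here is a generic, flux-controlled toy model of a memristive element of the sort used in textbook introductions; it is not the MoS2 memtransistor described above, and every constant in it is invented for illustration.

```python
# Generic toy model of a memristive element (not the MoS2 memtransistor):
# an internal state variable w drifts in proportion to the applied voltage
# integrated over time, and the resistance interpolates between an "on" and
# an "off" value, so the device "remembers" the voltages applied to it.
R_ON, R_OFF = 100.0, 16_000.0   # illustrative resistance limits, ohms
K = 100.0                        # invented drift rate, 1/(volt * second)

def apply_pulses(pulses, dt=1e-3, w=0.0):
    """Apply a train of voltage pulses (volts), each lasting dt seconds.
    w in [0, 1] is the internal state; w = 1 is the low-resistance state.
    Returns the resistance presented during each pulse."""
    trace = []
    for v in pulses:
        trace.append(R_ON * w + R_OFF * (1.0 - w))
        w = min(1.0, max(0.0, w + K * v * dt))   # drift, clamped to [0, 1]
    return trace

# Positive pulses lower the resistance step by step; negative pulses raise
# it again, and the state persists between pulses (non-volatile in this toy).
for step, r in enumerate(apply_pulses([1.0] * 5 + [-1.0] * 5), start=1):
    print(f"pulse {step:2d}: R = {r:7.1f} ohms")
```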

To develop the memtransistor, Hersam’s team again used atomically thin MoS2 with well-defined grain boundaries, which influence the flow of current. Similar to the way fibers are arranged in wood, atoms are arranged into ordered domains – called “grains” – within a material. When a large voltage is applied, the grain boundaries facilitate atomic motion, causing a change in resistance.

“Because molybdenum disulfide is atomically thin, it is easily influenced by applied electric fields,” Hersam explained. “This property allows us to make a transistor. The memristor characteristics come from the fact that the defects in the material are relatively mobile, especially in the presence of grain boundaries.”

But unlike his previous memristor, which used individual, small flakes of MoS2, Hersam’s memtransistor makes use of a continuous film of polycrystalline MoS2 that comprises a large number of smaller flakes. This enabled the research team to scale up the device from one flake to many devices across an entire wafer.

“When the length of the device is larger than the individual grain size, you are guaranteed to have grain boundaries in every device across the wafer,” Hersam said. “Thus, we see reproducible, gate-tunable memristive responses across large arrays of devices.”

After fabricating memtransistors uniformly across an entire wafer, Hersam’s team added additional electrical contacts. Typical transistors and Hersam’s previously developed memristor each have three terminals. In their new paper, however, the team realized a seven-terminal device, in which one terminal controls the current among the other six terminals.

“This is even more similar to neurons in the brain,” Hersam said, “because in the brain, we don’t usually have one neuron connected to only one other neuron. Instead, one neuron is connected to multiple other neurons to form a network. Our device structure allows multiple contacts, which is similar to the multiple synapses in neurons.”

Next, Hersam and his team are working to make the memtransistor faster and smaller. Hersam also plans to continue scaling up the device for manufacturing purposes.

“We believe that the memtransistor can be a foundational circuit element for new forms of neuromorphic computing,” he said. “However, making dozens of devices, as we have done in our paper, is different than making a billion, which is done with conventional transistor technology today. Thus far, we do not see any fundamental barriers that will prevent further scale up of our approach.”

The researchers have made this illustration available,

Caption: This is the memtransistor symbol overlaid on an artistic rendering of a hypothetical circuit layout in the shape of a brain. Credit: Hersam Research Group

Here’s a link to and a citation for the paper,

Multi-terminal memtransistors from polycrystalline monolayer molybdenum disulfide by Vinod K. Sangwan, Hong-Sub Lee, Hadallia Bergeron, Itamar Balla, Megan E. Beck, Kan-Sheng Chen, & Mark C. Hersam. Nature volume 554, pages 500–504 (22 February 2018) doi:10.1038/nature25747 Published online: 21 February 2018

This paper is behind a paywall.

The team’s earlier work referenced in the news release was featured here in an April 10, 2015 posting.

Dexter Johnson

From a Feb. 23, 2018 posting by Dexter Johnson on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

While this all seems promising, one of the big shortcomings in neuromorphic computing has been that it doesn’t mimic the brain in a very important way. In the brain, for every neuron there are a thousand synapses—the electrical signal sent between the neurons of the brain. This poses a problem because a transistor only has a single terminal, hardly an accommodating architecture for multiplying signals.

Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.

This most recent research builds on work that Hersam and his team conducted back in 2015 in which the researchers developed a three-terminal, gate-tunable memristor that operated like a kind of synapse.

While this work was recognized as mimicking the low-power computing of the human brain, critics didn’t really believe that it was acting like a neuron since it could only transmit a signal from one artificial neuron to another. This was far short of a human brain that is capable of making tens of thousands of such connections.

“Traditional memristors are two-terminal devices, whereas our memtransistors combine the non-volatility of a two-terminal memristor with the gate-tunability of a three-terminal transistor,” said Hersam to IEEE Spectrum. “Our device design accommodates additional terminals, which mimic the multiple synapses in neurons.”

Hersam believes that these unique attributes of these multi-terminal memtransistors are likely to present a range of new opportunities for non-volatile memory and neuromorphic computing.

If you have the time and the interest, Dexter’s post provides more context.

Less is more—a superconducting synapse

It seems the US National Institute of Standards and Technology (NIST) is more deeply invested in developing artificial brains than I had realized (See: April 17, 2018 posting). A January 26, 2018 NIST news release on EurekAlert describes the organization’s latest foray into the field,

Researchers at the National Institute of Standards and Technology (NIST) have built a superconducting switch that “learns” like a biological system and could connect processors and store memories in future computers operating like the human brain.

The NIST switch, described in Science Advances, is called a synapse, like its biological counterpart, and it supplies a missing piece for so-called neuromorphic computers. Envisioned as a new type of artificial intelligence, such computers could boost perception and decision-making for applications such as self-driving cars and cancer diagnosis.

A synapse is a connection or switch between two brain cells. NIST’s artificial synapse–a squat metallic cylinder 10 micrometers in diameter–is like the real thing because it can process incoming electrical spikes to customize spiking output signals. This processing is based on a flexible internal design that can be tuned by experience or its environment. The more firing between cells or processors, the stronger the connection. Both the real and artificial synapses can thus maintain old circuits and create new ones. Even better than the real thing, the NIST synapse can fire much faster than the human brain–1 billion times per second, compared to a brain cell’s 50 times per second–using just a whiff of energy, about one ten-thousandth as much as a human synapse. In technical terms, the spiking energy is less than 1 attojoule, lower than the background energy at room temperature and on a par with the chemical energy bonding two atoms in a molecule.

“The NIST synapse has lower energy needs than the human synapse, and we don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.

The new synapse would be used in neuromorphic computers made of superconducting components, which can transmit electricity without resistance, and therefore, would be more efficient than other designs based on semiconductors or software. Data would be transmitted, processed and stored in units of magnetic flux. Superconducting devices mimicking brain cells and transmission lines have been developed, but until now, efficient synapses–a crucial piece–have been missing.

The brain is especially powerful for tasks like context recognition because it processes data both in sequence and simultaneously and stores memories in synapses all over the system. A conventional computer processes data only in sequence and stores memory in a separate unit.

The NIST synapse is a Josephson junction, long used in NIST voltage standards. These junctions are a sandwich of superconducting materials with an insulator as a filling. When an electrical current through the junction exceeds a level called the critical current, voltage spikes are produced. The synapse uses standard niobium electrodes but has a unique filling made of nanoscale clusters of manganese in a silicon matrix.

The nanoclusters–about 20,000 per square micrometer–act like tiny bar magnets with “spins” that can be oriented either randomly or in a coordinated manner.

“These are customized Josephson junctions,” Schneider said. “We can control the number of nanoclusters pointing in the same direction, which affects the superconducting properties of the junction.”

The synapse rests in a superconducting state, except when it’s activated by incoming current and starts producing voltage spikes. Researchers apply current pulses in a magnetic field to boost the magnetic ordering, that is, the number of nanoclusters pointing in the same direction. This magnetic effect progressively reduces the critical current level, making it easier to create a normal conductor and produce voltage spikes.

The critical current is the lowest when all the nanoclusters are aligned. The process is also reversible: Pulses are applied without a magnetic field to reduce the magnetic ordering and raise the critical current. This design, in which different inputs alter the spin alignment and resulting output signals, is similar to how the brain operates.

Synapse behavior can also be tuned by changing how the device is made and its operating temperature. By making the nanoclusters smaller, researchers can reduce the pulse energy needed to raise or lower the magnetic order of the device. Raising the operating temperature slightly from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15 degrees C (minus 452.47 degrees F), for example, results in more and higher voltage spikes.

Crucially, the synapses can be stacked in three dimensions (3-D) to make large systems that could be used for computing. NIST researchers created a circuit model to simulate how such a system would operate.

The NIST synapse’s combination of small size, superfast spiking signals, low energy needs and 3-D stacking capability could provide the means for a far more complex neuromorphic system than has been demonstrated with other technologies, according to the paper.

NIST has prepared an animation illustrating the research,

Caption: This is an animation of how NIST’s artificial synapse works. Credit: Sean Kelley/NIST

Here’s a link to and a citation for the paper,

Ultralow power artificial synapses using nanotextured magnetic Josephson junctions by Michael L. Schneider, Christine A. Donnelly, Stephen E. Russek, Burm Baek, Matthew R. Pufall, Peter F. Hopkins, Paul D. Dresselhaus, Samuel P. Benz, and William H. Rippard. Science Advances 26 Jan 2018: Vol. 4, no. 1, e1701329 DOI: 10.1126/sciadv.1701329

This paper is open access.

Samuel K. Moore in a January 26, 2018 posting on the Nanoclast blog (on the IEEE [Institute for Electrical and Electronics Engineers] website) describes the research and adds a few technical explanations such as this about the Josephson junction,

In a magnetic Josephson junction, that “weak link” is magnetic. The higher the magnetic field, the lower the critical current needed to produce voltage spikes. In the device Schneider and his colleagues designed, the magnetic field is caused by 20,000 or so nanometer-scale clusters of manganese embedded in silicon. …

Moore also provides some additional links including this one to his November 29, 2017 posting where he describes four new approaches to computing including quantum computing and neuromorphic (brain-like) computing.

Thanks for the memory: the US National Institute of Standards and Technology (NIST) and memristors

In January 2018 it seemed like I was tripping across a lot of memristor stories. This came from a January 19, 2018 news item on Nanowerk,

In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. Scientists at the National Institute of Standards and Technology (NIST) have now unveiled the long-mysterious inner workings of these semiconductor elements, which can act like the short-term memory of nerve cells.

A January 18, 2018 NIST news release (also on EurekAlert), which originated the news item, fills in the details,

Just as the ability of one nerve cell to signal another depends on how often the cells have communicated in the recent past, the resistance of a memristor depends on the amount of current that recently flowed through it. Moreover, a memristor retains that memory even when electrical power is switched off.

But despite the keen interest in memristors, scientists have lacked a detailed understanding of how these devices work and have yet to develop a standard toolset to study them.

Now, NIST scientists have identified such a toolset and used it to more deeply probe how memristors operate. Their findings could lead to more efficient operation of the devices and suggest ways to minimize the leakage of current.

Brian Hoskins of NIST and the University of California, Santa Barbara, along with NIST scientists Nikolai Zhitenev, Andrei Kolmakov, Jabez McClelland and their colleagues from the University of Maryland’s NanoCenter in College Park and the Institute for Research and Development in Microtechnologies in Bucharest, reported the findings in a recent issue of Nature Communications.

To explore the electrical function of memristors, the team aimed a tightly focused beam of electrons at different locations on a titanium dioxide memristor. The beam knocked free some of the device’s electrons, which formed ultrasharp images of those locations. The beam also induced four distinct currents to flow within the device. The team determined that the currents are associated with the multiple interfaces between materials in the memristor, which consists of two metal (conducting) layers separated by an insulator.

“We know exactly where each of the currents are coming from because we are controlling the location of the beam that is inducing those currents,” said Hoskins.

In imaging the device, the team found several dark spots—regions of enhanced conductivity—which indicated places where current might leak out of the memristor during its normal operation. These leakage pathways resided outside the memristor’s core—where it switches between the low and high resistance levels that are useful in an electronic device. The finding suggests that reducing the size of a memristor could minimize or even eliminate some of the unwanted current pathways. Although researchers had suspected that might be the case, they had lacked experimental guidance about just how much to reduce the size of the device.

Because the leakage pathways are tiny, involving distances of only 100 to 300 nanometers, “you’re probably not going to start seeing some really big improvements until you reduce dimensions of the memristor on that scale,” Hoskins said.

To their surprise, the team also found that the current that correlated with the memristor’s switch in resistance didn’t come from the active switching material at all, but the metal layer above it. The most important lesson of the memristor study, Hoskins noted, “is that you can’t just worry about the resistive switch, the switching spot itself, you have to worry about everything around it.” The team’s study, he added, “is a way of generating much stronger intuition about what might be a good way to engineer memristors.”

Here’s a link to and a citation for the paper,

Stateful characterization of resistive switching TiO2 with electron beam induced currents by Brian D. Hoskins, Gina C. Adam, Evgheni Strelcov, Nikolai Zhitenev, Andrei Kolmakov, Dmitri B. Strukov, & Jabez J. McClelland. Nature Communications 8, Article number: 1972 (2017) doi:10.1038/s41467-017-02116-9 Published online: 07 December 2017

This is an open access paper.

It might be my imagination but it seemed like a lot of papers from 2017 were being publicized in early 2018.

Finally, I borrowed much of my headline from the NIST’s headline for its news release, specifically, “Thanks for the memory,” which is a rather old song,

Bob Hope and Shirley Ross in “The Big Broadcast of 1938.”

Watching rust turn into iron

a) Colorized SEM images of iron oxide nanoblades used in the experiment. b) Colorized cross-section of SEM image of the nanoblades. c) Colorized SEM image of nanoblades after 1 hour of reduction reaction at 500 °C in molecular hydrogen, showing the sawtooth shape along the edges (square). d) Colorized SEM image showing the formation of holes after 2 hours of reduction. The scale bar is 1 micrometer. Credit: W. Zhu et al./ACS Nano and K. Irvine/NIST

Here’s more about being able to watch iron transition from one state to the next, according to an April 5, 2017 news item on phys.org,

Using a state-of-the-art microscopy technique, experimenters at the National Institute of Standards and Technology (NIST) and their colleagues have witnessed a slow-motion, atomic-scale transformation of rust—iron oxide—back to pure iron metal, in all of its chemical steps.

An April 4, 2017 NIST news release describes the role iron plays in modern lifestyles and the purpose of this research,

Among the most abundant minerals on Earth, iron oxides play a leading role in magnetic data storage, cosmetics, the pigmentation of paints and drug delivery. These materials also serve as catalysts for several types of chemical reactions, including the production of ammonia for fertilizer.

To fine-tune the properties of these minerals for each application, scientists work with nanometer-scale particles of the oxides. But to do so, researchers need a detailed, atomic-level understanding of reduction, a key chemical reaction that iron oxides undergo. That knowledge, however, is often lacking because reduction—a process that is effectively the opposite of rusting—proceeds too rapidly for many types of probes to explore at such a fine level.

In a new effort to study the microscopic details of metal oxide reduction, researchers used a specially adapted transmission electron microscope (TEM) at NIST’s NanoLab facility to document the step-by-step transformation of nanocrystals of the iron oxide hematite (Fe2O3) to the iron oxide magnetite (Fe3O4), and finally to iron metal.

“Even though people have studied iron oxide for many years, there have been no dynamic studies at the atomic scale,” said Wenhui Zhu of the State University of New York at Binghamton, who worked on her doctorate in the NanoLab in 2015 and 2016. “We are seeing what’s actually happening during the entire reduction process instead of studying just the initial steps.”

That’s critical, added NIST’s Renu Sharma, “if you want to control the composition or properties of iron oxides and understand the relationships between them.”

By lowering the temperature of the reaction and decreasing the pressure of the hydrogen gas that acted as the reducing agent, the scientists slowed down the reduction process so that it could be captured with an environmental TEM—a specially configured TEM that can study both solids and gas. The instrument enables researchers to perform atomic-resolution imaging of a sample under real-life conditions—in this case the gaseous environment necessary for iron oxides to undergo reduction–rather than under the vacuum needed in ordinary TEMs.

“This is the most powerful tool I’ve used in my research and one of the very few in the United States,” said Zhu. She, Sharma and their colleagues describe their findings in a recent issue of ACS Nano.

The team examined the reduction process in a bicrystal of iron oxide, consisting of two identical iron oxide crystals rotated at 21.8 degrees with respect to each other. The bicrystal structure also served to slow down the reduction process, making it easier to follow with the environmental TEM.

In studying the reduction reaction, the researchers identified a previously unknown intermediate state in the transformation from hematite to magnetite. In the middle stage, the iron oxide retained its original chemical structure, Fe2O3, but changed the crystallographic arrangement of its atoms from rhombohedral (a diagonally stretched cube) to cubic.

This intermediate state featured a defect in which oxygen atoms fail to populate some of the sites in the crystal that they normally would. This so-called oxygen vacancy defect is not uncommon and is known to strongly influence the electrical and catalytic properties of oxides. But the researchers were surprised to find that the defects occurred in an ordered pattern, which had never been found before in the reduction of Fe2O3 to Fe3O4, Sharma said.

The significance of the intermediate state remains under study, but it may be important for controlling the reduction rate and other properties of the reduction process, she adds. “The more we understand, the better we can manipulate the microstructure of these oxides,” said Zhu. By manipulating the microstructure, researchers may be able to enhance the catalytic activity of iron oxides.

Even though a link has already been provided for the paper, I will give it again along with a citation,

In Situ Atomic-Scale Probing of the Reduction Dynamics of Two-Dimensional Fe2O3 Nanostructures by Wenhui Zhu, Jonathan P. Winterstein, Wei-Chang David Yang, Lu Yuan, Renu Sharma, and Guangwen Zhou. ACS Nano, 2017, 11 (1), pp 656–664 DOI: 10.1021/acsnano.6b06950 Publication Date (Web): December 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Transparent silver

This March 21, 2017 news item on Nanowerk is the first I’ve heard of transparent silver; it’s usually transparent aluminum (Note: A link has been removed),

The thinnest, smoothest layer of silver that can survive air exposure has been laid down at the University of Michigan, and it could change the way touchscreens and flat or flexible displays are made (Advanced Materials, “High-performance Doped Silver Films: Overcoming Fundamental Material Limits for Nanophotonic Applications”).

It could also help improve computing power, affecting both the transfer of information within a silicon chip and the patterning of the chip itself through metamaterial superlenses.

A March 21, 2017 University of Michigan news release, which originated the news item, provides details about the research and features a mention of aluminum,

By combining the silver with a little bit of aluminum, the U-M researchers found that it was possible to produce exceptionally thin, smooth layers of silver that are resistant to tarnishing. They applied an anti-reflective coating to make one thin metal layer up to 92.4 percent transparent.

The team showed that the silver coating could guide light about 10 times as far as other metal waveguides—a property that could make it useful for faster computing. And they layered the silver films into a metamaterial hyperlens that could be used to create dense patterns with feature sizes a fraction of what is possible with ordinary ultraviolet methods, on silicon chips, for instance.

Screens of all stripes need transparent electrodes to control which pixels are lit up, but touchscreens are particularly dependent on them. A modern touch screen is made of a transparent conductive layer covered with a nonconductive layer. It senses electrical changes where a conductive object—such as a finger—is pressed against the screen.

“The transparent conductor market has been dominated to this day by one single material,” said L. Jay Guo, professor of electrical engineering and computer science.

This material, indium tin oxide, is projected to become expensive as demand for touch screens continues to grow; there are relatively few known sources of indium, Guo said.

“Before, it was very cheap. Now, the price is rising sharply,” he said.

The ultrathin film could make silver a worthy successor.

Usually, it’s impossible to make a continuous layer of silver less than 15 nanometers thick, or roughly 100 silver atoms. Silver has a tendency to cluster together in small islands rather than extend into an even coating, Guo said.

By adding about 6 percent aluminum, the researchers coaxed the metal into a film of less than half that thickness—seven nanometers. What’s more, when they exposed it to air, it didn’t immediately tarnish as pure silver films do. After several months, the film maintained its conductive properties and transparency. And it was firmly stuck on, whereas pure silver comes off glass with Scotch tape.

In addition to their potential to serve as transparent conductors for touch screens, the thin silver films offer two more tricks, both having to do with silver’s unparalleled ability to transport visible and infrared light waves along its surface. The light waves shrink and travel as so-called surface plasmon polaritons, showing up as oscillations in the concentration of electrons on the silver’s surface.

Those oscillations encode the frequency of the light, preserving it so that it can emerge on the other side. While optical fibers can’t scale down to the size of copper wires on today’s computer chips, plasmonic waveguides could allow information to travel in optical rather than electronic form for faster data transfer. As a waveguide, the smooth silver film could transport the surface plasmons over a centimeter—enough to get by inside a computer chip.

Here’s a link to and a citation for the paper,

High-Performance Doped Silver Films: Overcoming Fundamental Material Limits for Nanophotonic Applications by Cheng Zhang, Nathaniel Kinsey, Long Chen, Chengang Ji, Mingjie Xu, Marcello Ferrera, Xiaoqing Pan, Vladimir M. Shalaev, Alexandra Boltasseva, and Jay Guo. Advanced Materials DOI: 10.1002/adma.201605177 Version of Record online: 20 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Formation of a time (temporal) crystal

It’s a crystal arranged in time, according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert; Note: Links have been removed),

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.
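
A toy way to see that factor of two, leaving aside the many-body physics that makes the real effect robust: if every drive pulse flips every spin, the spin pattern only returns to itself after two pulses, so the response repeats at half the drive rate. A minimal counting sketch, with invented numbers:

```python
# Minimal counting sketch of the period-doubling signature, not a simulation
# of the trapped-ion experiment: each drive pulse flips every spin, so the
# spin pattern only recurs every second pulse.
drive_period = 1.0                      # arbitrary time between laser pulses
spins = [+1] * 10                       # ten ions, all "up" to start

recurrence_times = []
for pulse in range(1, 9):
    spins = [-s for s in spins]         # one drive pulse flips every spin
    if all(s == +1 for s in spins):     # pattern back to its starting value
        recurrence_times.append(pulse * drive_period)

gaps = [b - a for a, b in zip(recurrence_times, recurrence_times[1:])]
print(f"drive repeats every {drive_period} time unit")
print(f"spin pattern repeats every {gaps[0]} time units")
```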

The team also includes researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team, in that case, from a diamond.

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.