Tag Archives: Sandia National Laboratories

Preventing corrosion in oil pipelines at the nanoscale

A June 7, 2019 news item on Azonano announces research into the process of oil pipeline corrosion at the nanoscale (Note: A link has been removed),

Steel pipes tend to rust and sooner or later fail. To anticipate disasters, oil companies and others have developed computer models to foretell when replacement is necessary. However, if the models themselves are incorrect, they can be amended only through experience, an expensive problem if detection happens too late.

Now, scientists at Sandia National Laboratories, the Department of Energy’s Center for Integrated Nanotechnologies and the Aramco Research Center in Boston have discovered that a specific form of nanoscale corrosion is responsible for suddenly diminishing the working life of steel pipes, according to a paper recently published in the Nature partner journal npj Materials Degradation.

A June 6, 2019 Sandia National Laboratories news release (also on EurekAlert), which originated the news item, provides more technical detail,

Using transmission electron microscopes, which shoot electrons through targets to take pictures, the researchers were able to pin the root of the problem on a triple junction formed by a grain of cementite — a compound of carbon and iron — and two grains of ferrite, a type of iron. This junction forms frequently during most methods of fashioning steel pipe.

Iron atoms slip-sliding away

The researchers found that disorder in the atomic structure of those triple junctions made it easier for the corrosive solution to remove iron atoms along that interface.
In the experiment, the corrosive process stopped when the triple junction had been consumed by corrosion, but the crevice left behind allowed the corrosive solution to attack the interior of the steel.

“We thought of a possible solution for forming new pipe, based on changing the microstructure of the steel surface during forging, but it still needs to be tested and have a patent filed if it works,” said Sandia’s principal investigator Katherine Jungjohann, a paper author and lead microscopist. “But now we think we know where the major problem is.”

Aramco senior research scientist Steven Hayden added, “This was the world’s first real-time observation of nanoscale corrosion in a real-world material — carbon steel — which is the most prevalent type of steel used in infrastructure worldwide. Through it, we identified the types of interfaces and mechanisms that play a role in the initiation and progression of localized steel corrosion. The work is already being translated into models used to prevent corrosion-related catastrophes like infrastructure collapse and pipeline breaks.”

To mimic the chemical exposure of pipe in the field, where the expensive, delicate microscopes could not be moved, very thin pipe samples were exposed at Sandia to a variety of chemicals known to pass through oil pipelines.

Sandia researcher and paper author Khalid Hattar put a dry sample in a vacuum and used a transmission electron microscope to create maps of the steel grain types and their orientation, much as a pilot in a plane might use a camera to create area maps of farmland and roads, except that Hattar’s maps had a resolution of approximately 6 nanometers. (A nanometer is one-billionth of a meter.)

“By comparing these maps before and after the liquid corrosion experiments, the first phase to fall out of the samples could be directly identified, essentially pinpointing the weakest link in the internal microstructure,” Hattar said.

Sandia researcher and paper author Paul Kotula said, “The sample we analyzed was considered a low-carbon steel, but it has relatively high-carbon inclusions of cementite which are the sites of localized corrosion attacks.

“Our transmission electron microscopes were a key piece of this work, allowing us to image the sample, observe the corrosion process, and do microanalysis before and after the corrosion occurred to identify the part played by the ferrite and cementite grains and the corrosion product.”

When Hayden first started working in corrosion research, he said, “I was daunted at how complex and poorly understood corrosion is. This is largely because realistic experiments would involve observing complex materials like steel in liquid environments and with nanoscale resolution, and the technology to accomplish such a feat had only recently been developed and had yet to be applied to corrosion. Now we are optimistic that further work at Sandia and the Center for Integrated Nanotechnologies will allow us to rethink manufacturing processes to minimize the expression of the susceptible nanostructures that render the steel vulnerable to accelerated decay mechanisms.”

Invisible path of localized corrosion

Localized corrosion is different from uniform corrosion. The latter occurs in bulk form and is highly predictable. The former is invisible, creating a pathway observable only at its endpoint and increasing bulk corrosion rates by making it easier for corrosion to spread.

“A better understanding of the mechanisms by which corrosion initiates and progresses at these types of interfaces in steel will be key to mitigating corrosion-related losses,” according to the paper.

Here’s a link to and a citation for the paper,

Localized corrosion of low-carbon steel at the nanoscale by Steven C. Hayden, Claire Chisholm, Rachael O. Grudt, Jeffery A. Aguiar, William M. Mook, Paul G. Kotula, Tatiana S. Pilyugina, Daniel C. Bufford, Khalid Hattar, Timothy J. Kucharski, Ihsan M. Taie, Michele L. Ostraat & Katherine L. Jungjohann. npj Materials Degradation volume 3, Article number: 17 (2019) DOI: https://doi.org/10.1038/s41529-019-0078-1 Published 12 April 2019

This paper is open access.

Bad battery, good synapse from Stanford University

A May 4, 2019 news item on ScienceDaily announces the latest advance made by Stanford University and Sandia National Laboratories in the field of neuromorphic (brainlike) computing,

The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

In a paper published online by the journal Science on April 25 [2019], the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

An April 25, 2019 Stanford University news release (also on EurekAlert but published May 3, 2019) by Taylor Kubota, which originated the news item, expands on the theme,

A bad battery, a good synapse

The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, unlike in a more traditional computer system, where data is processed first and only later moved to storage.

Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.
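
(A quick aside from me on why array-level programming matters so much for time: if every synapse in an N-by-N array has to be written one at a time, the number of write operations grows as N squared, while writing a whole row of devices at once grows only as N. Here is a minimal Python sketch of that counting argument; it is my own illustration of the scaling, not the team’s actual addressing scheme.)

```python
# Toy counting argument (my illustration, not the researchers' addressing
# scheme): how many write operations does it take to program an N x N
# synapse array one device at a time versus one whole row at a time?

def one_by_one_writes(n):
    # a separate write pulse for every device in the array
    return n * n

def row_parallel_writes(n):
    # one parallel operation per row; every device in that row is
    # programmed during the same step
    return n

# 3 x 3 is the array demonstrated in the paper; 1024 x 1024 is the size
# the team later simulated.
for n in (3, 1024):
    print(f"{n} x {n}: {one_by_one_writes(n)} one-by-one writes vs "
          f"{row_parallel_writes(n)} row-parallel writes")
```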

In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to their speed – without seeing any degradation in their behavior.

“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”

Here’s a link to and a citation for the paper,

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing by Elliot J. Fuller, Scott T. Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D. James, Matthew J. Marinella, J. Joshua Yang, Alberto Salleo, A. Alec Talin. Science 25 Apr 2019: eaaw5581 DOI: 10.1126/science.aaw5581

This paper is behind a paywall.

For anyone interested in more about brainlike/brain-like/neuromorphic computing/neuromorphic engineering/memristors, use any or all of those terms in this blog’s search engine.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
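
(To make the ‘repeatedly charge and discharge until the device sits at the state you want’ idea concrete, here is a small Python sketch built on a made-up device model. It is a generic write-and-check loop that stops once the state is within the 1 percent tolerance mentioned above; the real device is apparently predictable enough that the researchers can calculate the required voltage directly, so treat this purely as an illustration.)

```python
# Hypothetical device model, for illustration only: each programming pulse
# nudges the stored state a fraction of the way toward the target.

def apply_pulse(state, target, step=0.3):
    """One charge/discharge pulse moving the device toward the target."""
    return state + step * (target - state)

def program(state, target, tolerance=0.01, max_pulses=100):
    """Pulse repeatedly until the state is within 1% of the target."""
    for pulse in range(1, max_pulses + 1):
        state = apply_pulse(state, target)
        if abs(state - target) <= tolerance * abs(target):
            return state, pulse
    return state, max_pulses

final_state, pulses = program(state=0.0, target=1.0)
print(f"reached {final_state:.4f} after {pulses} pulses")
```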

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy of 93 to 97 percent.
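
(Sandia’s simulation was built from 15,000 real device measurements, which I obviously don’t have, but here is a toy Python version of the general idea: train an ordinary classifier on scikit-learn’s built-in 8-by-8 handwritten digits, then snap every weight to one of a limited set of levels, standing in for an array whose synapses can only hold a finite number of conductance states, and check that the accuracy stays in the mid-90s. All of the modelling choices below are mine, not the paper’s.)

```python
# Toy stand-in for the array simulation: train a digit classifier, then
# quantize its weights to a finite set of levels (a crude proxy for
# discrete device conductance states) and see how the accuracy holds up.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                 # 8 x 8 images of the digits 0-9
X = digits.data / 16.0                 # scale pixel values to [0, 1]
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("full-precision accuracy:", clf.score(X_test, y_test))

def quantize(w, n_levels=500):
    """Snap each weight to the nearest of n_levels evenly spaced values."""
    levels = np.linspace(w.min(), w.max(), n_levels)
    return levels[np.abs(w[..., None] - levels).argmin(axis=-1)]

w_q, b_q = quantize(clf.coef_), quantize(clf.intercept_)
pred = np.argmax(X_test @ w_q.T + b_q, axis=1)
print("quantized accuracy:", (pred == y_test).mean())
```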

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
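
(The 10,000-times figure is easy to check against the numbers in the paper’s abstract, quoted further down: the device switches at under roughly 10 picojoules, while a biological synaptic event can cost as little as about 1 femtojoule, and 10 pJ ÷ 1 fJ = (10 × 10⁻¹² J) ÷ (1 × 10⁻¹⁵ J) = 10,000. That is my back-of-envelope arithmetic, not a calculation from the release.)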

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates, enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Solar cells and ‘tinkertoys’

A Nov. 3, 2014 news item on Nanowerk features a project researchers hope will improve photovoltaic efficiency and make solar cells competitive with other sources of energy,

 Researchers at Sandia National Laboratories have received a $1.2 million award from the U.S. Department of Energy’s SunShot Initiative to develop a technique that they believe will significantly improve the efficiencies of photovoltaic materials and help make solar electricity cost-competitive with other sources of energy.

The work builds on Sandia’s recent successes with metal-organic framework (MOF) materials by combining them with dye-sensitized solar cells (DSSC).

“A lot of people are working with DSSCs, but we think our expertise with MOFs gives us a tool that others don’t have,” said Sandia’s Erik Spoerke, a materials scientist with a long history of solar cell exploration at the labs.

A Nov. 3, 2014 Sandia National Laboratories news release, which originated the news item, describes the project and the technology in more detail,

Sandia’s project is funded through SunShot’s Next Generation Photovoltaic Technologies III program, which sponsors projects that apply promising basic materials science, proven at the materials-properties level, to demonstrate photovoltaic conversion improvements that meet or exceed SunShot goals.

The SunShot Initiative is a collaborative national effort that aggressively drives innovation with the aim of making solar energy fully cost-competitive with traditional energy sources before the end of the decade. Through SunShot, the Energy Department supports efforts by private companies, universities and national laboratories to drive down the cost of solar electricity to 6 cents per kilowatt-hour.

DSSCs provide basis for future advancements in solar electricity production

Dye-sensitized solar cells, invented in the 1980s, use dyes designed to efficiently absorb light in the solar spectrum. The dye is mated with a semiconductor, typically titanium dioxide, that facilitates conversion of the energy in the optically excited dye into usable electrical current.

DSSCs are considered a significant advancement in photovoltaic technology since they separate the various processes of generating current from a solar cell. Michael Grätzel, a professor at the École Polytechnique Fédérale de Lausanne in Switzerland, was awarded the 2010 Millennium Technology Prize for inventing the first high-efficiency DSSC.

“If you don’t have everything in the DSSC dependent on everything else, it’s a lot easier to optimize your photovoltaic device in the most flexible and effective way,” explained Sandia senior scientist Mark Allendorf. DSSCs, for example, can capture more of the sun’s energy than silicon-based solar cells by using varied or multiple dyes and also can use different molecular systems, Allendorf said.

“It becomes almost modular in terms of the cell’s components, all of which contribute to making electricity out of sunlight more efficiently,” said Spoerke.

MOFs’ structure, versatility and porosity help overcome DSSC limitations

Though a source of optimism for the solar research community, DSSCs possess certain challenges that the Sandia research team thinks can be overcome by combining them with MOFs.

Allendorf said researchers hope to use the ordered structure and versatile chemistry of MOFs to help the dyes in DSSCs absorb more solar light, which he says is a fundamental limit on their efficiency.

“Our hypothesis is that we can put a thin layer of MOF on top of the titanium dioxide, thus enabling us to order the dye in exactly the way we want it,” Allendorf explained. That, he said, should avoid the efficiency-decreasing problem of dye aggregation, since the dye would then be locked into the MOF’s crystalline structure.

MOFs are highly ordered materials that also offer high levels of porosity, said Allendorf, a MOF expert and 29-year veteran of Sandia. He calls the materials “Tinkertoys for chemists” because of the ease with which new structures can be envisioned and assembled. [emphasis mine]

Allendorf said the unique porosity of MOFs will allow researchers to add a second dye, placed into the pores of the MOF, that will cover additional parts of the solar spectrum that weren’t covered with the initial dye. Finally, he and Spoerke are convinced that MOFs can help improve the overall electron charge and flow of the solar cell, which currently faces instability issues.

“Essentially, we believe MOFs can help to more effectively organize the electronic and nano-structure of the molecules in the solar cell,” said Spoerke. “This can go a long way toward improving the efficiency and stability of these assembled devices.”

In addition to the Sandia team, the project includes researchers at the University of Colorado-Boulder, particularly Steve George, an expert in a thin film technology known as atomic layer deposition.

The technique, said Spoerke, is important in that it offers a pathway for highly controlled materials chemistry with potentially low-cost manufacturing of the DSSC/MOF process.

“With the combination of MOFs, dye-sensitized solar cells and atomic layer deposition, we think we can figure out how to control all of the key cell interfaces and material elements in a way that’s never been done before,” said Spoerke. “That’s what makes this project exciting.”

Here’s a picture showing an early Tinkertoy set,

Original Tinkertoy, Giant Engineer #155. Questor Education Products Co., c.1950 [downloaded from http://en.wikipedia.org/wiki/Tinkertoy#mediaviewer/File:Tinkertoy_300126232168.JPG]

The Tinkertoy entry on Wikipedia has this,

The Tinkertoy Construction Set is a toy construction set for children. It was created in 1914—six years after Frank Hornby’s Meccano sets—by Charles H. Pajeau, Robert Pettit and Gordon Tinker in Evanston, Illinois. Pajeau, a stonemason, designed the toy after seeing children play with sticks and empty spools of thread. He and Pettit set out to market a toy that would allow and inspire children to use their imaginations. At first, this did not go well, but after a year or two over a million were sold.

Shrinky Dinks, Tinkertoys and Lego have all been mentioned here in conjunction with lab work. I’m always delighted to see scientists working with or using children’s toys as inspiration of one type or another.

Sandia National Laboratories looking for commercial partners to bring titanium dioxide nanoparticles (5 nm in diameter) to market

Sandia National Laboratories (Sandia Labs) doesn’t ask directly, but I think the call for partners is more than heavily implied. Let’s start with a June 17, 2014 news item on ScienceDaily,

Sandia National Laboratories has come up with an inexpensive way to synthesize titanium-dioxide nanoparticles and is seeking partners who can demonstrate the process at industrial scale for everything from solar cells to light-emitting diodes (LEDs).

Titanium-dioxide (TiO2) nanoparticles show great promise as fillers to tune the refractive index of anti-reflective coatings on signs and optical encapsulants for LEDs, solar cells and other optical devices. Optical encapsulants are coverings or coatings, usually made of silicone, that protect a device.

Industry has largely shunned TiO2 nanoparticles because they’ve been difficult and expensive to make, and current methods produce particles that are too large.

Sandia became interested in TiO2 for optical encapsulants because of its work on LED materials for solid-state lighting.

Current production methods for TiO2 often require high-temperature processing or costly surfactants — molecules that bind to something to make it soluble in another material, like dish soap does with fat.

Those methods produce less-than-ideal nanoparticles that are very expensive, can vary widely in size and show significant particle clumping, called agglomeration.

Sandia’s technique, on the other hand, uses readily available, low-cost materials and results in nanoparticles that are small, roughly uniform in size and don’t clump.

“We wanted something that was low cost and scalable, and that made particles that were very small,” said researcher Todd Monson, who along with principal investigator Dale Huber patented the process in mid-2011 as “High-yield synthesis of brookite TiO2 nanoparticles.” [emphases mine]

A June 17, 2014 Sandia Labs news release, which originated the news item, goes on to describe the technology (Note: Links have been removed),

Their (Monson and Huber) method produces nanoparticles roughly 5 nanometers in diameter, approximately 100 times smaller than the wavelength of visible light, so there’s little light scattering, Monson said.

“That’s the advantage of nanoparticles — not just nanoparticles, but small nanoparticles,” he said.

Scattering decreases the amount of light transmission. Less scattering also can help extract more light, in the case of an LED, or capture more light, in the case of a solar cell.
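
(A rough sense of why ‘small’ matters so much here, from me rather than the news release: 5 nm is about one-hundredth of a visible wavelength, since 500 nm ÷ 5 nm = 100, and in the Rayleigh regime, where particles are much smaller than the wavelength, the amount of light a particle scatters grows roughly as the sixth power of its diameter. Shrinking a particle from 50 nm to 5 nm therefore cuts its scattering by a factor of around (50 ÷ 5)⁶, or about a million, which is why a composite filled with these tiny TiO2 particles can stay optically clear.)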

TiO2 can increase the refractive index of materials, such as silicone in lenses or optical encapsulants. Refractive index is a measure of a material’s ability to bend light. Eyeglass lenses, for example, have a high refractive index.

Practical nanoparticles must be able to handle different surfactants so they’re soluble in a wide range of solvents. Different applications require different solvents for processing.

“If someone wants to use TiO2 nanoparticles in a range of different polymers and applications, it’s convenient to have your particles be suspension-stable in a wide range of solvents as well,” Monson said. “Some biological applications may require stability in aqueous-based solvents, so it could be very useful to have surfactants available that can make the particles stable in water.”

The researchers came up with their synthesis technique by pooling their backgrounds — Huber’s expertise in nanoparticle synthesis and polymer chemistry and Monson’s knowledge of materials physics. The work was done under a Laboratory Directed Research and Development project Huber began in 2005.

“The original project goals were to investigate the basic science of nanoparticle dispersions, but when this synthesis was developed near the end of the project, the commercial applications were obvious,” Huber said. The researchers subsequently refined the process to make particles easier to manufacture.

Existing synthesis methods for TiO2 particles were too costly and too difficult to scale up for production. In addition, chemical suppliers ship titanium-dioxide nanoparticles dried and without surfactants, so particles clump together and are impossible to break up. “Then you no longer have the properties you want,” Monson said.

The researchers tried various types of alcohol as an inexpensive solvent to see if they could get a common titanium source, titanium isopropoxide, to react with water and alcohol.

The biggest challenge, Monson said, was figuring out how to control the reaction, since adding water to titanium isopropoxide most often results in a fast reaction that produces large chunks of TiO2, rather than nanoparticles. “So the trick was to control the reaction by controlling the addition of water to that reaction,” he said.

Some textbooks dismissed the titanium isopropoxide-water-alcohol method as a way of making TiO2 nanoparticles. Huber and Monson, however, persisted until they discovered how to add water very slowly by putting it into a dilute solution of alcohol. “As we tweaked the synthesis conditions, we were able to synthesize nanoparticles,” Monson said.

Whoever wrote the news release now makes the plea which isn’t quite a plea (Note: A link has been removed),

The next step is to demonstrate synthesis at an industrial scale, which will require a commercial partner. Monson, who presented the work at Sandia’s fall Science and Technology Showcase, said Sandia has received inquiries from companies interested in commercializing the technology.

“Here at Sandia we’re not set up to produce the particles on a commercial scale,” he said. “We want them to pick it up and run with it and start producing these on a wide enough scale to sell to the end user.”

Sandia would synthesize a small number of particles, then work with a partner company to form composites and evaluate them to see if they can be used as better encapsulants for LEDs, flexible high-index refraction composites for lenses or solar concentrators. “I think it can meet quite a few needs,” Monson said.

I wish them good luck.

McGill University and Sandia Labs validate Luttinger liquid model predictions

A collaboration between McGill University (Québec, Canada) and Sandia National Laboratories (New Mexico, US) has resulted in the answer to a question that was posed over 50 years ago in the field of quantum physics according to a Jan. 23, 2014 McGill University news release (also on EurekAlert),

How would electrons behave if confined to a wire so slender they could pass through it only in single-file?

The question has intrigued scientists for more than half a century. In 1950, Japanese Nobel Prize winner Sin-Itiro Tomonaga, followed by American physicist Joaquin Mazdak Luttinger in 1963, came up with a mathematical model showing that the effects of one particle on all others in a one-dimensional line would be much greater than in two- or three-dimensional spaces. Among quantum physicists, this model came to be known as the “Luttinger liquid” state.

The news release provides more information about the problem and about how the scientists addressed it,

What does one-dimensional quantum physics involve?  Gervais [Professor Guillaume Gervais of McGill’s Department of Physics] explains it this way: “Imagine that you are driving on a highway and the traffic is not too dense. If a car stops in front of you, you can get around it by passing to the left or right. That’s two-dimensional physics. But if you enter a tunnel with a single lane and a car stops, all the other cars behind it must slam on the brakes. That’s the essence of the Luttinger liquid effect. The way electrons behave in the Luttinger state is entirely different because they all become coupled to one another.”

To scientists, “what is so fascinating and elegant about quantum physics in one dimension is that the solutions are mathematically exact,” Gervais adds. “In most other cases, the solutions are only approximate.”

Making a device with the correct parameters to conduct the experiment was no simple task, however, despite the team’s 2011 discovery of a way to do so. It took years of trial, and more than 250 faulty devices – each of which required 29 processing steps – before Laroche’s [McGill PhD student Dominique Laroche] painstaking efforts succeeded in producing functional devices yielding reliable data. “So many things could go wrong during the fabrication process that troubleshooting the failed devices felt like educated guesswork at times,” explains Laroche. “Adding in the inherent failure rate compounded at each processing step made the fabrication of these devices extremely challenging.”

In particular, the experiment measures the effect that a very small electrical current in one of the wires has on a nearby wire.  This can be viewed as the “friction” between the two circuits, and the experiment shows that this friction increases as the circuits are cooled to extremely low temperatures. This effect is a strong prediction of Luttinger liquid theory.
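
(My gloss on what ‘friction increases as the circuits are cooled’ means in slightly more formal terms: the drag signal is usually reported as a drag resistance, the voltage induced in the unpowered wire divided by the current driven through its neighbour, and Luttinger liquid theory predicts that this drag resistance follows a power law in temperature, roughly R_D ∝ T^x, with the exponent x set by how strongly the electrons interact. In ordinary two- or three-dimensional conductors the drag fades away as the temperature drops, so a drag that grows on cooling is a distinctive fingerprint of the one-dimensional, strongly coupled state.)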

“It took a very long time to make these devices,” said Lilly [Sandia researcher Mike Lilly]. “It’s not impossible to do in other labs, but Sandia has crystal-growing capabilities, a microfabrication facility, and support for fundamental research from DOE’s office of Basic Energy Sciences (BES), and we’re very interested in understanding the fundamental ideas that drive the behavior of very small systems.”

The findings could lead to practical applications in electronics and other fields. While it’s difficult at this stage to predict what those might be, “the same was true in the case of the laser when it was invented,” Gervais notes.  “Nanotechnologies are already helping us in medicine, electronics and engineering – and this work shows that they can help us get to the bottom of a long-standing question in quantum physics.”

Here’s a link to and a citation for the paper,

1D-1D Coulomb Drag Signature of a Luttinger Liquid by D. Laroche, G. Gervais, M. P. Lilly, and J. L. Reno. Science DOI: 10.1126/science.1244152 Published Online January 23 2014

This paper is behind a paywall.

Cooling it—an application using carbon nanotubes and a theory that hotter leads to cooler

The only thing these two news items have in common is their focus on cooling down electronic devices. Well, there’s also the fact that the work is being done at the nanoscale.

First, there’s a Jan. 23, 2014 news item on Azonano about a technique using carbon nanotubes to cool down microprocessors,

“Cool it!” That’s a prime directive for microprocessor chips and a promising new solution to meeting this imperative is in the offing. Researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a “process friendly” technique that would enable the cooling of microprocessor chips through carbon nanotubes.

Frank Ogletree, a physicist with Berkeley Lab’s Materials Sciences Division, led a study in which organic molecules were used to form strong covalent bonds between carbon nanotubes and metal surfaces. This improved by six-fold the flow of heat from the metal to the carbon nanotubes, paving the way for faster, more efficient cooling of computer chips. The technique is done through gas vapor or liquid chemistry at low temperatures, making it suitable for the manufacturing of computer chips.

The Jan. 22, 2014 Berkeley Lab news release (also on EurekAlert), which originated the news item, describes the nature of the problem in more detail,

Overheating is the bane of microprocessors. As transistors heat up, their performance can deteriorate to the point where they no longer function as transistors. With microprocessor chips becoming more densely packed and processing speeds continuing to increase, the overheating problem looms ever larger. The first challenge is to conduct heat out of the chip and onto the circuit board where fans and other techniques can be used for cooling. Carbon nanotubes have demonstrated exceptionally high thermal conductivity but their use for cooling microprocessor chips and other devices has been hampered by high thermal interface resistances in nanostructured systems.

“The thermal conductivity of carbon nanotubes exceeds that of diamond or any other natural material but because carbon nanotubes are so chemically stable, their chemical interactions with most other materials are relatively weak, which makes for high thermal interface resistance,” Ogletree says. “Intel came to the Molecular Foundry wanting to improve the performance of carbon nanotubes in devices. Working with Nachiket Raravikar and Ravi Prasher, who were both Intel engineers when the project was initiated, we were able to increase and strengthen the contact between carbon nanotubes and the surfaces of other materials. This reduces thermal resistance and substantially improves heat transport efficiency.”

The news release then describes the proposed solution,

Sumanjeet Kaur, lead author of the Nature Communications paper and an expert on carbon nanotubes, with assistance from co-author and Molecular Foundry chemist Brett Helms, used reactive molecules to bridge the carbon nanotube/metal interface – aminopropyl-trialkoxy-silane (APS) for oxide-forming metals, and cysteamine for noble metals. First, vertically aligned carbon nanotube arrays were grown on silicon wafers, and thin films of aluminum or gold were evaporated on glass microscope cover slips. The metal films were then “functionalized” and allowed to bond with the carbon nanotube arrays. Enhanced heat flow was confirmed using a characterization technique developed by Ogletree that allows for interface-specific measurements of heat transport.

“You can think of interface resistance in steady-state heat flow as being an extra amount of distance the heat has to flow through the material,” Kaur says. “With carbon nanotubes, thermal interface resistance adds something like 40 microns of distance on each side of the actual carbon nanotube layer. With our technique, we’re able to decrease the interface resistance so that the extra distance is around seven microns at each interface.”
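
(Kaur’s ‘extra distance’ picture also lets you sanity-check the six-fold improvement quoted at the top of this excerpt: if each interface behaves like roughly 40 microns of additional material before treatment and roughly 7 microns after, the interface contribution to the thermal resistance drops by about 40 ÷ 7 ≈ 5.7, or close to six-fold. That is my arithmetic, but it lines up with the reported six-fold increase in heat flow from the metal to the nanotubes.)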

Although the approach used by Ogletree, Kaur and their colleagues substantially strengthened the contact between a metal and individual carbon nanotubes within an array, a majority of the nanotubes within the array may still fail to connect with the metal. The Berkeley team is now developing a way to improve the density of carbon nanotube/metal contacts. Their technique should also be applicable to single and multi-layer graphene devices, which face the same cooling issues.

For anyone who’s never heard of the Molecular Foundry before (from the news release),

The Molecular Foundry is one of five DOE [Department of Energy] Nanoscale Science Research Centers (NSRCs), national user facilities for interdisciplinary research at the nanoscale, supported by the DOE Office of Science. Together the NSRCs comprise a suite of complementary facilities that provide researchers with state-of-the-art capabilities to fabricate, process, characterize, and model nanoscale materials, and constitute the largest infrastructure investment of the National Nanotechnology Initiative. The NSRCs are located at DOE’s Argonne, Brookhaven, Lawrence Berkeley and Oak Ridge national laboratories, and jointly at Sandia and Los Alamos national laboratories.

My second item comes from the University of Buffalo (UB), located in the US. From a Jan. 21, 2014 University of Buffalo news release by Cory Nealon (also on EurekAlert),

Heat in electronic devices is generated by the movement of electrons through transistors, resistors and other elements of an electrical network. Depending on the network, there are a variety of ways, such as cooling fans and heat sinks, to prevent the circuits from overheating.

But as more integrated circuits and transistors are added to devices to boost their computing power, it’s becoming more difficult to keep those elements cool. Most nanoelectronics research centers are working to develop advanced materials that are capable of withstanding the extreme environment inside smartphones, laptops and other devices.

While advanced materials show tremendous potential, the UB research suggests there may still be room within the existing paradigm of electronic devices to continue developing more powerful computers.

To support their findings, the researchers fabricated nanoscale semiconductor devices in a state-of-the-art gallium arsenide crystal provided to UB by Sandia’s Reno [John L. Reno, Center for Integrated Nanotechnologies at Sandia National Laboratories]. The researchers then subjected the chip to a large voltage, squeezing an electrical current through the nanoconductors. This, in turn, increased the amount of heat circulating through the chip’s nanotransistor.

But instead of degrading the device, the nanotransistor spontaneously transformed itself into a quantum state that was protected from the effect of heating and provided a robust channel of electric current. To help explain, Bird [Jonathan Bird, UB professor of electrical engineering] offered an analogy to Niagara Falls.

“The water, or energy, comes from a source; in this case, the Great Lakes. It’s channeled into a narrow point (the Niagara River) and ultimately flows over Niagara Falls. At the bottom of the waterfall, the energy is dissipated. But unlike the waterfall, this dissipated energy recirculates throughout the chip and changes how heat affects, or in this case doesn’t affect, the network’s operation.”

While this behavior may seem unusual, especially conceptualizing it in terms of water flowing over a waterfall, it is the direct result of the quantum mechanical nature of electronics when viewed on the nanoscale. The current is made up of electrons which spontaneously organize to form a narrow conducting filament through the nanoconductor. It is this filament that is so robust against the effects of heating.

“We’re not actually eliminating the heat, but we’ve managed to stop it from affecting the electrical network. In a way, this is an optimization of the current paradigm,” said Han [J. E. Han, UB Dept. of Physics], who developed the theoretical models which explain the findings.

What an interesting and counter-intuitive approach to managing the heat in our devices.

For those who want more, here’s a link to and citation for the carbon nanotube paper,

Enhanced thermal transport at covalently functionalized carbon nanotube array interfaces by Sumanjeet Kaur, Nachiket Raravikar, Brett A. Helms, Ravi Prasher, & D. Frank Ogletree. Nature Communications 5, Article number: 3082 doi:10.1038/ncomms4082 Published 22 January 2014

This paper is behind a paywall.

Now here’s a link to and a citation for the ‘making it hotter to make it cooler’ paper,

Formation of a protected sub-band for conduction in quantum point contacts under extreme biasing by J. Lee, J. E. Han, S. Xiao, J. Song, J. L. Reno, & J. P. Bird. Nature Nanotechnology (2014) doi:10.1038/nnano.2013.297 Published online 19 January 2014

This paper is behind a paywall although there is an option to preview it for free via ReadCube Access.

Scant detail about Sandia Labs’ nanoscientist and federal fraud charges

In US law (which is based on English common law), there is a presumption of innocence and, so far, there is no information about the Jianyu Huang situation other than a listing of the charges against him and a description of his firing from Sandia National Labs in April 2012.

Here’s some information, from the June 6, 2012 article on the Huffington Post,

A former scientist at Sandia National Labs in New Mexico has pleaded not guilty to charges of stealing research to share with China.

Jianyu Huang was arraigned Tuesday on five counts of federal program fraud and one count of false statements. He is accused of embezzling and sharing information from his position with the lab’s Center for Integrated Nanotechnologies since 2009, according to a federal indictment.

While these are serious charges being laid by the government I want to note that governments don’t always get it right. In my May 18, 2012 posting about an upcoming UNESCO meeting in Vancouver, Canada, Memory of the World, I mentioned a rather extraordinary article written by US law professor, Eric Goldman, where he outlines his indictment of the US government case presented against Megaupload and Kim Dotcom. I gather that there are, at the least, irregularities. I should also note that the Canadian government cooperated and participated in this massive ongoing legal action.

Getting back to the Sandia National Labs situation, Lee Rannals at Red Orbit wrote in his (or her?) June 6, 2012 posting,

Sandia National Labs said that he did not have access to classified national security information. The lab said that Huang was fired in April for removing a company-owned laptop from the facility.

Sandia is known for its nuclear research, as well as the disposal of the U.S. nuclear weapons program’s hazardous waste. The company is a subsidiary of Lockheed Martin Corporation.

Huang started working five years ago at a Sandia Labs research center that focuses on nanotechnology.

Alexander Besant’s June 5, 2012 posting on Global Post adds these details,

The Associated Press reported that Huang claimed that nanotechnology belonging to the United States, which funds the Sandia Labs, was his own and that he shared data with state-run schools in China.

He is also being accused of lying about the fact that he brought a lab-owned laptop to China, KRQE reported.

So if I read this correctly, he was fired for bringing the lab-owned laptop to China and now he’s being prosecuted for lying about it (the one count of false statements). Meanwhile, he’s charged with five counts of federal program fraud (sharing research data with colleagues in Chinese state-run schools or saying that it was his own and then sharing the data?).

One note: Huang does have a blog on the iMechanica website. His last post, made on March 25, 2012, discussed tin and tin dioxide nanowires.

Quelle drag! McGill research team develops tiny (150 atoms) electronic circuits

Drag and heat—sounds like a car race, doesn’t it? It’s all about electronics and some nanoscale work by researchers at McGill University (Montréal, Canada). From the Dec. 7, 2011 McGill news release,

A team of scientists, led by Guillaume Gervais from McGill’s Physics Department and Mike Lilly from Sandia National Laboratories, has engineered one of the world’s smallest electronic circuits. It is formed by two wires separated by only about 150 atoms or 15 nanometers (nm).

The paper is available behind Nature’s paywall or you can view the abstract for Positive and negative Coulomb drag in vertically integrated one-dimensional quantum wires. Excerpted from the abstract,

Electron interactions in and between wires become increasingly complex and important as circuits are scaled to nanometre sizes, or use reduced-dimensional conductors such as carbon nanotubes, nanowires and gated high-mobility two-dimensional electron systems. This is because the screening of the long-range Coulomb potential of individual carriers is weakened in these systems, which can lead to phenomena such as Coulomb drag, where a current in one wire induces a voltage in a second wire through Coulomb interactions alone.

The news release addresses the Coulomb drag in more accessible (for some of us) language,

This is the first time that anyone has studied how the wires in an electronic circuit interact with one another when packed so tightly together. Surprisingly, the authors found that the effect of one wire on the other can be either positive or negative. This means that a current in one wire can produce a current in the other one that is either in the same or the opposite direction. This discovery, based on the principles of quantum physics, suggests a need to revise our understanding of how even the simplest electronic circuits behave at the nanoscale.
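
(For anyone who wants the measurement spelled out, here is my plain-language summary of the abstract quoted above: drive a current I through one wire, measure the voltage V that develops along the nearby wire, which carries no net current of its own, and form the drag resistance R_D = V ÷ I. In conventional Coulomb drag the dragged electrons are simply pulled along with the drive current, giving one sign of R_D; the surprise here is that for these tightly spaced quantum wires R_D can come out with either sign.)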

In addition to the effect on the speed and efficiency of future electronic circuits, this discovery could also help to solve one of the major challenges facing future computer design. This is managing the ever-increasing amount of heat produced by integrated circuits.

According to the news release, this discovery could have an impact on a wide range of electronics including smartphones, desktop computers, televisions, and GPS systems. Congratulations to the McGill team: D. Laroche, G. Gervais, M. P. Lilly, and J. L. Reno.