Tag Archives: Sandia National Laboratories

Neuromorphic wires (inspired by nerve cells) amplify their own signals—no amplifiers needed

Katherine Bourzac’s September 16, 2024 article for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine provides an accessible (relatively speaking) description of a possible breakthrough for neuromorphic computing. Note: Links have been removed,

In electrical engineering, “we just take it for granted that the signal decays” as it travels, says Timothy Brown, a postdoc in materials physics at Sandia National Lab who was part of the group of researchers who made the self-amplifying device. Even the best wires and chip interconnects put up resistance to the flow of electrons, degrading signal quality over even relatively small distances. This constrains chip designs—lossy interconnects are broken up into ever smaller lengths, and signals are bolstered by buffers and drivers. A 1-square-centimeter chip has about 10,000 repeaters to drive signals, estimates R. Stanley Williams, a professor of computer engineering at Texas A&M University.

Williams is one of the pioneers of neuromorphic computing, which takes inspiration from the nervous system. Axons, the electrical cables that carry signals from the body of a nerve cell to synapses where they connect with projections from other cells, are made up of electrically resistant materials. Yet they can carry high fidelity signals over long distances. The longest axons in the human body are about 1 meter, running from the base of the spine to the feet. Blue whales are thought to have 30 m long axons stretching to the tips of their tails. If something bites the whale’s tail, it will react rapidly. Even from 30 meters away, “the pulses arrive perfectly,” says Williams. “That’s something that doesn’t exist in electrical engineering.”

That’s because axons are active transmission lines: they provide gain to the signal along their length. Williams says he started pondering how to mimic this in an inorganic system 12 years ago. A grant from the US Department of Energy enabled him to build a team with the necessary resources to make it happen. The team included Williams, Brown, and Suhas Kumar, a materials physicist at Sandia.

Axons are coated with an insulating layer called the myelin sheath. Where there are gaps in the sheath, positively charged sodium and potassium ions can move in and out of the axon, changing the voltage across the cell membrane and pumping in energy in the process. Some of that energy gets taken up by the electrical signal, amplifying it.

Williams and his team wanted to mimic this in a simple structure. They didn’t try to mimic all the physical structures in axons—instead, they sought guidance in a mathematical description of how they amplify signals. Axons operate in a mode called the “edge of chaos,” which combines stable and unstable qualities. This may seem inherently contradictory. Brown likens this kind of system to a saddle that’s curved with two dips. The saddle curves up towards the front and the back, keeping you stable as you rock back and forth. But if you get jostled from side to side, you’re more likely to fall off. When you’re riding in the saddle, you’re operating at the edge of chaos, in a semistable state. In the abstract space of electrical engineering, that jostling is equivalent to wiggles in current and voltage.
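As a toy illustration of that semistable saddle picture, the sketch below integrates a two-variable system with one stable direction (front-back rocking, which decays) and one unstable direction (side-to-side jostling, which grows). The rates and time step are arbitrary illustrative choices, not parameters from the paper.

```python
# Toy sketch of the "edge of chaos" saddle described above: one direction
# is stable (front-back rocking dies out) and one is unstable (side-to-side
# jostling grows). Rates and time step are illustrative, not from the paper.

def simulate(x0, y0, rate=1.0, dt=0.01, steps=500):
    """Euler integration of dx/dt = -rate*x (stable), dy/dt = +rate*y (unstable)."""
    x, y = x0, y0
    for _ in range(steps):
        x += dt * (-rate * x)  # stable axis: perturbations decay
        y += dt * (+rate * y)  # unstable axis: perturbations grow
    return x, y

x_end, y_end = simulate(x0=0.1, y0=0.001)
print(f"front-back perturbation: 0.1 -> {x_end:.5f}")    # shrinks toward 0
print(f"side-to-side perturbation: 0.001 -> {y_end:.3f}")  # grows
```

Sitting on such a saddle is stable against one kind of wiggle and unstable against another, which is the "semistable" character the researchers exploit.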

There’s a long way to go from this first experimental demonstration to a reimagining of computer chip interconnects. The team is providing samples for other researchers [emphasis mine] who want to verify their measurements. And they’re trying other materials to see how well they do—LaCoO3 [lanthanum cobalt oxide] is only the first one they’ve tested.

Williams hopes this research will show electrical engineers new ideas about how to move forward. “The dream is to redesign chips,” he says. Electrical engineers have long known about nonlinear dynamics, but have hardly ever taken advantage of them, Williams says. “This requires thinking about things and doing measurements differently than they have been done for 50 years,” he says.

If you have the time, please read Bourzac’s September 16, 2024 article in its entirety. For those who want the technical nitty gritty, here’s a link to and a citation for the paper,

Axon-like active signal transmission by Timothy D. Brown, Alan Zhang, Frederick U. Nitta, Elliot D. Grant, Jenny L. Chong, Jacklyn Zhu, Sritharini Radhakrishnan, Mahnaz Islam, Elliot J. Fuller, A. Alec Talin, Patrick J. Shamberger, Eric Pop, R. Stanley Williams & Suhas Kumar. Nature volume 633, pages 804–810 (2024) DOI: https://doi.org/10.1038/s41586-024-07921 Published online: 11 September 2024 Issue Date: 26 September 2024

This paper is open access.

How do memristors retain information without a power source? A mystery solved

A September 10, 2024 news item on ScienceDaily provides a technical explanation of how memristors can retain information without a power source,

Phase separation, when molecules part like oil and water, works alongside oxygen diffusion to help memristors — electrical components that store information using electrical resistance — retain information even after the power is shut off, according to a University of Michigan led study recently published in Matter.

A September 11, 2024 University of Michigan press release (also on EurekAlert but published September 10, 2024), which originated the news item, delves further into the research,

Up to this point, researchers have not fully understood how memristors retain information without a power source (a property known as nonvolatile memory), because models and experiments do not match up.

“While experiments have shown devices can retain information for over 10 years, the models used in the community show that information can only be retained for a few hours,” said Jingxian Li, U-M doctoral graduate of materials science and engineering and first author of the study.

To better understand the underlying phenomenon driving nonvolatile memristor memory, the researchers focused on a device known as resistive random access memory, or RRAM, an alternative to the volatile RAM used in classical computing that is particularly promising for energy-efficient artificial intelligence applications.

The specific RRAM studied, a filament-type valence change memory (VCM), sandwiches an insulating tantalum oxide layer between two platinum electrodes. When a certain voltage is applied to the platinum electrodes, a conductive filament forms a tantalum ion bridge passing through the insulator to the electrodes, which allows electricity to flow, putting the cell in a low resistance state representing a “1” in binary code. If a different voltage is applied, the filament is dissolved as returning oxygen atoms react with the tantalum ions, “rusting” the conductive bridge and returning to a high resistance state, representing a binary code of “0”. 
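The filament formation and dissolution described above can be caricatured as a tiny state machine; the threshold voltages and starting state below are illustrative assumptions, not measured device parameters.

```python
# Minimal sketch of filament-type RRAM switching as described above.
# Threshold voltages and resistance states are illustrative assumptions,
# not parameters of the tantalum-oxide device in the paper.

class RRAMCell:
    V_SET = 1.5     # assumed voltage at which the filament forms ("1")
    V_RESET = -1.5  # assumed voltage at which oxygen "rusts" the bridge ("0")

    def __init__(self):
        self.low_resistance = False  # start in the high-resistance state

    def apply_voltage(self, v):
        if v >= self.V_SET:
            self.low_resistance = True    # conductive bridge spans the insulator
        elif v <= self.V_RESET:
            self.low_resistance = False   # returning oxygen dissolves the bridge

    def read_bit(self):
        return 1 if self.low_resistance else 0

cell = RRAMCell()
cell.apply_voltage(2.0)    # set pulse
print(cell.read_bit())     # 1
cell.apply_voltage(-2.0)   # reset pulse
print(cell.read_bit())     # 0
```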

It was once thought that RRAM retains information over time because oxygen is too slow to diffuse back. However, a series of experiments revealed that previous models have neglected the role of phase separation. 

“In these devices, oxygen ions prefer to be away from the filament and will never diffuse back, even after an indefinite period of time. This process is analogous to how a mixture of water and oil will not mix, no matter how much time we wait, because they have lower energy in a de-mixed state,” said Yiyang Li, U-M assistant professor of materials science and engineering and senior author of the study.

To test retention time, the researchers sped up experiments by increasing the temperature. One hour at 250°C is equivalent to about 100 years at 85°C—the typical temperature of a computer chip.
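That time-temperature equivalence follows the standard Arrhenius acceleration model. The sketch below assumes an activation energy of roughly 1.34 eV, back-calculated so that 1 hour at 250 °C maps to about 100 years at 85 °C; the study's actual activation energy may differ.

```python
import math

# Arrhenius acceleration-factor sketch for the accelerated retention test
# described above. The ~1.34 eV activation energy is an assumption chosen
# to reproduce the stated 1 h at 250 °C ≈ 100 yr at 85 °C equivalence.

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev):
    t_use = t_use_c + 273.15      # use temperature in kelvin
    t_stress = t_stress_c + 273.15  # stress temperature in kelvin
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(85.0, 250.0, ea_ev=1.34)
years = af / (24 * 365)  # one stress-hour expressed in use-years
print(f"acceleration factor: {af:.3g}")
print(f"1 hour at 250 °C ≈ {years:.0f} years at 85 °C")  # roughly 100 years
```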

Using the extremely high-resolution imaging of atomic force microscopy, the researchers imaged filaments, which measure only about five nanometers or 20 atoms wide, forming within the one micron wide RRAM device.  

“We were surprised that we could find the filament in the device. It’s like finding a needle in a haystack,” Li said. 

The research team found that different sized filaments yielded different retention behavior. Filaments smaller than about 5 nanometers dissolved over time, whereas filaments larger than 5 nanometers strengthened over time. The size-based difference cannot be explained by diffusion alone.
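One way to see why a size threshold appears at all is a textbook critical-nucleus argument: a filament pays an interfacial-energy cost that grows roughly linearly with its radius while gaining a de-mixing (bulk) energy that grows roughly quadratically. The coefficients below are arbitrary numbers chosen so the crossover lands near 5 nanometers; they are not fitted to the devices in the paper.

```python
# Toy critical-size picture for the ~5 nm threshold: treat the filament as
# a nucleus whose free energy is an interfacial cost (growing like r) minus
# a bulk de-mixing gain (growing like r^2). Coefficients are illustrative.

A_SURFACE = 3.0  # interfacial cost per nm (arbitrary units)
B_BULK = 0.3     # bulk de-mixing gain per nm^2 (arbitrary units)

def free_energy(r_nm):
    return A_SURFACE * r_nm - B_BULK * r_nm**2

def fate(r_nm):
    """If growing costs energy (dE/dr > 0) the filament shrinks; otherwise it grows."""
    dE_dr = A_SURFACE - 2 * B_BULK * r_nm
    return "dissolves" if dE_dr > 0 else "strengthens"

r_critical = A_SURFACE / (2 * B_BULK)  # 5.0 nm for these coefficients
print(f"critical radius: {r_critical} nm")
print(f"4 nm filament {fate(4.0)}; 6 nm filament {fate(6.0)}")
```

Below the critical radius, growth is uphill in energy, so small filaments shrink; above it, growth is downhill, so large filaments strengthen, mirroring the size-based behavior the team observed.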

Together, experimental results and models incorporating thermodynamic principles showed the formation and stability of conductive filaments depend on phase separation. 

The research team leveraged phase separation to extend memory retention from one day to well over 10 years in a rad-hard memory chip—a memory device built to withstand radiation exposure for use in space exploration. 

Other applications include in-memory computing for more energy efficient AI applications or memory devices for electronic skin—a stretchable electronic interface designed to mimic the sensory capabilities of human skin. Also known as e-skin, this material could be used to provide sensory feedback to prosthetic limbs, create new wearable fitness trackers or help robots develop tactile sensing for delicate tasks.

“We hope that our findings can inspire new ways to use phase separation to create information storage devices,” Li said.

Researchers at Ford Research, Dearborn; Oak Ridge National Laboratory; University at Albany; NY CREATES; Sandia National Laboratories; and Arizona State University, Tempe contributed to this study.

Here’s a link to and a citation for the paper,

Thermodynamic origin of nonvolatility in resistive memory by Jingxian Li, Anirudh Appachar, Sabrina L. Peczonczyk, Elisa T. Harrison, Anton V. Ievlev, Ryan Hood, Dongjae Shin, Sangmin Yoo, Brianna Roest, Kai Sun, Karsten Beckmann, Olya Popova, Tony Chiang, William S. Wahby, Robin B. Jacobs-Gedrim, Matthew J. Marinella, Petro Maksymovych, John T. Heron, Nathaniel Cady, Wei D. Lu, Suhas Kumar, A. Alec Talin, Wenhao Sun, Yiyang Li. Matter DOI: https://doi.org/10.1016/j.matt.2024.07.018 Published online: August 26, 2024

This paper is behind a paywall.

World’s smallest disco party features nanoscale disco ball

I haven’t featured one of these ‘fun’ (world’s smallest xxx) announcements in a long time. An August 14, 2024 news item on phys.org announces the world’s smallest disco party and a step towards exploring quantum gravity. Note: Links have been removed,

Physicists at Purdue [Purdue University, Indiana, US] are throwing the world’s smallest disco party. The disco ball itself is a fluorescent nanodiamond, which they have levitated and spun at incredibly high speeds. The fluorescent diamond emits and scatters multicolor lights in different directions as it rotates. The party continues as they study the effects of fast rotation on the spin qubits within their system and are able to observe the Berry phase.

The team, led by Tongcang Li, professor of Physics and Astronomy and Electrical and Computer Engineering at Purdue University, published their results in Nature Communications. Reviewers of the publication described this work as “arguably a groundbreaking moment for the study of rotating quantum systems and levitodynamics” and “a new milestone for the levitated optomechanics community.”

This graph illustrates a diamond particle levitated above a surface ion trap. The fluorescent diamond nanoparticle is driven to rotate at a high speed (up to 1.2 billion rpm) by alternating voltages applied to the four corner electrodes. This rapid rotation induces a phase in the nitrogen-vacancy electron spins inside the diamond. The diagram in the top left corner depicts the atomic structure of a nitrogen-vacancy spin defect inside the diamond. Graphic provided by Kunhong Shen.

An August 13, 2024 Purdue University news release (also on EurekAlert but published August 14, 2024) by Cheryl Pierce, which originated the news item, explains what makes this work so exciting (!), Note: Links have been removed,

“Imagine tiny diamonds floating in an empty space or vacuum. Inside these diamonds, there are spin qubits that scientists can use to make precise measurements and explore the mysterious relationship between quantum mechanics and gravity,” explains Li, who is also a member of the Purdue Quantum Science and Engineering Institute.  “In the past, experiments with these floating diamonds had trouble in preventing their loss in vacuum and reading out the spin qubits. However, in our work, we successfully levitated a diamond in a high vacuum using a special ion trap. For the first time, we could observe and control the behavior of the spin qubits inside the levitated diamond in high vacuum.”

The team made the diamonds rotate incredibly fast—up to 1.2 billion times per minute! By doing this, they were able to observe how the rotation affected the spin qubits in a unique way known as the Berry phase.

“This breakthrough helps us better understand and study the fascinating world of quantum physics,” he says.

The fluorescent nanodiamonds, with an average diameter of about 750 nm, were produced through high-pressure, high-temperature synthesis. These diamonds were irradiated with high-energy electrons to create nitrogen-vacancy color centers, which host electron spin qubits. When illuminated by a green laser, they emitted red light, which was used to read out their electron spin states. An additional infrared laser was shone at the levitated nanodiamond to monitor its rotation. Like a disco ball, as the nanodiamond rotated, the direction of the scattered infrared light changed, carrying the rotation information of the nanodiamond.
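A few back-of-envelope numbers fall out of the figures quoted in the release (1.2 billion rpm, roughly 750 nm average diameter):

```python
import math

# Back-of-envelope numbers for the spinning nanodiamond, using figures
# quoted in the release: 1.2 billion rpm and ~750 nm average diameter.

rpm = 1.2e9
diameter_m = 750e-9

freq_hz = rpm / 60.0                         # rotations per second
period_ns = 1e9 / freq_hz                    # time for one full rotation, in ns
rim_speed = math.pi * diameter_m * freq_hz   # linear speed at the particle's rim

print(f"rotation frequency: {freq_hz / 1e6:.0f} MHz")  # 20 MHz
print(f"rotation period: {period_ns:.0f} ns")          # 50 ns
print(f"rim speed: {rim_speed:.0f} m/s")               # ~47 m/s
```

In other words, the disco ball completes a full turn every 50 nanoseconds, with its rim moving at tens of meters per second.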

The authors of this paper were mostly from Purdue University and are members of Li’s research group: Yuanbin Jin (postdoc), Kunhong Shen (PhD student), Xingyu Gao (PhD student) and Peng Ju (recent PhD graduate). Li, Jin, Shen, and Ju conceived and designed the project and Jin and Shen built the setup. Jin subsequently performed measurements and calculations and the team collectively discussed the results. Two non-Purdue authors are Alejandro Grine, principal member of technical staff at Sandia National Laboratories, and Chong Zu, assistant professor at Washington University in St. Louis. Li’s team discussed the experiment results with Grine and Zu who provided suggestions for improvement of the experiment and manuscript.

“For the design of our integrated surface ion trap,” explains Jin, “we used a commercial software, COMSOL Multiphysics, to perform 3D simulations. We calculate the trapping position and the microwave transmittance using different parameters to optimize the design. We added extra electrodes to conveniently control the motion of a levitated diamond. And for fabrication, the surface ion trap is fabricated on a sapphire wafer using photolithography. A 300-nm-thick gold layer is deposited on the sapphire wafer to create the electrodes of the surface ion trap.”

So which way are the diamonds spinning, and can their speed or direction be manipulated? Shen says yes, they can adjust the spin direction and levitation.

“We can adjust the driving voltage to change the spinning direction,” he explains. “The levitated diamond can rotate around the z-axis (which is perpendicular to the surface of the ion trap), shown in the schematic, either clockwise or counterclockwise, depending on our driving signal. If we don’t apply the driving signal, the diamond will spin omnidirectionally, like a ball of yarn.”

Levitated nanodiamonds with embedded spin qubits have been proposed for precision measurements and creating large quantum superpositions to test the limit of quantum mechanics and the quantum nature of gravity.

“General relativity and quantum mechanics are two of the most important scientific breakthroughs in the 20th century. However, we still do not know how gravity might be quantized,” says Li. “Achieving the ability to study quantum gravity experimentally would be a tremendous breakthrough. In addition, rotating diamonds with embedded spin qubits provide a platform to study the coupling between mechanical motion and quantum spins.”

This discovery could have a ripple effect in industrial applications. Li says that levitated micro- and nano-scale particles in vacuum can serve as excellent accelerometers and electric field sensors. For example, the US Air Force Research Laboratory (AFRL) is using optically-levitated nanoparticles to develop solutions for critical problems in navigation and communication.

“At Purdue University, we have state-of-the-art facilities for our research in levitated optomechanics,” says Li. “We have two specialized, home-built systems dedicated to this area of study. Additionally, we have access to the shared facilities at the Birck Nanotechnology Center, which enables us to fabricate and characterize the integrated surface ion trap on campus. We are also fortunate to have talented students and postdocs capable of conducting cutting-edge research. Furthermore, my group has been working in this field for ten years, and our extensive experience has allowed us to make rapid progress.”

Quantum research is one of four key pillars of the Purdue Computes initiative, which emphasizes the university’s extensive technological and computational environment.

This research was supported by the National Science Foundation (grant number PHY-2110591), the Office of Naval Research (grant number N00014-18-1-2371), and the Gordon and Betty Moore Foundation (grant DOI 10.37807/gbmf12259). The project is also partially supported by the Laboratory Directed Research and Development program at Sandia National Laboratories.

Here’s a link to and a citation for the paper,

Quantum control and Berry phase of electron spins in rotating levitated diamonds in high vacuum by Yuanbin Jin, Kunhong Shen, Peng Ju, Xingyu Gao, Chong Zu, Alejandro J. Grine & Tongcang Li. Nature Communications volume 15, Article number: 5063 (2024) DOI: https://doi.org/10.1038/s41467-024-49175-3 Published online: 13 June 2024

This paper is open access.

Synaptic transistors for brainlike computers based on (more environmentally friendly) graphene

An August 9, 2022 news item on ScienceDaily describes research investigating materials other than silicon for neuromorphic (brainlike) computing purposes,

Computers that think more like human brains are inching closer to mainstream adoption. But many unanswered questions remain. Among the most pressing: what types of materials can serve as the best building blocks to unlock the potential of this new style of computing?

For most traditional computing devices, silicon remains the gold standard. However, there is a movement to use more flexible, efficient and environmentally friendly materials for these brain-like devices.

In a new paper, researchers from The University of Texas at Austin developed synaptic transistors for brain-like computers using the thin, flexible material graphene. These transistors are similar to synapses in the brain, which connect neurons to each other.

An August 8, 2022 University of Texas at Austin news release (also on EurekAlert but published August 9, 2022), which originated the news item, provides more detail about the research,

“Computers that think like brains can do so much more than today’s devices,” said Jean Anne Incorvia, an assistant professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the lead author on the paper published today in Nature Communications. “And by mimicking synapses, we can teach these devices to learn on the fly, without requiring huge training methods that take up so much power.”

The Research: A combination of graphene and Nafion, a polymer membrane material, makes up the backbone of the synaptic transistor. Together, these materials demonstrate key synaptic-like behaviors — most importantly, the ability for the pathways to strengthen over time as they are used more often, a type of neural muscle memory. In computing, this means that devices will be able to get better at tasks like recognizing and interpreting images over time and do it faster.
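The use-dependent strengthening ("neural muscle memory") described above can be sketched with a toy weight-update rule; the learning rate and saturation limit are illustrative assumptions, not the device physics of the graphene transistor.

```python
# Toy model of use-dependent synaptic strengthening: each pulse through a
# pathway nudges its weight toward a saturation limit. The update rule and
# constants are illustrative, not the physics of the actual device.

def potentiate(weight, pulses, rate=0.1, w_max=1.0):
    """Each pulse moves the weight a fixed fraction of the way toward w_max."""
    for _ in range(pulses):
        weight += rate * (w_max - weight)
    return weight

for n in (1, 5, 20):
    print(f"after {n:2d} pulses: weight = {potentiate(0.2, n):.3f}")
# the weight rises toward 1.0 the more often the pathway is used
```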

Another important finding is that these transistors are biocompatible, which means they can interact with living cells and tissue. That is key for potential applications in medical devices that come into contact with the human body. Most materials used for these early brain-like devices are toxic, so they would not be able to contact living cells in any way.

Why It Matters: With new high-tech concepts like self-driving cars, drones and robots, we are reaching the limits of what silicon chips can efficiently do in terms of data processing and storage. For these next-generation technologies, a new computing paradigm is needed. Neuromorphic devices mimic processing capabilities of the brain, a powerful computer for immersive tasks.

“Biocompatibility, flexibility, and softness of our artificial synapses is essential,” said Dmitry Kireev, a post-doctoral researcher who co-led the project. “In the future, we envision their direct integration with the human brain, paving the way for futuristic brain prosthesis.”

Will It Really Happen: Neuromorphic platforms are starting to become more common. Leading chipmakers such as Intel and Samsung have either produced neuromorphic chips already or are in the process of developing them. However, current chip materials place limitations on what neuromorphic devices can do, so academic researchers are working hard to find the perfect materials for soft brain-like computers.

“It’s still a big open space when it comes to materials; it hasn’t been narrowed down to the next big solution to try,” Incorvia said. “And it might not be narrowed down to just one solution, with different materials making more sense for different applications.”

The Team: The research was led by Incorvia and Deji Akinwande, professor in the Department of Electrical and Computer Engineering. The two have collaborated many times in the past, and Akinwande is a leading expert in graphene, using it in multiple research breakthroughs, most recently as part of a wearable electronic tattoo for blood pressure monitoring.

The idea for the project was conceived by Samuel Liu, a Ph.D. student and first author on the paper, in a class taught by Akinwande. Kireev then suggested the specific project. Harrison Jin, an undergraduate electrical and computer engineering student, measured the devices and analyzed data.

The team collaborated with T. Patrick Xiao and Christopher Bennett of Sandia National Laboratories, who ran neural network simulations and analyzed the resulting data.

Here’s a link to and a citation for the ‘graphene transistor’ paper,

Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing by Dmitry Kireev, Samuel Liu, Harrison Jin, T. Patrick Xiao, Christopher H. Bennett, Deji Akinwande & Jean Anne C. Incorvia. Nature Communications volume 13, Article number: 4386 (2022) DOI: https://doi.org/10.1038/s41467-022-32078-6 Published: 28 July 2022

This paper is open access.

Neuromorphic hardware could yield computational advantages for more than just artificial intelligence

Neuromorphic (brainlike) computing doesn’t have to be used for cognitive tasks only according to a research team at the US Dept. of Energy’s Sandia National Laboratories as per their March 11, 2022 news release by Neal Singer (also on EurekAlert but published March 10, 2022), Note: Links have been removed,

With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.

A random walk diffusion model based on data from Sandia National Laboratories algorithms running on an Intel Loihi neuromorphic platform. Video courtesy of Sandia National Laboratories. …

The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.

“Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”

In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.

The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.

“These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”

Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

Franke models photon and electron radiation to understand their effects on components.

The team successfully applied neuromorphic-computing algorithms to model random walks of gaseous molecules diffusing through a barrier, a basic chemistry problem, using the 50-million-neuron Loihi platform Sandia received approximately a year and a half ago from Intel Corp., said Aimone. “Then we showed that our algorithm can be extended to more sophisticated diffusion processes useful in a range of applications.”

The claims are not meant to challenge the primacy of standard computing methods used to run utilities, desktops and phones. “There are, however, areas in which the combination of computing speed and lower energy costs may make neuromorphic computing the ultimately desirable choice,” he said.

Showing a neuromorphic advantage, both the IBM TrueNorth and Intel Loihi neuromorphic chips observed by Sandia National Laboratories researchers were significantly more energy efficient than conventional computing hardware. The graph shows Loihi can perform about 10 times more calculations per unit of energy than a conventional processor. Energy is the limiting factor — more chips can be inserted to run things in parallel, thus faster, but the same electric bill occurs whether it is one computer doing everything or 10,000 computers doing the work. Image courtesy of Sandia National Laboratories.

Unlike the difficulties posed by adding qubits to quantum computers — another interesting method of moving beyond the limitations of conventional computing — chips containing artificial neurons are cheap and easy to install, Aimone said.

There can still be a high cost for moving data on or off the neurochip processor. “As you collect more, it slows down the system, and eventually it won’t run at all,” said Sandia mathematician and paper author William Severa. “But we overcame this by configuring a small group of neurons that effectively computed summary statistics, and we output those summaries instead of the raw data.”

Severa wrote several of the experiment’s algorithms.
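The "summary statistics instead of raw data" trick Severa describes can be illustrated with Welford's online algorithm: the device streams out a running count, mean, and variance rather than every raw sample. This is plain Python, not an actual on-chip neuron configuration.

```python
# Streaming summary statistics (Welford's online algorithm) as a sketch of
# outputting summaries instead of raw data. Plain Python, not a neuromorphic
# implementation.

class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def summary(self):
        var = self.m2 / (self.n - 1) if self.n > 1 else 0.0
        return self.n, self.mean, var

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
print(stats.summary())  # 8 samples, mean 5.0, sample variance ~4.57
```

Only three numbers leave the "device" no matter how many samples were processed, which is exactly why the approach avoids the data-movement bottleneck Severa mentions.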

Like the brain, neuromorphic computing works by electrifying small pin-like structures, adding tiny charges emitted from surrounding sensors until a certain electrical level is reached. Then the pin, like a biological neuron, flashes a tiny electrical burst, an action known as spiking. Unlike the metronomical regularity with which information is passed along in conventional computers, said Aimone, the artificial neurons of neuromorphic computing flash irregularly, as biological ones do in the brain, and so may take longer to transmit information.

But because the process only draws energy from sensors and neurons that contribute data, it requires less energy than formal computing, which must poll every processor whether contributing or not. The conceptually bio-based process has another advantage: its computing and memory components exist in the same structure, while conventional computing uses up energy on distant transfers between these two functions.

The slow reaction time of the artificial neurons may initially slow down solutions, but this factor disappears as the number of neurons increases, because more information becomes available in the same time period to be totaled, said Aimone.
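The spiking behavior described above can be sketched as a minimal leaky integrate-and-fire loop: charge accumulates, leaks a little each step, and triggers a spike and reset when it crosses a threshold. The threshold, leak, and input values are illustrative, not parameters of any particular neuromorphic chip.

```python
# Minimal leaky integrate-and-fire sketch of the spiking described above:
# charge accumulates, leaks a little each step, and crossing a threshold
# emits a spike and resets. All parameters are illustrative.

def run_lif(inputs, threshold=1.0, leak=0.05):
    v = 0.0        # membrane-like charge level
    spikes = []
    for t, current in enumerate(inputs):
        v = v * (1.0 - leak) + current  # leaky integration of incoming charge
        if v >= threshold:
            spikes.append(t)            # emit a spike ...
            v = 0.0                     # ... and reset
    return spikes

# Irregular input produces irregularly timed spikes:
print(run_lif([0.5, 0.1, 0.7, 0.2, 0.9, 0.0, 0.4, 0.6, 0.3, 0.8]))  # [2, 4, 8]
```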

The process begins by using a Markov chain — a mathematical construct where, like a Monopoly gameboard, the next outcome depends only on the current state and not the history of all previous states. That randomness contrasts, said Sandia mathematician and paper author Darby Smith, with most linked events. For example, he said, the number of days a patient must remain in the hospital is at least partially determined by the preceding length of stay.

Beginning with the Markov random basis, the researchers used Monte Carlo simulations, a fundamental computational tool, to run a series of random walks that attempt to cover as many routes as possible.

“Monte Carlo algorithms are a natural solution method for radiation transport problems,” said Franke. “Particles are simulated in a process that mirrors the physical process.”
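A plain-Python Monte Carlo version of the diffusion-through-a-barrier problem might look like the sketch below; the geometry and pass-through probability are illustrative assumptions, and nothing here attempts to reproduce the spiking implementation on Loihi.

```python
import random

# Monte Carlo random-walk sketch of molecules diffusing through a barrier.
# A walker moves +/-1 on a line, bounces off a container wall on the left,
# and passes through the barrier on the right with some probability.
# Geometry and probabilities are illustrative assumptions.

def walk_escapes(barrier_at=10, left_wall=-10, p_through=0.5, max_steps=5000):
    x = 0
    for _ in range(max_steps):
        x += random.choice((-1, 1))
        if x < left_wall:
            x = left_wall                 # reflecting container wall
        elif x == barrier_at:
            if random.random() < p_through:
                return True               # escaped through the barrier
            x -= 1                        # bounced back inside
    return False                          # still inside after the step budget

random.seed(0)
trials = 500
escaped = sum(walk_escapes() for _ in range(trials))
print(f"escape fraction: {escaped / trials:.2f}")  # near 1.0 for these settings
```

Each walker is one Monte Carlo sample; running many of them covers as many routes as possible, which is the structure the neuromorphic algorithm maps onto spiking neurons.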

The energy of each walk was recorded as a single energy spike by an artificial neuron reading the result of each walk in turn. “This neural net is more energy efficient in sum than recording each moment of each walk, as ordinary computing must do. This partially accounts for the speed and efficiency of the neuromorphic process,” said Aimone. More chips will help the process move faster using the same amount of energy, he said.

The next version of Loihi, said Sandia researcher Craig Vineyard, will increase its current chip scale from 128,000 neurons per chip to up to one million. Larger-scale systems then combine multiple chips on a board.

“Perhaps it makes sense that a technology like Loihi may find its way into a future high-performance computing platform,” said Aimone. “This could help make HPC much more energy efficient, climate-friendly and just all around more affordable.”

Here’s a link to and a citation for the paper,

Neuromorphic scaling advantages for energy-efficient random walk computations by J. Darby Smith, Aaron J. Hill, Leah E. Reeder, Brian C. Franke, Richard B. Lehoucq, Ojas Parekh, William Severa & James B. Aimone. Nature Electronics volume 5, pages 102–112 (2022) DOI: https://doi.org/10.1038/s41928-021-00705-7 Issue Date February 2022 Published 14 February 2022

This paper is open access.

Preventing corrosion in oil pipelines at the nanoscale

A June 7, 2019 news item on Azonano announces research into the process of oil pipeline corrosion at the nanoscale (Note: A link has been removed),

Steel pipes tend to rust and sooner or later fail. To anticipate disasters, oil companies and others have developed computer models to foretell when replacement is necessary. However, if the models themselves are incorrect, they can be amended only through experience, an expensive problem if detection happens too late.

Now, scientists at Sandia National Laboratories, the Department of Energy’s Center for Integrated Nanotechnologies and the Aramco Research Center in Boston have discovered that a specific form of nanoscale corrosion is responsible for suddenly diminishing the working life of steel pipes, according to a paper recently published in the journal npj Materials Degradation.

A June 6, 2019 Sandia National Laboratories news release (also on EurekAlert), which originated the news item, provides more technical detail,

Using transmission electron microscopes, which shoot electrons through targets to take pictures, the researchers were able to pin the root of the problem on a triple junction formed by a grain of cementite — a compound of carbon and iron — and two grains of ferrite, a type of iron. This junction forms frequently during most methods of fashioning steel pipe.

Iron atoms slip-sliding away

The researchers found that disorder in the atomic structure of those triple junctions made it easier for the corrosive solution to remove iron atoms along that interface.
In the experiment, the corrosive process stopped when the triple junction had been consumed by corrosion, but the crevice left behind allowed the corrosive solution to attack the interior of the steel.

“We thought of a possible solution for forming new pipe, based on changing the microstructure of the steel surface during forging, but it still needs to be tested and have a patent filed if it works,” said Sandia’s principal investigator Katherine Jungjohann, a paper author and lead microscopist. “But now we think we know where the major problem is.”

Aramco senior research scientist Steven Hayden added, “This was the world’s first real-time observation of nanoscale corrosion in a real-world material — carbon steel — which is the most prevalent type of steel used in infrastructure worldwide. Through it, we identified the types of interfaces and mechanisms that play a role in the initiation and progression of localized steel corrosion. The work is already being translated into models used to prevent corrosion-related catastrophes like infrastructure collapse and pipeline breaks.”

To mimic the chemical exposure of pipe in the field, where the expensive, delicate microscopes could not be moved, very thin pipe samples were exposed at Sandia to a variety of chemicals known to pass through oil pipelines.

Sandia researcher and paper author Khalid Hattar put a dry sample in a vacuum and used a transmission electron microscope to create maps of the steel grain types and their orientation, much as a pilot in a plane might use a camera to create area maps of farmland and roads, except that Hattar’s maps had approximately 6 nanometers resolution. (A nanometer is one-billionth of a meter.)

“By comparing these maps before and after the liquid corrosion experiments, a direct identification of the first phase that fell out of the samples could be identified, essentially identifying the weakest link in the internal microstructure,” Hattar said.

Sandia researcher and paper author Paul Kotula said, “The sample we analyzed was considered a low-carbon steel, but it has relatively high-carbon inclusions of cementite which are the sites of localized corrosion attacks.

“Our transmission electron microscopes were a key piece of this work, allowing us to image the sample, observe the corrosion process, and do microanalysis before and after the corrosion occurred to identify the part played by the ferrite and cementite grains and the corrosion product.”

When Hayden first started working in corrosion research, he said, “I was daunted at how complex and poorly understood corrosion is. This is largely because realistic experiments would involve observing complex materials like steel in liquid environments and with nanoscale resolution, and the technology to accomplish such a feat had only recently been developed and had yet to be applied to corrosion. Now we are optimistic that further work at Sandia and the Center for Integrated Nanotechnologies will allow us to rethink manufacturing processes to minimize the expression of the susceptible nanostructures that render the steel vulnerable to accelerated decay mechanisms.”

Invisible path of localized corrosion

Localized corrosion is different from uniform corrosion. Uniform corrosion occurs in bulk and is highly predictable. Localized corrosion is invisible, creating a pathway observable only at its endpoint, and it increases bulk corrosion rates by making it easier for corrosion to spread.

“A better understanding of the mechanisms by which corrosion initiates and progresses at these types of interfaces in steel will be key to mitigating corrosion-related losses,” according to the paper.

Here’s a link to and a citation for the paper,

Localized corrosion of low-carbon steel at the nanoscale by Steven C. Hayden, Claire Chisholm, Rachael O. Grudt, Jeffery A. Aguiar, William M. Mook, Paul G. Kotula, Tatiana S. Pilyugina, Daniel C. Bufford, Khalid Hattar, Timothy J. Kucharski, Ihsan M. Taie, Michele L. Ostraat & Katherine L. Jungjohann. npj Materials Degradation volume 3, Article number: 17 (2019) DOI: https://doi.org/10.1038/s41529-019-0078-1 Published 12 April 2019

This paper is open access.

Bad battery, good synapse from Stanford University

A May 4, 2019 news item on ScienceDaily announces the latest advance made by Stanford University and Sandia National Laboratories in the field of neuromorphic (brainlike) computing,

The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

In a paper published online by the journal Science on April 25 [2019], the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

An April 25, 2019 Stanford University news release (also on EurekAlert but published May 3, 2019) by Taylor Kubota, which originated the news item, expands on the theme,

A bad battery, a good synapse

The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than a more traditional computer system where the data is processed first and then later moved to storage.

Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.

In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to their speed – without seeing any degradation in their behavior.

“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”

Here’s a link to and a citation for the paper,

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing by Elliot J. Fuller, Scott T. Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D. James, Matthew J. Marinella, J. Joshua Yang, Alberto Salleo, A. Alec Talin. Science 25 Apr 2019: eaaw5581 DOI: 10.1126/science.aaw5581

This paper is behind a paywall.

For anyone interested in more about brainlike/brain-like/neuromorphic computing/neuromorphic engineering/memristors, use any or all of those terms in this blog’s search engine.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict to within 1 percent uncertainty what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
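The programming idea can be sketched as a toy write-verify loop (an illustrative model, not the published device physics): pulse the device, read back its state, and stop once it is within 1 percent of the target.

```python
def program_synapse(target, pulse=0.01, tol=0.01):
    """Toy write-verify loop: nudge a hypothetical device state toward
    `target` with fixed charge pulses until the relative error is
    within `tol` (1 percent). Illustrative only -- the real device's
    response to each pulse is not modeled here."""
    state = 0.0
    steps = 0
    while abs(state - target) / target > tol:
        state += pulse if state < target else -pulse
        steps += 1
    return state, steps

state, steps = program_synapse(0.5)  # program toward a mid-range state
```

The non-volatility described above corresponds to `state` simply persisting after the loop ends, with no refresh required.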

Testing a network of artificial synapses

Only one artificial synapse has been produced but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
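As a rough illustration of what 500 states buys over a binary transistor, here is a hypothetical linear mapping of continuous neural-network weights onto 500 evenly spaced conductance levels; the worst-case rounding error is half a level, small enough that quantized weights stay close to their trained values.

```python
def quantize(w, n_states=500, w_min=-1.0, w_max=1.0):
    """Map a continuous weight onto one of n_states evenly spaced
    levels. Assumes a hypothetical linear device response; real
    conductance states need not be evenly spaced."""
    w = min(max(w, w_min), w_max)        # clip to the device's range
    step = (w_max - w_min) / (n_states - 1)
    return w_min + round((w - w_min) / step) * step

# worst-case quantization error over a fine grid of weights in [-1, 1]
err = max(abs(quantize(w / 1000) - w / 1000) for w in range(-1000, 1001))
```

With 500 levels the step size is about 0.004, so `err` stays near 0.002; a two-state device would round the same weights to ±1.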

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Solar cells and ‘tinkertoys’

A Nov. 3, 2014 news item on Nanowerk features a project researchers hope will improve photovoltaic efficiency and make solar cells competitive with other sources of energy,

Researchers at Sandia National Laboratories have received a $1.2 million award from the U.S. Department of Energy’s SunShot Initiative to develop a technique that they believe will significantly improve the efficiencies of photovoltaic materials and help make solar electricity cost-competitive with other sources of energy.

The work builds on Sandia’s recent successes with metal-organic framework (MOF) materials by combining them with dye-sensitized solar cells (DSSC).

“A lot of people are working with DSSCs, but we think our expertise with MOFs gives us a tool that others don’t have,” said Sandia’s Erik Spoerke, a materials scientist with a long history of solar cell exploration at the labs.

A Nov. 3, 2014 Sandia National Laboratories news release, which originated the news item, describes the project and the technology in more detail,

Sandia’s project is funded through SunShot’s Next Generation Photovoltaic Technologies III program, which sponsors projects that apply promising basic materials science that has been proven at the materials properties level to demonstrate photovoltaic conversion improvements to address or exceed SunShot goals.

The SunShot Initiative is a collaborative national effort that aggressively drives innovation with the aim of making solar energy fully cost-competitive with traditional energy sources before the end of the decade. Through SunShot, the Energy Department supports efforts by private companies, universities and national laboratories to drive down the cost of solar electricity to 6 cents per kilowatt-hour.

DSSCs provide basis for future advancements in solar electricity production

Dye-sensitized solar cells, invented in the 1980s, use dyes designed to efficiently absorb light in the solar spectrum. The dye is mated with a semiconductor, typically titanium dioxide, that facilitates conversion of the energy in the optically excited dye into usable electrical current.

DSSCs are considered a significant advancement in photovoltaic technology since they separate the various processes of generating current from a solar cell. Michael Grätzel, a professor at the École Polytechnique Fédérale de Lausanne in Switzerland, was awarded the 2010 Millennium Technology Prize for inventing the first high-efficiency DSSC.

“If you don’t have everything in the DSSC dependent on everything else, it’s a lot easier to optimize your photovoltaic device in the most flexible and effective way,” explained Sandia senior scientist Mark Allendorf. DSSCs, for example, can capture more of the sun’s energy than silicon-based solar cells by using varied or multiple dyes and also can use different molecular systems, Allendorf said.

“It becomes almost modular in terms of the cell’s components, all of which contribute to making electricity out of sunlight more efficiently,” said Spoerke.

MOFs’ structure, versatility and porosity help overcome DSSC limitations

Though a source of optimism for the solar research community, DSSCs possess certain challenges that the Sandia research team thinks can be overcome by combining them with MOFs.

Allendorf said researchers hope to use the ordered structure and versatile chemistry of MOFs to help the dyes in DSSCs absorb more solar light, which he says is a fundamental limit on their efficiency.

“Our hypothesis is that we can put a thin layer of MOF on top of the titanium dioxide, thus enabling us to order the dye in exactly the way we want it,” Allendorf explained. That, he said, should avoid the efficiency-decreasing problem of dye aggregation, since the dye would then be locked into the MOF’s crystalline structure.

MOFs are highly-ordered materials that also offer high levels of porosity, said Allendorf, a MOF expert and 29-year veteran of Sandia. He calls the materials “Tinkertoys for chemists” because of the ease with which new structures can be envisioned and assembled. [emphasis mine]

Allendorf said the unique porosity of MOFs will allow researchers to add a second dye, placed into the pores of the MOF, that will cover additional parts of the solar spectrum that weren’t covered with the initial dye. Finally, he and Spoerke are convinced that MOFs can help improve the overall electron charge and flow of the solar cell, which currently faces instability issues.

“Essentially, we believe MOFs can help to more effectively organize the electronic and nano-structure of the molecules in the solar cell,” said Spoerke. “This can go a long way toward improving the efficiency and stability of these assembled devices.”

In addition to the Sandia team, the project includes researchers at the University of Colorado-Boulder, particularly Steve George, an expert in a thin film technology known as atomic layer deposition.

The technique, said Spoerke, is important in that it offers a pathway for highly controlled materials chemistry with potentially low-cost manufacturing of the DSSC/MOF process.

“With the combination of MOFs, dye-sensitized solar cells and atomic layer deposition, we think we can figure out how to control all of the key cell interfaces and material elements in a way that’s never been done before,” said Spoerke. “That’s what makes this project exciting.”

Here’s a picture showing an early Tinkertoy set,

Original Tinkertoy, Giant Engineer #155. Questor Education Products Co., c.1950 [downloaded from http://en.wikipedia.org/wiki/Tinkertoy#mediaviewer/File:Tinkertoy_300126232168.JPG]

The Tinkertoy entry on Wikipedia has this,

The Tinkertoy Construction Set is a toy construction set for children. It was created in 1914—six years after Frank Hornby’s Meccano sets—by Charles H. Pajeau, Robert Pettit and Gordon Tinker in Evanston, Illinois. Pajeau, a stonemason, designed the toy after seeing children play with sticks and empty spools of thread. He and Pettit set out to market a toy that would allow and inspire children to use their imaginations. At first, this did not go well, but after a year or two over a million were sold.

Shrinky Dinks, Tinkertoys and Lego have all been mentioned here in conjunction with lab work. I’m always delighted to see scientists working with or using children’s toys as inspiration of one type or another.

Sandia National Laboratories looking for commercial partners to bring titanium dioxide nanoparticles (5 nm in diameter) to market

Sandia National Laboratories (Sandia Labs) doesn’t ask directly, but I think the call for partners is more than heavily implied. Let’s start with a June 17, 2014 news item on ScienceDaily,

Sandia National Laboratories has come up with an inexpensive way to synthesize titanium-dioxide nanoparticles and is seeking partners who can demonstrate the process at industrial scale for everything from solar cells to light-emitting diodes (LEDs).

Titanium-dioxide (TiO2) nanoparticles show great promise as fillers to tune the refractive index of anti-reflective coatings on signs and optical encapsulants for LEDs, solar cells and other optical devices. Optical encapsulants are coverings or coatings, usually made of silicone, that protect a device.

Industry has largely shunned TiO2 nanoparticles because they’ve been difficult and expensive to make, and current methods produce particles that are too large.

Sandia became interested in TiO2 for optical encapsulants because of its work on LED materials for solid-state lighting.

Current production methods for TiO2 often require high-temperature processing or costly surfactants — molecules that bind to something to make it soluble in another material, like dish soap does with fat.

Those methods produce less-than-ideal nanoparticles that are very expensive, can vary widely in size and show significant particle clumping, called agglomeration.

Sandia’s technique, on the other hand, uses readily available, low-cost materials and results in nanoparticles that are small, roughly uniform in size and don’t clump.

“We wanted something that was low cost and scalable, and that made particles that were very small,” said researcher Todd Monson, who along with principal investigator Dale Huber patented the process in mid-2011 as “High-yield synthesis of brookite TiO2 nanoparticles.” [emphases mine]

A June 17, 2014 Sandia Labs news release, which originated the news item, goes on to describe the technology (Note: Links have been removed),

Their (Monson and Huber) method produces nanoparticles roughly 5 nanometers in diameter, approximately 100 times smaller than the wavelength of visible light, so there’s little light scattering, Monson said.

“That’s the advantage of nanoparticles — not just nanoparticles, but small nanoparticles,” he said.

Scattering decreases the amount of light transmission. Less scattering also can help extract more light, in the case of an LED, or capture more light, in the case of a solar cell.
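The size argument can be made concrete with the Rayleigh scaling law: for particles much smaller than the wavelength, scattered intensity grows roughly as d⁶/λ⁴. The sketch below drops all prefactors and uses illustrative numbers, so only the ratio is meaningful.

```python
def rayleigh_relative(d_nm, wavelength_nm=550):
    """Relative Rayleigh scattering strength, proportional to
    d^6 / lambda^4 (valid only when d << lambda). Prefactors
    omitted; the absolute value has no physical meaning."""
    return d_nm**6 / wavelength_nm**4

# 100 nm particles vs the 5 nm particles described above,
# both in green light (550 nm)
ratio = rayleigh_relative(100) / rayleigh_relative(5)
```

Because 100/5 = 20 and scattering scales as the sixth power of diameter, the larger particles scatter on the order of 20⁶ ≈ 64 million times more strongly, which is why "small nanoparticles" matter for transparent encapsulants.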

TiO2 can increase the refractive index of materials, such as silicone in lenses or optical encapsulants. Refractive index is the ability of material to bend light. Eyeglass lenses, for example, have a high refractive index.
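One standard way to estimate how a nanoparticle filler raises a composite's refractive index is the Maxwell Garnett mixing rule. The sketch below assumes non-absorbing spherical particles and illustrative indices (silicone n ≈ 1.41, TiO2 n ≈ 2.5); it is not taken from the Sandia work.

```python
def maxwell_garnett(n_matrix, n_particle, fill):
    """Maxwell Garnett effective refractive index for a dilute
    dispersion of spherical particles at volume fraction `fill`.
    Assumes non-absorbing media and well-separated particles."""
    em, ep = n_matrix**2, n_particle**2          # permittivities
    num = ep + 2 * em + 2 * fill * (ep - em)
    den = ep + 2 * em - fill * (ep - em)
    return (em * num / den) ** 0.5

# silicone loaded with 20 volume percent TiO2 (illustrative values)
n_eff = maxwell_garnett(1.41, 2.5, 0.20)
```

With these assumed numbers the composite index comes out near 1.59, noticeably above plain silicone, which is the lever the news release describes for tuning encapsulant optics.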

Practical nanoparticles must be able to handle different surfactants so they’re soluble in a wide range of solvents. Different applications require different solvents for processing.

“If someone wants to use TiO2 nanoparticles in a range of different polymers and applications, it’s convenient to have your particles be suspension-stable in a wide range of solvents as well,” Monson said. “Some biological applications may require stability in aqueous-based solvents, so it could be very useful to have surfactants available that can make the particles stable in water.”

The researchers came up with their synthesis technique by pooling their backgrounds — Huber’s expertise in nanoparticle synthesis and polymer chemistry and Monson’s knowledge of materials physics. The work was done under a Laboratory Directed Research and Development project Huber began in 2005.

“The original project goals were to investigate the basic science of nanoparticle dispersions, but when this synthesis was developed near the end of the project, the commercial applications were obvious,” Huber said. The researchers subsequently refined the process to make particles easier to manufacture.

Existing synthesis methods for TiO2 particles were too costly and difficult to scale up to production volumes. In addition, chemical suppliers ship titanium-dioxide nanoparticles dried and without surfactants, so particles clump together and are impossible to break up. “Then you no longer have the properties you want,” Monson said.

The researchers tried various types of alcohol as an inexpensive solvent to see if they could get a common titanium source, titanium isopropoxide, to react with water and alcohol.

The biggest challenge, Monson said, was figuring out how to control the reaction, since adding water to titanium isopropoxide most often results in a fast reaction that produces large chunks of TiO2, rather than nanoparticles. “So the trick was to control the reaction by controlling the addition of water to that reaction,” he said.

Some textbooks dismissed the titanium isopropoxide-water-alcohol method as a way of making TiO2 nanoparticles. Huber and Monson, however, persisted until they discovered how to add water very slowly by putting it into a dilute solution of alcohol. “As we tweaked the synthesis conditions, we were able to synthesize nanoparticles,” Monson said.

Whoever wrote the news release now makes the plea which isn’t quite a plea (Note: A link has been removed),

The next step is to demonstrate synthesis at an industrial scale, which will require a commercial partner. Monson, who presented the work at Sandia’s fall Science and Technology Showcase, said Sandia has received inquiries from companies interested in commercializing the technology.

“Here at Sandia we’re not set up to produce the particles on a commercial scale,” he said. “We want them to pick it up and run with it and start producing these on a wide enough scale to sell to the end user.”

Sandia would synthesize a small number of particles, then work with a partner company to form composites and evaluate them to see if they can be used as better encapsulants for LEDs, flexible high-index refraction composites for lenses or solar concentrators. “I think it can meet quite a few needs,” Monson said.

I wish them good luck.