Tag Archives: US Department of Energy

Single layer graphene as a solid lubricant

Graphite (from which graphene springs) has been used as a solid lubricant for many years, but it has limitations that researchers at the US Dept. of Energy’s Argonne National Laboratory are attempting to overcome, possibly by replacing it with graphene. An Oct. 14, 2014 news item on phys.org describes the research (Note: A link has been removed),

Nanoscientist Anirudha Sumant and his colleagues at Argonne’s Center for Nanoscale Materials and Argonne’s Energy Systems division applied a one-atom-thick layer of graphene, a two-dimensional form of carbon, in between a steel ball and a steel disk. They found that just the single layer of graphene lasted for more than 6,500 “wear cycles,” a dramatic improvement over conventional lubricants like graphite or molybdenum disulfide.

An Oct. 13, 2014 Argonne National Laboratory news release by Jared Sagoff, which originated the news item, provides more information about this research (Note: A link has been removed),

“For comparison,” Sumant said, “conventional lubricants would need about 1,000 layers to last for 1,000 wear cycles. That’s a huge advantage in terms of cost savings with much better performance.”
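
To put those numbers side by side, here's a quick back-of-the-envelope comparison, a minimal Python sketch in which only the 6,500-cycle and 1,000-layer figures come from the release; the rest is simple arithmetic,

```python
# Back-of-the-envelope comparison of wear cycles per layer of lubricant,
# using the figures quoted in the Argonne news release.

graphene_layers, graphene_cycles = 1, 6500             # one monolayer, >6,500 wear cycles
conventional_layers, conventional_cycles = 1000, 1000  # ~1,000 layers for ~1,000 cycles

graphene_per_layer = graphene_cycles / graphene_layers
conventional_per_layer = conventional_cycles / conventional_layers

print(f"Graphene: {graphene_per_layer:.0f} wear cycles per layer")
print(f"Conventional solid lubricant: {conventional_per_layer:.0f} wear cycle(s) per layer")
print(f"Improvement: roughly {graphene_per_layer / conventional_per_layer:.0f}x per layer")
```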

Graphite has been used as an industrial lubricant for more than 40 years, but not without certain drawbacks, Sumant explained.  “Graphite is limited by the fact that it really works only in humid environments. If you have a dry setting, it’s not going to be nearly as effective,” he said.

This limitation arises from the fact that graphite – unlike graphene – has a three-dimensional structure.  The water molecules in the moist air create slipperiness by weaving themselves in between graphite’s carbon sheets. When there are not enough water molecules in the air, the material loses its slickness.

Molybdenum disulfide, another common lubricant, has the reverse problem, Sumant said. It works in dry environments but not well in wet ones. “Essentially the challenge is to find a single all-purpose lubricant that works well for mechanical systems, no matter where they are,” he said.

Graphene’s two-dimensional structure gives it a significant advantage. “The material is able to bond directly to the surface of the stainless steel ball, making it so perfectly even that even hydrogen atoms are not able to penetrate it,” said Argonne materials scientist Ali Erdemir, a collaborator on the study who tested graphene-coated steel surfaces in his lab.

In a previous study in Materials Today, Sumant and his colleagues showed that a few layers of graphene work equally well as a solid lubricant in both humid and dry environments, solving the 40-year-old puzzle of finding a flawless solid lubricant. However, the team wanted to go further and test just a single graphene layer.

While doing so in an environment containing molecules of pure hydrogen, they observed a dramatic improvement in graphene’s operational lifetime. When the graphene monolayer eventually starts to wear away, hydrogen atoms leap in to repair the lattice, like stitching a quilt back together. “Hydrogen can only get into the fabric where there is already an opening,” said Subramanian Sankaranarayanan, an Argonne computational scientist and co-author in this study. This means the graphene layer stays intact longer.

Researchers had previously done experiments to understand the mechanical strength of a single sheet of graphene, but the Argonne study is the first to explain the extraordinary wear resistance of one-atom-thick graphene.

Here’s a link to and a citation for the August 2014 study,

Extraordinary Macroscale Wear Resistance of One Atom Thick Graphene Layer by Diana Berman, Sanket A. Deshmukh, Subramanian K. R. S. Sankaranarayanan, Ali Erdemir, and Anirudha V. Sumant. Advanced Functional Materials DOI: 10.1002/adfm.201401755 Article first published online: 26 AUG 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This article is behind a paywall.

Trapping gases left from nuclear fuels

A July 20, 2014 news item on ScienceDaily provides some insight into recycling nuclear fuel,

When nuclear fuel gets recycled, the process releases radioactive krypton and xenon gases. Naturally occurring uranium in rock contaminates basements with the related gas radon. A new porous material called CC3 effectively traps these gases, and research appearing July 20 in Nature Materials shows how: by breathing enough to let the gases in but not out.

The CC3 material could be helpful in removing unwanted or hazardous radioactive elements from nuclear fuel or air in buildings and also in recycling useful elements from the nuclear fuel cycle. CC3 is much more selective in trapping these gases compared to other experimental materials. Also, CC3 will likely use less energy to recover elements than conventional treatments, according to the authors.

A July 21, 2014 US Department of Energy (DOE) Pacific Northwest National Laboratory (PNNL) news release (also on EurekAlert), which originated the news item despite the difference in dates, provides more details (Note: A link has been removed),

The team, made up of scientists at the University of Liverpool in the U.K., the Department of Energy’s Pacific Northwest National Laboratory, Newcastle University in the U.K., and Aix-Marseille Université in France, performed simulations and laboratory experiments to determine how — and how well — CC3 might separate these gases from exhaust or waste.

“Xenon, krypton and radon are noble gases, which are chemically inert. That makes it difficult to find materials that can trap them,” said coauthor Praveen Thallapally of PNNL. “So we were happily surprised at how easily CC3 removed them from the gas stream.”

Noble gases are rare in the atmosphere but some such as radon come in radioactive forms and can contribute to cancer. Others such as xenon are useful industrial gases in commercial lighting, medical imaging and anesthesia.

The conventional way to remove xenon from the air or recover it from nuclear fuel involves cooling the air far below the temperature at which water freezes. Such cryogenic separations are energy intensive and expensive. Researchers have been exploring materials called metal-organic frameworks, also known as MOFs, that could potentially trap xenon and krypton without the need for cryogenics. Although a leading MOF could remove xenon admirably at very low concentrations and at ambient temperatures, researchers wanted to find a material that performed even better.

Thallapally’s collaborator Andrew Cooper at the University of Liverpool and others had been researching materials called porous organic cages, whose molecular structures are made up of repeating units that form 3-D cages. Cages built from a molecule called CC3 are the right size to hold about three atoms of xenon, krypton or radon.

To test whether CC3 might be useful here, the team simulated on a computer CC3 interacting with atoms of xenon and other noble gases. The molecular structure of CC3 naturally expands and contracts. The researchers found this breathing created a hole in the cage that grew to 4.5 angstroms wide and shrunk to 3.6 angstroms. One atom of xenon is 4.1 angstroms wide, suggesting it could fit within the window if the cage opens long enough. (Krypton and radon are 3.69 angstroms and 4.17 angstroms wide, respectively, and it takes 10 million angstroms to span a millimeter.)

The computer simulations revealed that CC3 opens its windows big enough for xenon about 7 percent of the time, but that is enough for xenon to hop in. In addition, xenon has a higher likelihood of hopping in than hopping out, essentially trapping the noble gas inside.
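
To make the geometry concrete, here's a minimal Python sketch using only the sizes quoted above. It simply checks each atom's diameter against the breathing range of the cage window; it says nothing about the hopping statistics, which came from the molecular dynamics simulations,

```python
# Compare noble-gas atom diameters (in angstroms) with the breathing range
# of the CC3 cage window, using the figures quoted in the PNNL release.

window_min, window_max = 3.6, 4.5   # angstroms: window at its smallest and largest

atoms = {"krypton": 3.69, "xenon": 4.1, "radon": 4.17}

for name, diameter in atoms.items():
    if diameter <= window_min:
        verdict = "fits even when the window is fully contracted"
    elif diameter <= window_max:
        verdict = "fits only while the window breathes open"
    else:
        verdict = "never fits through the window"
    print(f"{name}: {diameter} angstroms -> {verdict}")

# For scale: 10 million angstroms span one millimeter.
print(f"A {window_max} angstrom opening is {window_max / 1e7:.1e} mm across")
```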

The team then tested how well CC3 could pull low concentrations of xenon and krypton out of air, a mix of gases that included oxygen, argon, carbon dioxide and nitrogen. With xenon at 400 parts per million and krypton at 40 parts per million, the researchers sent the mix through a sample of CC3 and measured how long it took for the gases to come out the other side.

Oxygen, nitrogen, argon and carbon dioxide — abundant components of air — traveled through the CC3 and continued to be measured for the experiment’s full 45-minute span. Xenon, however, stayed within the CC3 for 15 minutes, showing that CC3 could separate xenon from air.

In addition, CC3 trapped twice as much xenon as the leading MOF material. It also caught xenon 20 times more often than it caught krypton, a characteristic known as selectivity. The leading MOF only preferred xenon 7 times as much. These experiments indicated improved performance in two important characteristics of such a material, capacity and selectivity.
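
For anyone curious how a figure like "20 times" is arrived at, the standard adsorption-selectivity definition is the ratio of what is captured to what is offered. Here's a hedged Python sketch: the 400 ppm and 40 ppm feed concentrations come from the text, but the adsorbed amounts below are hypothetical placeholders, not measurements from the paper,

```python
# Standard adsorption selectivity: (ratio adsorbed) / (ratio in the feed gas).
# Feed concentrations are from the experiment described above; the adsorbed
# amounts are hypothetical placeholders chosen purely for illustration.

y_xe, y_kr = 400e-6, 40e-6   # feed mole fractions: 400 ppm xenon, 40 ppm krypton
q_xe, q_kr = 2.0, 0.01       # hypothetical adsorbed amounts (e.g., mmol per gram)

selectivity = (q_xe / q_kr) / (y_xe / y_kr)
print(f"Xe/Kr selectivity: {selectivity:.0f}")   # 20 with these placeholder numbers
```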

“We know that CC3 does this but we’re not sure why. Once we understand why CC3 traps the noble gases so easily, we can improve on it,” said Thallapally.

To explore whether MOFs and porous organic cages offer economic advantages, the researchers estimated the cost compared to cryogenic separations and determined they would likely be less expensive.

“Because these materials function well at ambient or close to ambient temperatures, the processes based on them are less energy intensive to use,” said PNNL’s Denis Strachan.

The material might also find use in pharmaceuticals. Most molecules come in right- and left-handed forms and often only one form works in people. In additional experiments, Cooper and colleagues in the U.K. tested CC3’s ability to distinguish and separate left- and right-handed versions of an alcohol. After separating left- and right-handed forms of CC3, the team showed in biochemical experiments that each form selectively trapped only one form of the alcohol.

The researchers have provided an image illustrating a CC3 cage,

Breathing room: In this computer simulation, light and dark purple highlight the cavities within the 3D pore structure of CC3. Courtesy: PNNL

Here’s a link to and a citation for the paper,

Separation of rare gases and chiral molecules by selective binding in porous organic cages by Linjiang Chen, Paul S. Reiss, Samantha Y. Chong, Daniel Holden, Kim E. Jelfs, Tom Hasell, Marc A. Little, Adam Kewley, Michael E. Briggs, Andrew Stephenson, K. Mark Thomas, Jayne A. Armstrong, Jon Bell, Jose Busto, Raymond Noel, Jian Liu, Denis M. Strachan, Praveen K. Thallapally, & Andrew I. Cooper. Nature Materials (2014) doi:10.1038/nmat4035 Published online 20 July 2014

This paper is behind a paywall.

Picasso, paint, and the hard x-ray nanoprobe

There’s the paint you put on your walls and there’s the paint you put on your body and there’s the paint artists use for their works of art. Well, it turns out that a very well known artist used common house paint to create some of his masterpieces,

Among the Picasso paintings in the Art Institute of Chicago collection, The Red Armchair is the most emblematic of his Ripolin usage and is the painting that was examined with APS X-rays at Argonne National Laboratory. To view a larger version of the image, click on it. Courtesy Art Institute of Chicago, Gift of Mr. and Mrs. Daniel Saidenberg (AIC 1957.72) © Estate of Pablo Picasso / Artists Rights Society (ARS), New York [downloaded from http://www.anl.gov/articles/high-energy-x-rays-shine-light-mystery-picasso-s-paints]

The Art Institute of Chicago teamed with the US Argonne National Laboratory to solve a decades-long mystery as to what kind of paint Picasso used. From the Feb. 8, 2013 news item on Azonano,

The Art Institute of Chicago teamed up with Argonne National Laboratory to unravel a decades-long debate among art scholars about what kind of paint Picasso used to create his masterpieces.

The results, published last month in the journal Applied Physics A: Materials Science & Processing, add significant weight to the widely held theory that Picasso was one of the first master painters to use common house paint rather than traditional artists’ paint. That switch in painting material gave birth to a new style of art marked by canvases covered in glossy images with marbling, muted edges, and occasional errant paint drips but devoid of brush marks. Fast-drying enamel house paint enabled this dramatic departure from the slow-drying, heavily blended oil paintings that dominated the art world up until Picasso’s time.

The key to decoding this long-standing mystery was the development of a unique high-energy X-ray instrument, called the hard X-ray nanoprobe, at the U.S. Department of Energy’s Advanced Photon Source (APS) X-ray facility and the Center for Nanoscale Materials, both housed at Argonne. The nanoprobe is designed to advance the development of high-performance materials and sustainable energies by giving scientists a close-up view of the type and arrangement of chemical elements in a material.

At that submicroscopic level is where science and art crossed paths.

The Argonne National Laboratory Feb. 6, 2013 news release by Tona Kunz, which originated the news item, provides more technical detail,

Volker Rose, a physicist at Argonne, uses the nanoprobe at the APS [Advanced Photon Source]/CNM [Center for Nanoscale Materials] to study zinc oxide, a key chemical used in wide-band-gap semiconductors. White paint contains the same chemical in varying amounts, depending on the type and brand of paint, which makes it a valuable clue for learning about Picasso’s work.

By comparing decades-old paint samples collected through eBay purchases with samples from Picasso paintings, scientists were able to learn that the chemical makeup of the paint used by Picasso matched the chemical makeup of the first commercial house paint, Ripolin. Scientists also learned about the correlation of the spacing of impurities at the nanoscale in zinc oxide, offering important clues to how zinc oxide could be modified to improve performance in a variety of products, including sensors for radiation detection, LEDs and energy-saving windows as well as liquid-crystal displays for computers, TVs and instrument panels.

“Everything that we learn about how materials are structured and how chemicals react at the nanolevel can help us in our quest to design a better and more sustainable future,” Rose said.

Physicists weren’t the first to investigate the question,

Many art conservators and historians have tried over the years to use traditional optical and electron microscopes to determine whether Picasso or one of his contemporaries was the first to break with the cultural tradition of professional painters using expensive paints designed specifically for their craft. Those art world detectives all failed, because traditional tools wouldn’t let them see deeply enough into the layers of paint or with enough resolution to distinguish between store-bought enamel paint and techniques designed to mimic its appearance.

“Appearances can deceive, so this is where art can benefit from scientific research,” said Francesca Casadio, senior conservator scientist at the Art Institute of Chicago, and co-lead author on the result publication. “We needed to reverse-engineer the paint so that we could figure out if there was a fingerprint that we could then go look for in the pictures around the world that are suspected to be painted with Ripolin, the first commercial brand of house paint.”

Just as criminals leave a signature at a crime scene, each batch of paint has a chemical signature determined by its ingredients and impurities from the area and time period it was made. These signatures can’t be imitated and lie in the nanoscale range.
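
The fingerprint analogy lends itself to a simple illustration. The Python sketch below matches an unknown white-paint sample to the closest reference batch by comparing trace-impurity profiles; every element choice and concentration in it is invented for the example, and the real analysis at the nanoprobe is of course far more involved,

```python
import math

# Hypothetical trace-impurity profiles (parts per million of selected elements)
# for reference white paints. Elements and numbers are invented to illustrate
# the fingerprint-matching idea; they are not actual measurements.
references = {
    "Ripolin batch A": {"Fe": 120, "Pb": 15, "Cd": 2, "Al": 300},
    "Ripolin batch B": {"Fe": 140, "Pb": 10, "Cd": 3, "Al": 280},
    "Artists' tube white": {"Fe": 60, "Pb": 900, "Cd": 1, "Al": 80},
}

# Impurity profile measured from a paint sample on a painting (also invented).
unknown = {"Fe": 130, "Pb": 12, "Cd": 2, "Al": 290}

def distance(a, b):
    """Euclidean distance between two impurity profiles."""
    elements = sorted(set(a) | set(b))
    return math.sqrt(sum((a.get(e, 0) - b.get(e, 0)) ** 2 for e in elements))

for name, profile in references.items():
    print(f"{name}: distance {distance(unknown, profile):.1f}")

best = min(references, key=lambda name: distance(unknown, references[name]))
print(f"Closest match: {best}")
```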

Yet until now, it was difficult to differentiate the chemical components of the paint pigments from the chemical components in the binders, fillers, other additives and contaminants that were mixed in with the pigments or layered on top of them. Only the nanoprobe at the APS/CNM can distinguish that level of detail: elemental composition and nanoscale distribution of elements within individual submicrometric pigment particles.

“The nanoprobe at the APS and CNM allowed unprecedented visualization of information about chemical composition within a single grain of paint pigment, significantly reducing doubt that Picasso used common house paint in some of his most famous works,” said Rose, co-lead author on the result publication titled “High-Resolution Fluorescence Mapping of Impurities in the Historical Zinc Oxide Pigments: Hard X-ray Nanoprobe Applications to the Paints of Pablo Picasso.”

The nanoprobe’s high spatial resolution and micro-focusing abilities gave it the unique ability to identify individual chemical elements and distinguish between the size of paint particles crushed by hand in artists’ studios and those crushed even smaller by manufacturing equipment. The nanoprobe peered deeper than previous similar paint studies limited to a one-micrometer viewing resolution. The nanoprobe gave scientists an unprecedented look at 30-nanometer-wide particles of paint and impurities from the paint manufacturing process. For comparison, a typical sheet of copier paper is 100,000 nanometers thick.

Using the nanoprobe, scientists were able to determine that Picasso used enamel paint to create The Red Armchair (1931), which is on display at the Art Institute of Chicago. They were also able to determine the paint brand and the manufacturing region where the paint originated.

X-ray analysis of white paints produced under the Ripolin brand and used in artists’ traditional tube paints revealed that both contained nearly contaminant-free zinc oxide pigment. However, artists’ tube paints contained more fillers of other white-colored pigments than did the Ripolin, which was mostly pure zinc oxide.

Casadio views this type of chemical characterization of paints as having a much wider application than just the study of Picasso’s paintings. By studying the chemical composition of art materials, she said, historians can learn about trade movements in ancient times, better determine the time period in which a piece was created, and even learn about the artists themselves through their choice of materials.

Perhaps not so coincidentally, the Art Institute of Chicago is celebrating the 100 year relationship between Picasso and Chicago, excerpted from their Jan. 14, 2013 news release,

THE ART INSTITUTE HONORS 100-YEAR RELATIONSHIP BETWEEN PICASSO AND CHICAGO WITH LANDMARK MUSEUM–WIDE CELEBRATION

First Large-Scale Picasso Exhibition Presented by the Art Institute in 30 Years Commemorates Centennial Anniversary of the Armory Show

Picasso and Chicago on View Exclusively at the Art Institute February 20–May 12, 2013

This winter, the Art Institute of Chicago celebrates the unique relationship between Chicago and one of the preeminent artists of the 20th century—Pablo Picasso—with special presentations, singular paintings on loan from the Philadelphia Museum of Art, and programs throughout the museum befitting the artist’s unparalleled range and influence. The centerpiece of this celebration is the major exhibition Picasso and Chicago, on view from February 20 through May 12, 2013 in the Art Institute’s Regenstein Hall, which features more than 250 works selected from the museum’s own exceptional holdings and from private collections throughout Chicago. Representing Picasso’s innovations in nearly every media—paintings, sculpture, prints, drawings, and ceramics—the works not only tell the story of Picasso’s artistic development but also the city’s great interest in and support for the artist since the Armory Show of 1913, a signal event in the history of modern art.

The Art Institute of Chicago, Francesca Casadio, and art conservation (specifically in regard to Winslow Homer) were mentioned here in an April 11, 2011 posting.

Computer simulation errors and corrections

In addition to being a news release, this is a really good piece of science writing by Paul Preuss for the Lawrence Berkeley National Laboratory (Berkeley Lab), from the Jan. 3, 2013 Berkeley Lab news release,

Because modern computers have to depict the real world with digital representations of numbers instead of physical analogues, to simulate the continuous passage of time they have to digitize time into small slices. This kind of simulation is essential in disciplines from medical and biological research, to new materials, to fundamental considerations of quantum mechanics, and the fact that it inevitably introduces errors is an ongoing problem for scientists.

Scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have now identified and characterized the source of tenacious errors and come up with a way to separate the realistic aspects of a simulation from the artifacts of the computer method. …

Here’s more detail about the problem and solution,

Simulating how biological molecules move is hardly the only area where computer models of molecular-scale motion are essential. The need to use computers to test theories and model experiments that can’t be done on a lab bench is ubiquitous, and the problems that Sivak and his colleagues encountered weren’t new.

“A simulation of a physical process on a computer cannot use the exact, continuous equations of motion; the calculations must use approximations over discrete intervals of time,” says Sivak. “It’s well known that standard algorithms that use discrete time steps don’t conserve energy exactly in these calculations.”

One workhorse method for modeling molecular systems is Langevin dynamics, based on equations first developed by the French physicist Paul Langevin over a century ago to model Brownian motion. Brownian motion is the random movement of particles in a fluid (originally pollen grains on water) as they collide with the fluid’s molecules – particle paths resembling a “drunkard’s walk,” which Albert Einstein had used just a few years earlier to establish the reality of atoms and molecules. Instead of impractical-to-calculate velocity, momentum, and acceleration for every molecule in the fluid, Langevin’s method substituted an effective friction to damp the motion of the particle, plus a series of random jolts.
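
For readers who like to see what Langevin's recipe looks like in code, here is a minimal sketch: one particle in a one-dimensional harmonic well, advanced in discrete time steps with a friction term plus random kicks standing in for the molecular collisions. It's an illustration of the general method with arbitrary parameters, not the simulation code the Berkeley Lab team used,

```python
import math
import random

# Minimal 1D Langevin dynamics: a particle in a harmonic well, with friction
# and random thermal kicks replacing explicit collisions with fluid molecules.
# All units and parameter values are arbitrary and purely illustrative.
k, mass, gamma, kT, dt = 1.0, 1.0, 0.5, 1.0, 0.01

x, v = 1.0, 0.0
kinetic_sum = 0.0
n_steps = 100_000
for _ in range(n_steps):
    force = -k * x                                   # deterministic force from the well
    jolt = math.sqrt(2 * gamma * kT * dt / mass) * random.gauss(0.0, 1.0)
    v += (force / mass - gamma * v) * dt + jolt      # friction + random kick
    x += v * dt
    kinetic_sum += 0.5 * mass * v * v

# Equipartition check: the average kinetic energy should come out near kT/2.
print(f"average kinetic energy ~ {kinetic_sum / n_steps:.3f} (target {kT / 2:.3f})")
```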

When Sivak and his colleagues used Langevin dynamics to model the behavior of molecular machines, they saw significant differences between what their exact theories predicted and what their simulations produced. They tried to come up with a physical picture of what it would take to produce these wrong answers.

“It was as if extra work were being done to push our molecules around,” Sivak says. “In the real world, this would be a driven physical process, but it existed only in the simulation, so we called it ‘shadow work.’ It took exactly the form of a nonequilibrium driving force.”

They first tested this insight with “toy” models having only a single degree of freedom, and found that when they ignored the shadow work, the calculations were systematically biased. But when they accounted for the shadow work, accurate calculations could be recovered.

“Next we looked at systems with hundreds or thousands of simple molecules,” says Sivak. Using models of water molecules in a box, they simulated the state of the system over time, starting from a given thermal energy but with no “pushing” from outside. “We wanted to know how far the water simulation would be pushed by the shadow work alone.”

The result confirmed that even in the absence of an explicit driving force, the finite-time-step Langevin dynamics simulation acted by itself as a driving nonequilibrium process. Systematic errors resulted from failing to separate this shadow work from the actual “protocol work” that they explicitly modeled in their simulations. For the first time, Sivak and his colleagues were able to quantify the magnitude of the deviations in various test systems.

Such simulation errors can be reduced in several ways, for example by dividing the evolution of the system into ever-finer time steps, because the shadow work is larger when the discrete time steps are larger. But doing so increases the computational expense.
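
Here's a hedged sketch of what that bookkeeping can look like. It uses a one-dimensional harmonic oscillator and a symmetric kick-drift-thermostat-drift-kick splitting in which heat is exchanged only during the stochastic thermostat substep; since nothing is driving the system, any other change in total energy is counted as shadow work. The splitting, system and parameters are my own choices for illustration, not the protocols used in the paper,

```python
import math
import random

# Toy accounting of "shadow work" in a discrete-time Langevin integrator.
# Heat is the kinetic-energy change during the stochastic thermostat substep;
# with no external driving, the remaining total-energy change per step is
# attributed to shadow work. System and parameters are purely illustrative.
k, mass, gamma, kT = 1.0, 1.0, 1.0, 1.0

def energy(x, v):
    return 0.5 * mass * v * v + 0.5 * k * x * x

def average_shadow_work_per_step(dt, n_steps=200_000, seed=0):
    rng = random.Random(seed)
    c1 = math.exp(-gamma * dt)                    # Ornstein-Uhlenbeck decay
    c2 = math.sqrt(kT / mass * (1.0 - c1 * c1))   # matching noise amplitude
    x, v = 1.0, 0.0
    total_shadow = 0.0
    for _ in range(n_steps):
        e_start = energy(x, v)
        v += (-k * x / mass) * (dt / 2)           # half kick (deterministic)
        x += v * (dt / 2)                         # half drift (deterministic)
        ke_before = 0.5 * mass * v * v
        v = c1 * v + c2 * rng.gauss(0.0, 1.0)     # thermostat substep
        heat = 0.5 * mass * v * v - ke_before     # the only heat exchange
        x += v * (dt / 2)                         # half drift (deterministic)
        v += (-k * x / mass) * (dt / 2)           # half kick (deterministic)
        total_shadow += energy(x, v) - e_start - heat
    return total_shadow / n_steps

# Compare the per-step shadow work for a few discrete time steps.
for dt in (0.05, 0.2, 0.5):
    print(f"dt = {dt:>4}: shadow work per step ~ {average_shadow_work_per_step(dt):+.2e}")
```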

The better approach is to use a correction factor that isolates the shadow work from the physically meaningful work, says Sivak. “We can apply results from our calculation in a meaningful way to characterize the error and correct for it, separating the physically realistic aspects of the simulation from the artifacts of the computer method.”

You can find out more in the Berkeley Lab news release, or (H/T)  in the Jan. 3, 2013 news item on Nanowerk, or you can read the paper,

“Using nonequilibrium fluctuation theorems to understand and correct errors in equilibrium and nonequilibrium discrete Langevin dynamics simulations,” by David A. Sivak, John D. Chodera, and Gavin E. Crooks, will appear in Physical Review X (http://prx.aps.org/) and is now available as an arXiv preprint at http://arxiv.org/abs/1107.2967.

This casts a new light on the SPAUN (Semantic Pointer Architecture Unified Network) project from Chris Eliasmith’s team at the University of Waterloo, which announced the most successful attempt yet (my Nov. 29, 2012 posting) to simulate a brain using virtual neurons. Given that Eliasmith’s team was probably not aware of this work from the Berkeley Lab, one imagines that once it has been integrated, SPAUN will be capable of even more extraordinary feats.

Space-time crystals and everlasting clocks

Apparently, a space-time crystal could be useful for such things as studying the many-body problem in physics. Since I hadn’t realized the many-body problem existed and have no idea how this might affect me or anyone else, I will have to take the utility of a space-time crystal on trust. As for the possibility of an everlasting clock, how will I ever know the truth, since I’m not everlasting?

The Sept. 24, 2012 news item on Nanowerk about a new development makes the space-time crystal sound quite fascinating,

Imagine a clock that will keep perfect time forever, even after the heat-death of the universe. This is the “wow” factor behind a device known as a “space-time crystal,” a four-dimensional crystal that has periodic structure in time as well as space. However, there are also practical and important scientific reasons for constructing a space-time crystal. With such a 4D crystal, scientists would have a new and more effective means by which to study how complex physical properties and behaviors emerge from the collective interactions of large numbers of individual particles, the so-called many-body problem of physics. A space-time crystal could also be used to study phenomena in the quantum world, such as entanglement, in which an action on one particle impacts another particle even if the two particles are separated by vast distances. [emphasis mine]

While I’m most interested in the possibility of studying entanglement, it seems to me the scientists are hedging, since the verb ‘could’ is used here where they used ‘would’ earlier for studying the many-body problem.

The Sept. 24, 2012 news release by Lynn Yarris for the Lawrence Berkeley National Laboratory  (Berkeley Lab), which originated the news item, provides detail on the latest space-time crystal development,

A space-time crystal, however, has only existed as a concept in the minds of theoretical scientists with no serious idea as to how to actually build one – until now. An international team of scientists led by researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) has proposed the experimental design of a space-time crystal based on an electric-field ion trap and the Coulomb repulsion of particles that carry the same electrical charge.

“The electric field of the ion trap holds charged particles in place and Coulomb repulsion causes them to spontaneously form a spatial ring crystal,” says Xiang Zhang, a faculty scientist  with Berkeley Lab’s Materials Sciences Division who led this research. “Under the application of a weak static magnetic field, this ring-shaped ion crystal will begin a rotation that will never stop. The persistent rotation of trapped ions produces temporal order, leading to the formation of a space-time crystal at the lowest quantum energy state.”

Because the space-time crystal is already at its lowest quantum energy state, its temporal order – or timekeeping – will theoretically persist even after the rest of our universe reaches entropy, thermodynamic equilibrium or “heat-death.”

This new development builds on some work done earlier this year at the Massachusetts Institute of Technology (MIT), from the Yarris news release,

The concept of a crystal that has discrete order in time was proposed earlier this year by Frank Wilczek, the Nobel-prize winning physicist at the Massachusetts Institute of Technology. While Wilczek mathematically proved that a time crystal can exist, how to physically realize such a time crystal was unclear. Zhang and his group, who have been working on issues with temporal order in a different system since September 2011, have come up with an experimental design to build a crystal that is discrete both in space and time – a space-time crystal.

Traditional crystals are 3D solid structures made up of atoms or molecules bonded together in an orderly and repeating pattern. Common examples are ice, salt and snowflakes. Crystallization takes place when heat is removed from a molecular system until it reaches its lower energy state. At a certain point of lower energy, continuous spatial symmetry breaks down and the crystal assumes discrete symmetry, meaning that instead of the structure being the same in all directions, it is the same in only a few directions.

“Great progress has been made over the last few decades in exploring the exciting physics of low-dimensional crystalline materials such as two-dimensional graphene, one-dimensional nanotubes, and zero-dimensional buckyballs,” says Tongcang Li, lead author of the PRL (Physical Review Letters) paper and a post-doc in Zhang’s research group. “The idea of creating a crystal with dimensions higher than that of conventional 3D crystals is an important conceptual breakthrough in physics and it is very exciting for us to be the first to devise a way to realize a space-time crystal.”

Just as a 3D crystal is configured at the lowest quantum energy state when continuous spatial symmetry is broken into discrete symmetry, so too is symmetry breaking expected to configure the temporal component of the space-time crystal. Under the scheme devised by Zhang and Li and their colleagues, a spatial ring of trapped ions in persistent rotation will periodically reproduce itself in time, forming a temporal analog of an ordinary spatial crystal. With a periodic structure in both space and time, the result is a space-time crystal.
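
To see why a persistently rotating ring of identical ions is "periodic in time," here's a purely classical cartoon in Python. It rotates N equally spaced point ions at a constant angular velocity and checks when the configuration first coincides with the starting one; the quantum ground-state physics that makes the Berkeley Lab proposal remarkable is, of course, not captured by this toy,

```python
import math

# Classical cartoon of a rotating ion ring: N identical ions, equally spaced
# on a circle, all rotating at constant angular velocity omega. Because the
# ions are indistinguishable, the configuration repeats every time the ring
# advances by one ion spacing, giving a temporal period of (2*pi/N)/omega,
# N times shorter than a full rotation. All numbers are arbitrary illustrations.
N, omega = 8, 1.0
spacing = 2 * math.pi / N

def angles(t):
    """Ion angles at time t, reduced to [0, 2*pi)."""
    return [(i * spacing + omega * t) % (2 * math.pi) for i in range(N)]

def same_configuration(a, b, tol=1e-9):
    """True if the two angle lists describe the same set of points on the circle."""
    def circ_diff(x, y):
        return abs(((x - y + math.pi) % (2 * math.pi)) - math.pi)
    return all(any(circ_diff(x, y) < tol for y in b) for x in a)

period = spacing / omega
print("same configuration after one ion spacing? ",
      same_configuration(angles(0.0), angles(period)))        # True
print("same configuration after half an ion spacing?",
      same_configuration(angles(0.0), angles(period / 2)))    # False
print(f"temporal period = {period:.3f}; full rotation = {2 * math.pi / omega:.3f}")
```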

Here’s an image created by the team at the Berkeley Lab to represent their work on the space-time crystal,

Imagine a clock that will keep perfect time forever or a device that opens new dimensions into quantum phenomena such as emergence and entanglement. (courtesy of Xiang Zhang group[?] at Berkeley Lab)

For anyone who’s interested in this work, I suggest reading either the news item on Nanowerk or the Berkeley Lab news release in full. I will leave you with Natalie Cole and Everlasting Love,

Pacific Northwest National Laboratory gets artistic

There are some very pretty pictures from the Pacific Northwest National Laboratory (PNNL) that appear in a new calendar featuring science as art. From the Nov. 1, 2011 news item on Nanowerk,

A dozen stunning science images, representing cell structures, microorganisms, polymer films, degraded metals and more, have been selected by the voting public as winners in Pacific Northwest National Laboratory’s Science as Art contest.

The photos are representative of research projects at the Department of Energy laboratory and will appear in a 2012 “Discovery in Action” calendar (available for high- and low-resolution download). Winning images will also be used in laboratory websites, printed materials, building lobbies and conference rooms.

You can find out more about the competition in the news item and/or you can also view the images on the PNNL’s Flickr site. I downloaded a couple samples from the Flickr site,

Electro-Polymerization of Pyrrole

Electro-Polymerization of Pyrrole
Organizations like the U.S. Environmental Protection Agency rely on field sensors that can detect traces of anionic water-soluble pollutants, like arsenate, chromate, perchlorate and pertechnetate. At PNNL, scientists are experimenting with modified polymer films that can recognize—and therefore be used to detect—pollutants. These polymers could potentially be incorporated into devices that would make detection rapid and economical. Shown here is a microscopic image of a polymer film generated through electro-polymerization of pyrrole from a water solution. PNNL researchers Dev Chatterjee, Thao Bui and Sam Bryan are working on this project.

and here’s the second one,

Designing Nano-Potteries: CdS Hollow Spheres

Designing Nano-Potteries: CdS Hollow Spheres
Imaging bio-molecules and cells over extended periods of time is critical to understanding cellular processes and the causes of pathogenic diseases. Cadmium sulfide quantum dots are widely used for highly sensitive cellular imaging. The extraordinary photostability of these probes is highly attractive for the real-time tracking of bio-molecules and cells over time. PNNL scientists are exploring quantum dots with varying morphologies and trying to understand the variation of their spectroscopy associated with the morphological changes. The goal is to design probes that can be used to monitor cellular processes over extended periods. PNNL researcher Dev Chatterjee provided the image. Others who contribute to the project include Matthew Edwards, Paul MacFarlan, Samuel Bryan and Jason Hoki. Image colored by PNNL graphic designer Jeff London.

I quite enjoyed the images.

Nano happenings in Alberta (Canada); smart windows, again; reading postage stamps

The first Nanotechnology Systems Diploma programme in Canada is going to be offered through the Northern Alberta Institute of Technology (NAIT) in September 2010. Alberta, as I’ve noted previously, is home to Canada’s National Institute of Nanotechnology, and its provincial government is providing substantive support to an emerging nanotechnology sector. From the news item on Azonano,

The Canadian nanotech sector is just beginning to emerge, and Alberta is a major player. The Alberta government unveiled a nanotechnology strategy in 2007, outlining an investment of funds and infrastructure aimed at capturing a $20 billion share of the worldwide nanotechnology market by 2020. Alberta now boasts a growing nanotech enterprise sector of more than 40 companies, with many located in the Edmonton region.

Meanwhile, the Alberta Centre for Advanced Micro Nano Technology Products (ACAMP) is holding a seminar for Alberta’s conventional energy sector about nano and micro technology products. From the news item on Nanowerk,

Today at ACAMP’s latest seminar, Alberta’s conventional energy industry learned how nanotechnology, micro-systems and micro-fluidics can play a powerful role in enhancing operational performance, reducing costs and promoting efficient extraction of oil and gas resources, while opening new markets for Alberta companies worldwide.

“Micro and Nano technologies for conventional energy applications are extremely important in Alberta,” said Ken Brizel, CEO of ACAMP, “enhancing operational performance allowing for efficient extraction of oil and gas resources. Innovative new products are being developed and used locally enabling Alberta companies to compete worldwide.”

As for other parts of the Canadian nanotechnology scene, such as the proposed new legislation by NDP (New Democratic Party) Member of Parliament Peter Julian, I have sent his office some questions for an email interview and will hopefully be able to publish his responses here. (The proposed legislation was mentioned in yesterday’s posting, March 10, 2010.)

As I speed through this posting, I will take a moment for one of my pet interests, windows. Kit Eaton at Fast Company recently wrote a piece about a Dutch company that’s created ‘smart windows’ (from the article),

Whereas every home has windows. And this fact has led Dutch company Peer+ to create Smart Energy Glass panels that generate current from the sun while also acting like those old-fashioned devices that let you see right through a wall. But that’s not all. Similar to the other up-and-coming LCD glass treatments that let you blank a window at the flick of a switch (removing the need for curtains, blinds or shutters), these smart windows also have selectable darkness. Darkest is the highest privacy mode, and thanks to a trick of the optics concerned, also leads to the most efficient power generation from solar input. And you can even choose between a range of shades for the glass and also incorporate logos or text into the panels, which will appeal to countless businesses.

There are some images of these windows embedded in the Fast Company article. As Eaton notes (and I heartily concur), adoption of technologies of this type will occur more readily as the products become more attractive or more stylish.

Still with the windows, the US Department of Energy has made an additional investment in SAGE Electrochromics with a $72M conditional loan guarantee. From the news item on Nanowerk,

SAGE will transform the way buildings use energy by mass producing a revolutionary new kind of dynamic glass that can change from a clear state to a tinted state at the push of a button. Windows using SageGlass® technology control the amount of sunlight that enters a building, significantly reducing energy consumed for air conditioning, heating and lighting. The company will tap the DOE funding to build a high-volume manufacturing plant next to its headquarters in Faribault, Minn., ramping up production for commercial, institutional and residential applications.

I notice these windows do not include a self-cleaning component. Ah well.

Getting back to the Dutch for my final bit today, a postage stamp you can read like a book or use for a letter. From the William Bostwick article on Fast Company,

“Hey, did you read the stamp I sent you?” There’s no need for a letter when the stamp you use is a book. Rotterdam designer Richard Hutten has designed a new stamp for Royal TNT Post, in honor of this year’s Dutch Book Week, that doubles as a tiny tome. The 3×4 centimeter stamp opens up into an 8-page, 500-word story by Joost Zwagerman.

That’s it for today as I get ready for the PCAST (President’s Council of Advisors on Science and Technology) webcast.

New $15M electron microscope lands at McMaster University, Ontario

McMaster University (at its Canadian Centre for Electron Microscopy) is the first post-secondary institution in the world to get a new electron microscope that offers scientists the ability to see atoms in sharper detail than is possible with other microscopes. The Titan 80-300 Cubed microscope was shipped from the Netherlands, received at the university in June 2008, and is available to researchers across the country. There are currently plans for projects such as developing more efficient batteries, looking at how air pollution particles damage lungs, and creating higher density memory storage. For more details you can see an Oct. 15, 2008 article in the Globe & Mail here (note: they put their articles behind a paywall shortly after they make them public) or you can try here (note: scroll down). I had no luck finding information on the McMaster site or on the Canadian Centre for Electron Microscopy site. Finally, the most powerful electron microscope is owned by the US Department of Energy but, unlike the one at McMaster, it is not commercially available.