Tag Archives: University of Maryland

Ethereal optical cables

It’s a gobsmacking idea but here’s what scientist Howard Milchberg wants to accomplish (from a July 22, 2014 University of Maryland (UMD) news release (also on EurekAlert) [written by Brian Doctrow]),

Imagine being able to instantaneously run an optical cable or fiber to any point on earth, or even into space. That’s what Howard Milchberg, professor of physics and electrical and computer engineering at the University of Maryland, wants to do.

In a paper published today in the July 2014 issue of the journal Optica, Milchberg and his lab report using an “air waveguide” to enhance light signals collected from distant sources. These air waveguides could have many applications, including long-range laser communications, detecting pollution in the atmosphere, making high-resolution topographic maps, and laser weapons.

Here’s an image illustrating the first step to achieving ‘ethereal cables’, an air waveguide,

Caption: This is an illustration of an air waveguide. The filaments leave ‘holes’ in the air (red rods) that reflect light. Light (arrows) passing between these holes stays focused and intense.
Credit: Howard Milchberg

Here’s more about precursor research into creating air waveguides, from the news release,

Milchberg showed previously that these filaments heat up the air as they pass through, causing the air to expand and leaving behind a “hole” of low-density air in their wake. This hole has a lower refractive index than the air around it. While the filament itself is very short lived (less than one-trillionth of a second [less than a picosecond]), it takes a billion times longer for the hole to appear. It’s “like getting a slap to your face and then waiting, and then your face moves,” according to Milchberg, who also has an appointment in the Institute for Research in Electronics and Applied Physics at UMD.

On Feb. 26, 2014, Milchberg and his lab reported in the journal Physical Review X that if four filaments were fired in a square arrangement, the resulting holes formed the low-density wall needed for a waveguide. When a more powerful beam was fired between these holes, the second beam lost hardly any energy when tested over a range of about a meter. Importantly, the “pipe” produced by the filaments lasted for a few milliseconds, a million times longer than the laser pulse itself. For many laser applications, Milchberg says, “milliseconds [thousandths of a second] is infinity.”

The latest work brings Milchberg a step closer to using air waveguides as cables for lasers (from the news release),

Because light loses intensity with distance, the range over which such tasks can be done is limited. Even lasers, which produce highly directed beams, lose focus due to their natural spreading, or worse, due to interactions with gases in the air. Fiber-optic cables can trap light beams and guide them like a pipe, preventing loss of intensity or focus.

Typical fibers consist of a transparent glass core surrounded by a cladding material with a lower index of refraction. When light tries to leave the core, it gets reflected back inward. But solid optical fibers can only handle so much power, and they need physical support that may not be available where the cables need to go, such as the upper atmosphere. Now, Milchberg’s team has found a way to make air behave like an optical fiber, guiding light beams over long distances without loss of power.
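For the curious, the guiding condition described above is just Snell’s law: light striking the core/cladding boundary at a sufficiently glancing angle is totally internally reflected. Here’s a quick sketch of the arithmetic (the index values are illustrative assumptions of mine, not figures from the paper),

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Incidence angle (from the boundary normal) beyond which light is
    totally internally reflected at the core/cladding interface."""
    return math.degrees(math.asin(n_clad / n_core))

# Typical silica fiber (illustrative indices, not from the paper):
print(critical_angle_deg(1.450, 1.444))        # ~85 degrees

# Air waveguide: the density 'hole' lowers the index only slightly, so
# guiding works just for near-grazing rays (index values assumed):
print(critical_angle_deg(1.000293, 1.000280))  # ~89.7 degrees
```

The tiny index contrast in air means only near-grazing rays are guided, which is consistent with the waveguide being useful for keeping an already well-collimated beam or signal together.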

Milchberg’s air waveguides consist of a “wall” of low-density air surrounding a core of higher density air. The wall has a lower refractive index than the core—just like an optical fiber. In the Optica paper, Milchberg, physics graduate students Eric Rosenthal and Nihal Jhajj, and associate research scientist Jared Wahlstrand, broke down the air with a laser to create a spark. An air waveguide conducted light from the spark to a detector about a meter away. The researchers collected a strong enough signal to analyze the chemical composition of the air that produced the spark.

The signal was 1.5 times stronger than a signal obtained without the waveguide. That may not seem like much, but over distances that are 100 times longer, where an unguided signal would be severely weakened, the signal enhancement could be much greater.
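A crude way to see why the enhancement should grow with distance: an unguided signal from a point source spreads over a sphere, so the fraction reaching a collector falls as 1/r², while an idealized lossless guide keeps that fraction constant. A toy calculation (my simple model, not the researchers’ analysis),

```python
import math

def unguided_fraction(r_m: float, aperture_radius_m: float) -> float:
    """Fraction of light from an isotropic point source that lands on a
    circular collector of radius aperture_radius_m at distance r_m."""
    return (math.pi * aperture_radius_m ** 2) / (4 * math.pi * r_m ** 2)

# An idealized lossless guide keeps the collected fraction constant, so
# the guided/unguided advantage grows as r**2:
ap = 0.05  # a 5 cm collection optic -- illustrative
for r in (1, 10, 100):
    print(r, unguided_fraction(1, ap) / unguided_fraction(r, ap))
```

In this idealization, going from 1 meter to 100 meters turns a 1.5x advantage into something orders of magnitude larger, which is the point the researchers are making.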

Milchberg creates his air waveguides using very short, very powerful laser pulses. A sufficiently powerful laser pulse in the air collapses into a narrow beam, called a filament. This happens because the laser light increases the refractive index of the air in the center of the beam, as if the pulse is carrying its own lens with it.
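For a sense of scale, the textbook estimate for the power at which this self-focusing collapse sets in is P_cr ≈ 3.77·λ²/(8π·n₀·n₂). Using an order-of-magnitude value for air’s nonlinear index (an assumption of mine, not a figure from the paper), the threshold comes out at a few gigawatts,

```python
import math

# Textbook estimate of the critical power for self-focusing collapse,
# P_cr = 3.77 * lam**2 / (8 * pi * n0 * n2).  The nonlinear index n2 of
# air below is an order-of-magnitude assumption, not the paper's value.
lam = 800e-9   # Ti:sapphire laser wavelength, m
n0 = 1.000293  # linear refractive index of air
n2 = 3e-23     # nonlinear refractive index of air, m^2/W (approximate)

P_cr = 3.77 * lam ** 2 / (8 * math.pi * n0 * n2)
print(f"critical power ~ {P_cr / 1e9:.1f} GW")  # a few gigawatts
```

Gigawatt peak powers sound enormous but are routine for femtosecond pulses, since the energy per pulse is still only millijoules.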

Because the waveguides are so long-lived, Milchberg believes that a single waveguide could be used to send out a laser and collect a signal. “It’s like you could just take a physical optical fiber and unreel it at the speed of light, put it next to this thing that you want to measure remotely, and then have the signal come all the way back to where you are,” says Milchberg.

First, though, he needs to show that these waveguides can be used over much longer distances—50 meters at least. If that works, it opens up a world of possibilities. Air waveguides could be used to conduct chemical analyses of places like the upper atmosphere or nuclear reactors, where it’s difficult to get instruments close to what’s being studied. The waveguides could also be used for LIDAR, a variation on radar that uses laser light instead of radio waves to make high-resolution topographic maps.

Here are links to and citations for both papers from Milchberg’s research team,

Demonstration of Long-Lived High-Power Optical Waveguides in Air by N. Jhajj, E. W. Rosenthal, R. Birnbaum, J. K. Wahlstrand, and H. M. Milchberg. Physical Review X: http://dx.doi.org/10.1103/PhysRevX.4.011027 Published Feb. 26, 2014

Collection of remote optical signals by air waveguides by E. W. Rosenthal, N. Jhajj, J. K. Wahlstrand, and H. M. Milchberg. Optica, Vol. 1, Issue 1, pp. 5-9 (July 2014) http://dx.doi.org/10.1364/OPTICA.1.000005

Both papers are open access.

Super-black nanotechnology, space exploration, and carbon nanotubes grown by atomic layer deposition (ALD)

Super-black in this context means that very little light is reflected by the carbon nanotubes that a team at the US National Aeronautics and Space Administration (NASA) have produced. From a July 17, 2013 NASA news release (also here on EurekAlert),

A NASA engineer has achieved yet another milestone in his quest to advance an emerging super-black nanotechnology that promises to make spacecraft instruments more sensitive without enlarging their size.

A team led by John Hagopian, an optics engineer at NASA’s Goddard Space Flight Center in Greenbelt, Md., has demonstrated that it can grow a uniform layer of carbon nanotubes through the use of another emerging technology called atomic layer deposition or ALD. The marriage of the two technologies now means that NASA can grow nanotubes on three-dimensional components, such as complex baffles and tubes commonly used in optical instruments.

“The significance of this is that we have new tools that can make NASA instruments more sensitive without making our telescopes bigger and bigger,” Hagopian said. “This demonstrates the power of nanoscale technology, which is particularly applicable to a new class of less-expensive tiny satellites called Cubesats that NASA is developing to reduce the cost of space missions.”

(It’s the first time I’ve seen atomic layer deposition (ALD) described as an emerging technology; I’ve always thought of it as well established.) Here’s a 2010 NASA video, which provides a good explanation of this team’s work,

With the basic problem being less data due to light reflection from the instruments used to make the observations in space, the researchers determined that ALD might provide carbon nanotubes suitable for super-black instrumentation for space exploration. From the NASA news release,

To determine the viability of using ALD to create the catalyst layer, while Dwivedi [NASA Goddard co-investigator Vivek Dwivedi, University of Maryland] was building his new ALD reactor, Hagopian engaged through the Science Exchange the services of the Melbourne Centre for Nanofabrication (MCN), Australia’s largest nanofabrication research center. The Science Exchange is an online community marketplace where scientific service providers can offer their services. The NASA team delivered a number of components, including an intricately shaped occulter used in a new NASA-developed instrument for observing planets around other stars.

Through this collaboration, the Australian team fine-tuned the recipe for laying down the catalyst layer — in other words, the precise instructions detailing the type of precursor gas, the reactor temperature and pressure needed to deposit a uniform foundation. “The iron films that we deposited initially were not as uniform as other coatings we have worked with, so we needed a methodical development process to achieve the outcomes that NASA needed for the next step,” said Lachlan Hyde, MCN’s expert in ALD.

The Australian team succeeded, Hagopian said. “We have successfully grown carbon nanotubes on the samples we provided to MCN and they demonstrate properties very similar to those we’ve grown using other techniques for applying the catalyst layer. This has really opened up the possibilities for us. Our goal of ultimately applying a carbon-nanotube coating to complex instrument parts is nearly realized.”

For anyone who’d like a little more information about the Science Exchange, I posted about this scientific marketplace both on Sept. 2, 2011 after it was launched in August of that year and later on Dec. 19, 2011 in a follow-up about a specific nano project.

Getting back to super-black nanotechnology, here’s what the NASA team produced, from the news release,

During the research, Hagopian tuned the nano-based super-black material, making it ideal for this application, absorbing on average more than 99 percent of the ultraviolet, visible, infrared and far-infrared light that strikes it — a never-before-achieved milestone that now promises to open new frontiers in scientific discovery. The material consists of a thin coating of multi-walled carbon nanotubes about 10,000 times thinner than a strand of human hair.

Once a laboratory novelty grown only on silicon, the NASA team now grows these forests of vertical carbon tubes on commonly used spacecraft materials, such as titanium, copper and stainless steel. Tiny gaps between the tubes collect and trap light, while the carbon absorbs the photons, preventing them from reflecting off surfaces. Because only a small fraction of light reflects off the coating, the human eye and sensitive detectors see the material as black.

Before growing this forest of nanotubes on instrument parts, however, materials scientists must first deposit a highly uniform foundation or catalyst layer of iron oxide that supports the nanotube growth. For ALD, technicians do this by placing a component or some other substrate material inside a reactor chamber and sequentially pulsing different types of gases to create an ultra-thin film whose layers are literally no thicker than a single atom. Once applied, scientists then are ready to actually grow the carbon nanotubes. They place the component in another oven and heat the part to about 1,382 F (750 C). While it heats, the component is bathed in a carbon-containing feedstock gas.
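The sequencing logic of an ALD recipe is simple to sketch even though the chemistry is not: pulse a precursor, purge, pulse an oxidizer, purge, and repeat until the self-limiting growth adds up to the target thickness. The precursor names, step times, and growth-per-cycle figure below are illustrative guesses of mine, not NASA’s or MCN’s actual recipe,

```python
# Sketch of the pulse/purge sequencing behind an ALD recipe.  The precursor
# names, step times and growth-per-cycle figure are illustrative
# assumptions, not NASA's or MCN's actual recipe.
GROWTH_PER_CYCLE_NM = 0.012  # self-limiting: roughly one partial monolayer

def ald_recipe(target_nm: float):
    cycles = round(target_nm / GROWTH_PER_CYCLE_NM)
    steps = []
    for _ in range(cycles):
        steps += [
            ("pulse", "iron precursor", 0.5),   # seconds
            ("purge", "N2", 5.0),
            ("pulse", "oxidizer (e.g. H2O)", 0.5),
            ("purge", "N2", 5.0),
        ]
    return cycles, steps

cycles, steps = ald_recipe(2.0)  # ~2 nm catalyst layer
print(cycles, "cycles,", len(steps), "steps")
```

The self-limiting chemistry is the whole appeal: because each pulse can only react with the surface sites exposed by the previous one, the film thickness is set by counting cycles rather than by timing a continuous deposition.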

Congratulations to the team; I gather they’ve been working on this light absorption project for quite a while.

Wooden batteries in Maryland (US)

There seems to be a gusher of interest in making wooden batteries. Last year, there was news from a joint Polish-Swedish research team (my Aug. 14, 2012 posting) who’d combined lignin with a conductive polymer (polypyrrole) to create a battery cathode. Today, June 19, 2013, Nanowerk featured a news item about a team at the University of Maryland (US) who are also using wood to make battery components (Note: A link has been removed),

A sliver of wood coated with tin could make a tiny, long-lasting, efficient and environmentally friendly battery (“Tin Anode for Sodium-Ion Batteries Using Natural Wood Fiber as a Mechanical Buffer and Electrolyte Reservoir”).

But don’t try it at home yet: the components in the battery tested by scientists at the University of Maryland are a thousand times thinner than a piece of paper. Using sodium instead of lithium, as many rechargeable batteries do, makes the battery environmentally benign. Sodium doesn’t store energy as efficiently as lithium, so you won’t see this battery in your cell phone. Instead, its low cost and common materials would make it ideal for storing huge amounts of energy at once, such as solar energy at a power plant.

The June 19, 2013 University of Maryland news release, which originated the news item, explains why this work with wood is so exciting (Note: Links have been removed),

Existing batteries are often created on stiff bases, which are too brittle to withstand the swelling and shrinking that happens as electrons are stored in and used up from the battery. Liangbing Hu, Teng Li and their team found that wood fibers are supple enough to let their sodium-ion battery last more than 400 charging cycles, which puts it among the longest lasting nanobatteries.

“The inspiration behind the idea comes from the trees,” said Hu, an assistant professor of materials science. “Wood fibers that make up a tree once held mineral-rich water, and so are ideal for storing liquid electrolytes, making them not only the base but an active part of the battery.”

Lead author Hongli Zhu and other team members noticed that after charging and discharging the battery hundreds of times, the wood ended up wrinkled but intact. Computer models showed that the wrinkles effectively relax the stress in the battery during charging and recharging, so that the battery can survive many cycles.

Here’s a link to and a citation for the research paper,

Tin Anode for Sodium-Ion Batteries Using Natural Wood Fiber as a Mechanical Buffer and Electrolyte Reservoir by Hongli Zhu, Zheng Jia, Yuchen Chen, Nicholas Weadock, Jiayu Wan, Oeyvind Vaaland, Xiaogang Han, Teng Li, and Liangbing Hu. Nano Lett., Article ASAP DOI: 10.1021/nl400998t Publication Date (Web): May 29, 2013

Copyright © 2013 American Chemical Society

This paper is behind a paywall.

Turn back timeline of ‘nonlinear objects’

A Dec. 6, 2012 news item on Nanowerk announced a ‘time-reversal’ technique being developed at the University of Maryland (Note: I have removed links),

… researchers at the University of Maryland have come up with a sci-fi seeming technology that one day could make them real. Using a “time-reversal” technique, the team has discovered how to transmit power, sound or images to a “nonlinear object” without knowing the object’s exact location or affecting objects around it (“Nonlinear Time-Reversal in a Wave Chaotic System”).

“That’s the magic of time reversal,” says Steven Anlage, a university physics professor involved in the project. “When you reverse the waveform’s direction in space and time, it follows the same path it took coming out and finds its way exactly back to the source.”

The Nov. 29, 2012 University of Maryland news release, which originated the news item, provides some technical information,

The time-reversal process is less like living the last five minutes over and more like playing a record backwards, explains Matthew Frazier, a postdoctoral research fellow in the university’s physics department. When a signal travels through the air, its waveforms scatter before an antenna picks it up. Recording the received signal and transmitting it backwards reverses the scatter and sends it back as a focused beam in space and time.
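The record-reverse-retransmit idea Frazier describes can be demonstrated in a few lines of NumPy. In this toy one-dimensional model (my sketch, not the team’s code), a pulse scatters through a random multipath channel; playing the recording backwards through the same channel piles the energy back into one sharp peak,

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multipath channel: the source's pulse arrives as many scattered echoes.
h = np.zeros(200)
h[rng.choice(200, size=20, replace=False)] = rng.standard_normal(20)

pulse = np.zeros(50)
pulse[25] = 1.0                      # short probe pulse from the source

received = np.convolve(pulse, h)     # the scattered mess at the transceiver

# Time-reverse the recording and send it back through the SAME channel:
refocused = np.convolve(received[::-1], h)

# conv(reversed recording, h) contains the channel's autocorrelation, so
# the energy piles back up into one sharp peak instead of staying smeared.
peak_ratio = refocused.max() / np.abs(received).max()
print(f"focusing gain ~ {peak_ratio:.1f}x")
```

The trick only works because the retransmission passes through the same scattering environment the recording did, which is why the wave “finds its way exactly back to the source.”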

“If you go toward a secure building, they won’t let you take cell phones,” Frazier says. “So instead of checking everyone, they could detect the cell phone and send a lot of energy to jam it.”

What differentiates this research from other time-reversal projects, such as underwater communication, is that it focuses on nonlinear objects such as a cellphone, diode or even a rusty piece of metal. When the altered, nonlinear frequency of nonlinear objects is recorded, time-reversed and retransmitted, it creates a private communication channel, because other objects cannot understand the signal.
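The reason nonlinear objects stand out is that they generate harmonics a linear scatterer can’t. A quick illustration with a diode-like response (the coefficients are illustrative, not measured values),

```python
import numpy as np

fs, f0 = 1000.0, 50.0
t = np.arange(0, 1, 1 / fs)              # 1 s of samples, 50 exact periods
x = np.sin(2 * np.pi * f0 * t)           # clean tone illuminating the object

linear = 0.8 * x                         # a linear scatterer only rescales
nonlinear = x + 0.3 * x**2 + 0.1 * x**3  # diode-like nonlinear response

spec = np.abs(np.fft.rfft(nonlinear)) / len(t)
lin_spec = np.abs(np.fft.rfft(linear)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The nonlinear return carries energy at 2*f0 and 3*f0; the linear one
# does not.  Those harmonics are what get filtered, reversed and resent.
for f in (f0, 2 * f0, 3 * f0):
    print(f, lin_spec[freqs == f][0], spec[freqs == f][0])
```

Filtering out just the harmonic frequencies before time-reversing is what makes the returned signal land only on the nonlinear object, leaving linear scatterers (and eavesdroppers) with nothing they can use.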

“Time reversal has been around for 10 to 20 years but it requires some pretty sophisticated technology to make it work,” Anlage says. …

Not only could this nonlinear characteristic secure a wireless communication line, it could prevent transmitted energy from affecting any object but its target. For example, Frazier says, if scientists find a way to tag tumors with chemicals or nanoparticles that react to microwaves in a nonlinear way, doctors could use the technology to direct destructive heat to the errant cells, much like ultrasound is used to break down kidney stones. But unlike ultrasound, which must be directed to a specific location, doctors would not need to know where the tumors were to remove them. Also, the heat treatment would not affect surrounding cells.

To study time-reversal, the researchers sent a microwave pulse into an enclosed area where waveforms scattered and bounced around inside, as well as off a nonlinear and a linear port. A transceiver then recorded and time-reversed the frequencies the nonlinear port had altered, then broadcast them back into the space. The nonlinear port picked up the time-reversed signal, but the linear port did not.

The paper can be found on arXiv.org,

Nonlinear Time-Reversal in a Wave Chaotic System by Matthew Frazier, Biniyam Taddese, Thomas Antonsen, Steven M. Anlage

(Submitted on 6 Jul 2012 (v1), last revised 26 Jul 2012 (this version, v2))

The last ‘time’ oriented posting (July 14, 2011) on this blog (Splitting light to make events invisible) was about a temporal invisibility cloak.

Nearby Nature GigaBlitz—Summer Solstice 2012—get your science out

The June 20 – 26, 2012 GigaBlitz event is an international citizen science project focused on biodiversity. From the June 13, 2012 news item on physorg.com,

A high-resolution image of a palm tree in Brazil, which under close examination shows bees, wasps and flies feasting on nectars and pollens, was the top jury selection among the images captured during last December’s Nearby Nature GigaBlitz. It’s also an example of what organizers hope participants will produce for the next GigaBlitz, June 20-26 [2012].

Here’s a close up from the Brazilian palm tree image,

Bee close up from Palmeira em flor, by Eduardo Frick (http://gigapan.com/gigapans/95168/)

This bee close-up does not convey the full impact of an image that lets you zoom from standard size to extreme close-ups of insects, other animals, portions of palm fronds, etc. To get the full impact, view the original at the GigaPan link above.

Here’s more about the Nearby Nature GigaBlitz events from the June 13, 2012 Carnegie Mellon University news release,

The Nearby Nature GigaBlitz events are citizen science projects in which people use gigapixel imagery technology to document biodiversity in their backyards — if not literally in their backyards, then in a nearby woodlot or vacant field. These images are then shared and made available for analysis via the GigaPan website. The events are organized by a trio of biologists and their partners at Carnegie Mellon University’s CREATE Lab.

December’s GigaBlitz included contributors from the United States, Canada, Spain, Japan, South Africa, Brazil, Singapore, Indonesia and Australia. Ten of the best images are featured in the June issue of GigaPan Magazine, an online publication of CMU’s CREATE Lab.

The issue was guest-edited by the organizers of the GigaBlitz: Ken Tamminga, professor of landscape architecture at Penn State University; Dennis vanEngelsdorp, research scientist at the University of Maryland’s Department of Entomology; and M. Alex Smith, assistant professor of integrative biology at the University of Guelph, Ontario.

The inspiration for the gigablitz comes from the world of ornithology (bird watching), from the Carnegie Mellon University June 13, 2012 news release,

Tamminga, vanEngelsdorp and Smith envisioned something akin to a BioBlitz, an intensive survey of a park or nature preserve that attempts to identify all living species within an area at a given time, and citizen science efforts such as the Audubon Society’s Christmas Bird Count.

“We imagined using these widely separated, but nearby, panoramas as a way of collecting biodiversity data – similar to the Christmas bird count – where citizen scientists surveyed their world, then distributed and shared that data with the world through public GigaPans,” they wrote. “The plus of the GigaPan approach was that the sharing was bi-directional – not merely ‘This is what I saw,’ but also hearing someone say, ‘This is what I found in your GigaPan.’”

Here’s an excerpt from the Nearby Nature gigablitz June 20 -26, 2012 Call for Entries,

The challenge: Gigapixel imaging can reveal a surprising range of animal and plant species in the ordinary and sometimes extraordinary settings in which we live, learn, and work. Your challenge is to capture panoramas of Nearby Nature and share them with your peers at gigapan.org for further exploration. We hope that shared panoramas and snapshotting will help the GigaPan community more deeply explore, document, and celebrate the diversity of life forms in their local habitats.

Gigablitz timing: The event will take place over a 7-day period – a gigablitz – that aligns with the June solstice. Please capture and upload your images to the gigapan.org website between 6am, June 20 and 11pm, June 26 (your local time).

Juried selections: Panoramas that meet the criteria below are eligible for inclusion in the science.gigapan.org Nearby Nature collection. The best panoramas will be selected by a jury for publication in an issue of GigaPan Magazine dedicated to the Nearby Nature collection. Selection criteria are as follows:

  • Biodiversity: the image is species rich.
  • Uniqueness: the image contains particularly interesting or unique species, or the image captures a sense of the resilience of life-forms in human-dominated settings.
  • Nearby Nature context: image habitat is part of, or very near, the everyday places that people inhabit.
  • Image quality: the image is of high quality and is visually captivating.

Subjects and locations: The gigablitz subject may be any “nearby” location in which you have a personal interest: schoolyard garden, backyard habitat, balcony planter, village grove, nearby remnant woods, vacant lot meadow next door and others. Panoramas with high species richness (the range of different species in a given area) that are part of everyday places are especially encouraged. It is the process of making and sharing gigapans that will transform the ordinary into the extraordinary.

Here are 3 things to keep in mind when choosing a place:

  • The panorama should focus on organisms in a habitat near your home, school or place of work.
  • Any life-forms are acceptable, such as plants, insects, and other animals.
  • Rich, sharp detail will encourage snapshotters to help identify organisms in your panorama. Thus, your gigapan unit should be positioned close to the subject habitat – within 100 feet (30 meters), and preferably much closer. Up-close mini-habitats in the near-macro range are welcome.

Please do check the Call for Entries for additional information about the submissions.

As for the website which hosts the contest, I checked the About GigaPan page and found this,

What is a GigaPan?

Gigapans are gigapixel panoramas, digital images with billions of pixels. They are huge panoramas with fascinating detail, all captured in the context of a single brilliant photo. Phenomenally large, yet remarkably crisp and vivid, gigapans are available to be explored at GigaPan.com. Zoom in and discover the detail of over 50,000 panoramas from around the world.

A New Dimension for Photography

GigaPan gives experienced and novice photographers the technology to create high-resolution panorama images more easily than ever before, and the resulting GigaPan images offer viewers a new, unique perspective on the world.

GigaPan offers the first solution for shooting, viewing and exploring high-resolution panoramic images in a single system: EPIC series of robotic camera mounts capture photos using almost any digital camera; GigaPan Stitch Software automatically combines the thousands of images taken into a single image; and GigaPan.com enables the unique mega-high resolution viewing experience.
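A rough sanity check on why a robotic mount and stitching software are needed: the frame count for a panorama is the target pixel count divided by the non-overlapping pixels each frame contributes. A back-of-the-envelope sketch (the overlap figure is my assumption; stitching software typically needs some frame-to-frame overlap to align images),

```python
import math

def shots_needed(target_gigapixels: float, camera_mp: float,
                 overlap: float = 0.3) -> int:
    """Rough frame count for a stitched panorama: each frame contributes
    only its non-overlapping area (overlap assumed on both axes)."""
    effective_px = camera_mp * 1e6 * (1 - overlap) ** 2
    return math.ceil(target_gigapixels * 1e9 / effective_px)

# A 1-gigapixel panorama from a 10-megapixel point-and-shoot:
print(shots_needed(1.0, 10))  # a couple hundred frames
```

Aiming, shooting, and aligning hundreds of frames by hand is impractical, which is exactly the gap the EPIC mounts and Stitch software fill.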

GigaPan EPIC

GigaPan EPIC robotic mounts empower cameras to take hundreds, even thousands of photos, which are combined to create one highly detailed image with amazing depth and clarity.

The GigaPan EPIC and EPIC 100 are compatible with a broad range of point-and-shoot cameras and small DSLRs to capture gigapans, quickly and accurately. Light and compact, they are easy-to-use, and remarkably efficient. The EPIC Pro is designed to work with DSLR cameras and larger lenses, features advanced technology, and delivers stunning performance and precision. Strong enough to hold a camera and lens combination of up to 10 lbs, the EPIC Pro enables users to capture enormous panoramas with crisp, vivid detail.

Bringing Mars Rover Technology to Earth

The GigaPan EPIC series is based on the same technology employed by the Mars Rovers, Spirit and Opportunity, to capture the incredible images of the red planet. Now everyone has the opportunity to use technology developed for Mars to take their own incredible images.

GigaPan was formed in 2008 as a commercial spin-off of a successful research collaboration between a team of researchers at NASA and Carnegie Mellon University. The company’s mission is to bring this powerful, high-resolution imaging capability to a broad audience.

The original GigaPan prototype and related software were devised by a team led by Randy Sargent, a senior systems scientist at Carnegie Mellon West and the NASA Ames Research Center in Moffett Field, Calif., and Illah Nourbakhsh, an associate professor of robotics at Carnegie Mellon in Pittsburgh.

If I understand this rightly, this commercial enterprise (GigaPan), which offers hardware and software, also supports a community-sharing platform for the types of images made possible by the equipment it sells.

Trickster researchers at the University of Maryland and graphene photodetectors

Trickster figures are a feature in mythologies around the world. They’re always mischievous, tricking humans and other beings into doing things they shouldn’t.

Tricksters can be good and/or villainous. For example, Raven in the Pacific Northwest gave us the sun, moon, and stars but stole them in the first place from someone else.

I don’t think the researchers at the University of Maryland have done anything comparable (i.e., stealing) with their graphene discovery but the analogy does amuse me. From the June 3, 2012 news release by Lee Tune,

Researchers at the Center for Nanophysics and Advanced Materials of the University of Maryland have developed a new type of hot electron bolometer, a sensitive detector of infrared light, that can be used in a huge range of applications, from detection of chemical and biochemical weapons from a distance and use in security imaging technologies such as airport body scanners, to chemical analysis in the laboratory and studying the structure of the universe through new telescopes. [emphasis mine]

Before launching into why I highlighted the part about the universe and the telescopes, here’s the problem the researchers were solving (from the news release),

Most photon detectors are based on semiconductors. Semiconductors are materials which have a range of energies that their electrons are forbidden to occupy, called a “band gap”. The electrons in a semiconductor can absorb photons of light having energies greater than the band gap energy, and this property forms the basis of devices such as photovoltaic cells.

Graphene, a single atom-thick plane of graphite, is unique in that it has a band gap of exactly zero energy; graphene can therefore absorb photons of any energy. This property makes graphene particularly attractive for absorbing very low energy photons (terahertz and infrared) which pass through most semiconductors. Graphene has another attractive property as a photon absorber: the electrons which absorb the energy are able to retain it efficiently, rather than losing energy to vibrations of the atoms of the material. This same property also leads to extremely low electrical resistance in graphene.
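The band-gap arithmetic here is simple: a photon is absorbed only if its energy hc/λ exceeds the gap, so every material has a cutoff wavelength beyond which it’s blind, and graphene’s zero gap means no cutoff at all. A quick check with textbook numbers (these are not figures from the Maryland paper),

```python
# A photon is absorbed only if its energy h*c/lambda exceeds the band gap,
# so each detector material has a cutoff wavelength beyond which it is
# blind.  Band-gap values here are textbook figures, not from the paper.
H_C_EV_NM = 1239.84  # h*c in eV*nm

def cutoff_nm(band_gap_ev: float) -> float:
    if band_gap_ev == 0:
        return float("inf")  # graphene: zero gap, no cutoff at all
    return H_C_EV_NM / band_gap_ev

print(cutoff_nm(1.12))  # silicon: blind beyond ~1100 nm (near-infrared)
print(cutoff_nm(0.0))   # graphene: inf -- absorbs terahertz and beyond
```

Terahertz photons carry only milli-electron-volts of energy, far below any conventional semiconductor gap, which is why a gapless absorber is so attractive in that band.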

University of Maryland researchers exploited these two properties to devise the hot electron bolometer. It works by measuring the change in the resistance that results from the heating of the electrons as they absorb light.
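The readout chain can be sketched in a few lines: absorbed power raises the electron temperature by P/G (with G the thermal conductance out of the electrons), the resistance shifts by (dR/dT)·ΔT, and a bias current turns that shift into a voltage. All the numbers below are illustrative, not the paper’s calibration,

```python
# Minimal hot-electron-bolometer readout model (my sketch, not the paper's
# calibration): absorbed power heats the electrons, the temperature rise
# shifts the resistance, and a bias current converts that into a voltage.
def bolometer_signal_v(p_absorbed_w: float,
                       g_thermal_w_per_k: float,
                       dR_dT_ohm_per_k: float,
                       i_bias_a: float) -> float:
    delta_t = p_absorbed_w / g_thermal_w_per_k  # electron heating
    delta_r = dR_dT_ohm_per_k * delta_t         # resistance shift
    return i_bias_a * delta_r                   # read out as a voltage

# Illustrative numbers only: 1 pW absorbed, weak electron-phonon coupling
print(bolometer_signal_v(1e-12, 1e-10, 50.0, 1e-6))
```

The sketch also shows why graphene’s properties matter: weak coupling to the lattice keeps G small (big ΔT per watt), and the trick below is all about making dR/dT large enough to measure.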

Normally, graphene’s resistance is almost independent of temperature, which makes it unsuitable for a bolometer.

Here’s how the researchers solved the problem (from the news release),

So the Maryland researchers used a special trick: when bilayer graphene is exposed to an electric field it has a small band gap, large enough that its resistance becomes strongly temperature dependent, but small enough to maintain its ability to absorb low energy infrared photons.

The researchers found that their bilayer graphene hot electron bolometer operating at a temperature of 5 Kelvin had comparable sensitivity to existing bolometers operating at similar temperatures, but was more than a thousand times faster.  They extrapolated the performance of the graphene bolometer to lower temperature and found that it may beat all existing technologies.

As usual, there is more work to be done (from the news release),

Some challenges remain. The bilayer graphene bolometer has a higher electrical resistance than similar devices using other materials, which may make it difficult to use at high frequencies. Additionally, bilayer graphene absorbs only a few percent of incident light. But the Maryland researchers are working on ways to get around these difficulties with new device designs, and are confident that graphene has a bright future as a photo-detecting material.

As for why I highlighted the passage about telescopes and the structure of the universe, our local particle physics laboratory (TRIUMF located in Vancouver, Canada) is hosting the Physics at the Large Hadron Collider (PLHC) conference this week. This is a big deal, from the 7th annual PLHC conference home page (Note: I have removed some links),

PLHC2012 is the seventh conference in the series. The previous conferences in this series were held in Prague (2003), Vienna (2004), Cracow (2006), Split (2008), Hamburg (2010) and Perugia (2011). The conference consists of invited and contributed talks, as well as posters, covering experiment and theory.

Topics at the conference

  • Beauty Physics
  • Heavy Ion Physics
  • Standard Model & Beyond
  • Supersymmetry
  • Higgs Boson

There was a June 3, 2012 public event (mentioned in my May 15, 2012 posting) featuring Rolf Heuer, Director General of CERN (European Particle Physics Laboratory), which houses the Large Hadron Collider and experiments where they are attempting to discern the structure of the universe. I did attend Heuer’s talk and I think one needs to be more of a physics aficionado than I am. Thankfully, I had watched the Perimeter Institute’s webcast (What the Higgs is going on?) when the big Higgs Boson announcement was made in December 2011 (mentioned in my Dec. 14, 2011 posting) and that helped.

There is of course an alternate view of the universe and its structure as presented by the story of Raven (from the Wikipedia essay [Note: I have removed a link]),

Raven steals the sun

This is an ancient story told on the Queen Charlotte Islands and includes how Raven helped to bring the Sun, Moon, Stars, Fresh Water, and Fire to the world.

Long ago, near the beginning of the world, Gray Eagle was the guardian of the Sun, Moon and Stars, of fresh water, and of fire. Gray Eagle hated people so much that he kept these things hidden. People lived in darkness, without fire and without fresh water.

Gray Eagle had a beautiful daughter, and Raven fell in love with her. In the beginning, Raven was a snow-white bird, and as such, he pleased Gray Eagle’s daughter. She invited him to her father’s longhouse.

When Raven saw the Sun, Moon and stars, and fresh water hanging on the sides of Eagle’s lodge, he knew what he should do. He watched for his chance to seize them when no one was looking. He stole all of them, and a brand of fire also, and flew out of the longhouse through the smoke hole. As soon as Raven got outside he hung the Sun up in the sky. It made so much light that he was able to fly far out to an island in the middle of the ocean. When the Sun set, he fastened the Moon up in the sky and hung the stars around in different places. By this new light he kept on flying, carrying with him the fresh water and the brand of fire he had stolen.

He flew back over the land. When he had reached the right place, he dropped all the water he had stolen. It fell to the ground and there became the source of all the fresh-water streams and lakes in the world. Then Raven flew on, holding the brand of fire in his bill. The smoke from the fire blew back over his white feathers and made them black. When his bill began to burn, he had to drop the firebrand. It struck rocks and hid itself within them. That is why, if you strike two stones together, sparks of fire will drop out.

Raven’s feathers never became white again after they were blackened by the smoke from the firebrand. That is why Raven is now a black bird.

While it’s less poetic in tone, there is an image from the University of Maryland illustrating their graphene photodetector,

Electrons in bilayer graphene are heated by a beam of light. Illustration by Loretta Kuo and Michelle Groce, University of Maryland.

With a song in your heart and multiplexed images in an atomic vapor

A specific piece of research has inspired a song with lyrics based on the text of a research paper and, weirdly, it works. You will have a song in your heart and on your lips and it’s all to do with storing images in an atomic vapor,

Hot, hot, hot, eh?

As for the research paper itself (Temporally multiplexed storage of images in a Gradient Echo Memory), it’s currently available at arXiv.org or in Optics Express, Vol. 20, Issue 11, pp. 12350-12358 (2012) DOI: 10.1364/OE.20.012350 (authors: Quentin Glorieux, Jeremy B. Clark, Alberto M. Marino, Zhifan Zhou, Paul D. Lett). The May 29, 2012 news item on Nanowerk offers some tantalizing tidbits about the work,

The storage of light-encoded messages on film and compact disks and as holograms is ubiquitous—grocery scanners, Netflix disks, credit-card images are just a few examples. And now light signals can be stored as patterns in a room-temperature vapor of atoms. Scientists at the Joint Quantum Institute [JQI] have stored not one but two letters of the alphabet in a tiny cell filled with rubidium (Rb) atoms which are tailored to absorb and later re-emit messages on demand. This is the first time two images have simultaneously been reliably stored in a non-solid medium and then played back.

In effect, this is the first stored and replayed atomic movie. Because the JQI researchers are able to store and replay two separate images, or “frames,” a few micro-seconds apart, the whole sequence can qualify as a feat of cinematography.

Here’s a little more detail about how this was done and some information about the implications,

Having stored one image (the letter N), the JQI physicists then stored a second image, the letter T, before reading both letters back in quick succession. The two “frames” of this movie, about a microsecond apart, were played back successfully every time, although typically only about 8 percent of the original light was redeemed, a percentage that will improve with practice. According to Paul Lett, one of the great challenges in storing images this way is to keep the atoms embodying the image from diffusing away. The longer the storage time (measured so far to be about 20 microseconds), the more diffusion occurs. The result is a fuzzy image.

Paul Lett plans to link up these new developments in storing images with his previous work on squeezed light. “Squeezing” light is one way to partially circumvent the Heisenberg uncertainty principle governing the ultimate measurement limitations. By allowing a poorer knowledge of one property of a stream of light—say its timing, or phase—one gains a sharper knowledge of a separate variable—in this case the light’s amplitude. This increased capability, at least for the one variable, allows higher precision in certain quantum measurements.
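For readers who like their physics in symbols, the trade-off Lett describes can be written down compactly. This is a textbook statement about the two quadratures of a light field (call them X₁, roughly the amplitude, and X₂, roughly the phase), not something taken from the paper itself, and sign/normalization conventions vary between texts:

```latex
% Uncertainty relation for the optical quadratures X_1 and X_2
% (in the convention where the vacuum state has \Delta X = 1/2):
\Delta X_1 \, \Delta X_2 \ge \tfrac{1}{4}

% A coherent (unsqueezed) state sits at the symmetric minimum:
%   \Delta X_1 = \Delta X_2 = 1/2
% A squeezed state trades one quadrature against the other:
%   \Delta X_1 = \tfrac{1}{2} e^{-r}  (squeezed: sharper amplitude)
%   \Delta X_2 = \tfrac{1}{2} e^{+r}  (anti-squeezed: noisier phase)
% for squeezing parameter r > 0; the product still meets the bound.
```

The point is that nothing beats the bound overall; squeezing just moves the unavoidable noise into the variable you care about least.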

“The big thing here,” said Lett, “is that this allows us to do images and do pulses (instead of individual photons) and it can be matched (hopefully) to our squeezed light source, so that we can soon try to store “quantum images” and make essentially a random access memory for continuous variable quantum information. The thing that really attracted us to this method—aside from its being pretty well-matched to our source of squeezed light—is that the ANU [Australian National University] group was able to get 87% recovery efficiency from it – which is, I think, the best anyone has seen in any optical system, so it holds great promise for a quantum memory.”

I may never totally understand this work but at least I now have a song to sing and for anyone who wants more details, the May 27, 2012 news item on Nanowerk provides details and images, as well as another opportunity to watch the song. I did check out the video on YouTube and found that it’s by therockcookiebottom and is part of a project, Song A Day: 1000 Days and Counting, that singer-songwriter Jonathan Mann started in Jan. 2009. I imagine that means he must be nearing the end. Thank you Jonathan for a very entertaining and educational song. He does offer memberships to support him and his song-a-day project and opportunities to hire him for any songwriting projects you may have.

Dental fillings that improve your teeth

If you have lousy teeth, this is exciting news. From the May 2, 2012 news item on Nanowerk (I have removed a link),

Scientists using nanotechology at the University of Maryland School of Dentistry have created the first cavity-filling composite that kills harmful bacteria and regenerates tooth structure lost to bacterial decay. [emphasis mine]

Rather than just limiting decay with conventional fillings, the new composite is a revolutionary dental weapon to control harmful bacteria, which co-exist in the natural colony of microorganisms in the mouth, says professor Huakun (Hockin) Xu, PhD, MS. [emphasis mine]

While the possibilities are promising, I find the idea of a weapon in my mouth disconcerting. (They might want to check out their metaphors a little more closely.) Moving on, there’s a little more detail about this new composite  (from the news item),

Fillings made from the School of Dentistry’s new nanocomposite, with antibacterial primer and antibacterial adhesive, should last longer than the typical five to 10 years, though the scientists have not thoroughly tested longevity. Xu says a key component of the new nanocomposite and nano-structured adhesive is calcium phosphate nanoparticles that regenerate tooth minerals. The antibacterial component has a base of quaternary ammonium and silver nanoparticles along with a high pH. The alkaline pH limits acid production by tooth bacteria.

“The bottom line is we are continuing to improve these materials and making them stronger in their antibacterial and remineralizing capacities as well as increasing their longevity,” Xu says.

The new products have been laboratory tested using biofilms from saliva of volunteers. The Xu team is planning to next test its products in animal teeth and in human volunteers in collaboration with the Federal University of Ceara in Brazil.

The folks at the enewsparkforest blog are not quite so sanguine about this dental development as per their May 3, 2012 posting on the topic (I have removed links),

A study conducted in 2008 and confirmed by another study in 2009 shows that washing nano-silver textiles releases substantial amounts of the nanosilver into the laundry discharge water, which will ultimately reach natural waterways and potentially poison fish and other aquatic organisms. A study found nanosilver to cause malformations and to be lethal to small fish at various stages of development since they are able to cross the egg membranes and move into the fish embryos. A 2010 study by scientists at Oregon State University and in the European Union highlights the major regulatory and educational issues that they believe should be considered before nanoparticles are used in pesticides.

As Dexter Johnson in his May 3, 2012 posting on his Nanoclast blog (on the Institute for Electrical and Electronics Engineers website) notes,

The researchers are continuing with their animal and human testing with the nanocomposite. Given that some sectors of the public are concerned about the potential risks of silver nanoparticles, they should probably take a look at the issue as part of their research.

This is not unreasonable especially in light of the concern some folks have had over mercury in dental fillings. Sufficient concern by the way to occasion this cautionary note from Health Canada (from the Mercury and Human Health webpage on their website),

Minimizing Your Risk

Elemental mercury from dental fillings does not generally pose a health risk. There is, however, a fairly small number of people who are hypersensitive to mercury. While Health Canada does not recommend that you replace existing mercury dental fillings, it does suggest that when the fillings need to be repaired, you may want to consider using a product that does not contain mercury.

Pregnant women, people allergic to mercury and those with impaired kidney function should avoid mercury fillings. Whenever possible, amalgam fillings should not be removed when you are pregnant because the removal may expose you to mercury vapour. When appropriate, the primary teeth of children should be filled with non-mercury materials.

Side note: I find it interesting that while Health Canada has not banned the use of mercury in fillings, it does advise against adding more mercury-laced fillings to your mouth and/or using them in your children’s primary teeth, if possible.

Getting back to silver nanoparticles in our mouths, I reiterate Dexter’s suggestion.

Asia’s research effort in nano-, bio-, and information technology integrated in Asian Research Network

The Feb. 29, 2012 news item by Cameron Chai on Azonano spells it out,

An Asian Research Network (ARN) has been formed by the Hanyang University of Korea and RIKEN of Japan in collaboration with other institutes and universities in Asia. This network has been launched to reinforce a strong education and research collaboration throughout Asia.

The Asian Research Network website is here. You will need to use your scroll bars as it appears to be partially constructed (or maybe my system is so creaky that I just can’t see everything on the page). Towards the bottom (right side) of the home page, there are a couple of red buttons for PDFs of the ARN Pamphlet and Research Articles.

From page 2 of the ARN pamphlet, here’s a listing of the member organizations,

KOREA

Hanyang University
Samsung Electronics
Electronics and Telecommunication Research Institute
Seoul National University
Institute of Pasteur Korea
Korea Research Institute of Chemical Technology
Korea Advanced Nano Fab Center

JAPAN

RIKEN

INDIA

National Chemical Laboratory
Shivaji University
Indian Institutes of Science Education and Research
Pune University
Indian Institute of Technology-Madras (In Progress)
Indian Institute of Science (In Progress)

USA

University of Texas at Dallas
UCLA (In Progress)
Stanford University (In Progress)
University of Maryland (In Progress)

CHINA

National Center for Nanoscience and Technology
Peking University

SINGAPORE

National University of Singapore
Nanyang Technological University (In Progress)

ISRAEL

Weizmann Institute of Science (In Progress)
Hebrew University Jerusalem

THAILAND

National Science and Technology Development Agency (In Progress)

I was a little surprised to see Israel on the list and on an even more insular note, why no Canada?

Getting back to the ARN, here are their aims, from page 2 of the ARN pamphlet,

We are committed to fostering talented human resources, creating a research network in which researchers in the region share their knowledge and experiences, and establishing a future-oriented partnership to globalize our research capabilities. To this end, we will achieve excellence in all aspects of education, research, and development in the area of fusion research between BT [biotechnology] and IT [information technology] based on NT [nanotechnology] in general. We will make a substantial contribution to the betterment of the global community as well as the Asian society.

I look forward to hearing more from them in the future.

Action Science Explorer (data visualization tool)

There’s a lot of data being generated and we need to find new ways to manage and navigate through it. The Dec. 8, 2011 news item by Ellen Ferrante and Lisa-Joy Zgorski on physorg.com describes a data visualization tool designed by the Human-Computer Interaction Laboratory (HCIL) at the University of Maryland,

The National Science Foundation (NSF)-funded Action Science Explorer (ASE) allows users to simultaneously search through thousands of academic papers, using a visualization method that determines how papers are connected, for instance, by topic, date, authors, etc. The goal is to use these connections to identify emerging scientific trends and advances.

“We are creating an early warning system for scientific breakthroughs,” said Ben Shneiderman, a professor at the University of Maryland (UM) and founding director of the UM Human-Computer Interaction Lab.

“Such a system would dramatically improve the capability of academic researchers, government program managers and industry analysts to understand emerging scientific topics so as to recognize breakthroughs, controversies and centers of activity,” said Shneiderman. “This would enable appropriate allocation of funds, encourage new collaborations among groups that unknowingly were working on similar topics and accelerate research progress.”
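The “connections between papers” that ASE visualizes are, at bottom, a citation graph. As a toy illustration of the idea (my own sketch, with made-up paper names — not ASE’s actual code or data), here is how you might count which papers in a small citation network attract the most citations, the simplest possible version of spotting a “center of activity”:

```python
# Toy citation network: each pair is (citing paper, cited paper).
# Paper names are hypothetical, purely for illustration.
from collections import Counter

citations = [
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_C", "paper_D"),
    ("paper_B", "paper_D"),
    ("paper_A", "paper_D"),
]

# Count how often each paper appears as the *cited* member of a pair.
cited_counts = Counter(cited for _, cited in citations)

# The most-cited papers surface first.
print(cited_counts.most_common(2))  # → [('paper_D', 3), ('paper_C', 2)]
```

Real tools like ASE go far beyond raw citation counts — clustering by topic, tracking dates and co-authorship, and rendering it all interactively — but the underlying data structure is this kind of directed graph.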

I went to the HCIL website to find more about the ASE project here where I also located a screen capture of the graphical interface,

A large-screen window layout of the overall interface of ASE. Credit: Cody Dunne, Robert Gove, Ben Shneiderman, Bonnie Dorr and Judith Klavans. University of Maryland

There’s also a video explaining some aspects of ASE,


For those who can’t get enough data, there’s a technical report here.

I expect we will be seeing more of these kinds of tools and not just for science research. There was this April 6, 2011 news item by Aaron Dubrow on physorg.com describing the US National Archives and Records Administration’s (NARA) new data visualization tools,

At the end of President George W. Bush’s administration in 2009, NARA received roughly 35 times as much data as it had received from the administration of President Bill Clinton, which itself was many times that of the previous administration. With the federal government increasingly using social media, cloud computing and other technologies to contribute to open government, this trend is not likely to decline. By 2014, NARA is expecting to accumulate more than 35 petabytes (quadrillions of bytes) of data in the form of electronic records.

“The National Archives is a unique national institution that responds to requirements for preservation, access and the continued use of government records,” said Robert Chadduck, acting director for the National Archives Center for Advanced Systems and Technologies.

After consulting with NARA about its needs, members of TACC’s [Texas Advanced Computing Center] Data and Information Analysis group developed a multi-pronged approach that combines different data analysis methods into a visualization framework. The visualizations act as a bridge between the archivist and the data by interactively rendering information as shapes and colors to facilitate an understanding of the archive’s structure and content.

I’d best get ready to develop new literacy skills as these data visualization tools come into play.