Category Archives: nanophotonics

Nanophotonics transforms Raman spectroscopy at Rice University (US)

This new technique for sensing molecules is intriguing. From a July 15, 2014 news item on Azonano,

Nanophotonics experts at Rice University [Texas, US] have created a unique sensor that amplifies the optical signature of molecules by about 100 billion times. Newly published tests found the device could accurately identify the composition and structure of individual molecules containing fewer than 20 atoms.

The new imaging method, which is described this week in the journal Nature Communications, uses a form of Raman spectroscopy in combination with an intricate but mass-reproducible optical amplifier. Researchers at Rice’s Laboratory for Nanophotonics (LANP) said the single-molecule sensor is about 10 times more powerful than previously reported devices.

A July 15, 2014 Rice University news release (also on EurekAlert), which originated the news item, provides more detail about the research,

“Ours and other research groups have been designing single-molecule sensors for several years, but this new approach offers advantages over any previously reported method,” said LANP Director Naomi Halas, the lead scientist on the study. “The ideal single-molecule sensor would be able to identify an unknown molecule — even a very small one — without any prior information about that molecule’s structure or composition. That’s not possible with current technology, but this new technique has that potential.”

The optical sensor uses Raman spectroscopy, a technique pioneered in the 1930s that blossomed after the advent of lasers in the 1960s. When light strikes a molecule, most of its photons bounce off or pass directly through, but a tiny fraction — fewer than one in a trillion — are absorbed and re-emitted at an energy that differs from that of the incident light. By measuring and analyzing these re-emitted photons through Raman spectroscopy, scientists can decipher the types of atoms in a molecule as well as their structural arrangement.
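
To put a number on that energy difference: Raman shifts are conventionally reported in wavenumbers (cm^-1), and converting an excitation wavelength and a scattered wavelength into a shift is a one-liner. Here is a quick sketch of the arithmetic; the example values are mine, for illustration, not figures from the Rice study,

```python
# Raman shift: the energy difference between incident and scattered
# photons, conventionally reported in wavenumbers (cm^-1).
# Example values are illustrative, not from the Rice study.

def raman_shift_cm1(excitation_nm, scattered_nm):
    """Raman shift (cm^-1) from excitation and scattered wavelengths (nm)."""
    nm_to_cm = 1e-7  # 1 nm = 1e-7 cm
    return 1.0 / (excitation_nm * nm_to_cm) - 1.0 / (scattered_nm * nm_to_cm)

# A 785 nm laser with a Stokes-scattered photon detected at 890 nm:
print(f"{raman_shift_cm1(785.0, 890.0):.0f} cm^-1")  # ~1503 cm^-1
```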

Scientists have created a number of techniques to boost Raman signals. In the new study, LANP graduate student Yu Zhang used one of these, a two-coherent-laser technique called “coherent anti-Stokes Raman spectroscopy,” or CARS. By using CARS in conjunction with a light amplifier made of four tiny gold nanodiscs, Halas and Zhang were able to measure single molecules in a powerful new way. LANP has dubbed the new technique “surface-enhanced CARS,” or SECARS.

“The two-coherent-laser setup in SECARS is important because the second laser provides further amplification,” Zhang said. “In a conventional single-laser setup, photons go through two steps of absorption and re-emission, and the optical signatures are usually amplified around 100 million to 10 billion times. By adding a second laser that is coherent with the first one, the SECARS technique employs a more complex multiphoton process.”
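
As a rough back-of-the-envelope (my own schematic illustration, not the paper's analysis): surface-enhanced Raman signals are often modeled as scaling with roughly the fourth power of the local field enhancement g, and coherent multiphoton schemes pick up additional factors of g, which is why a second coherent laser buys so much. The exponents and g values below are assumptions, chosen only to show how quickly modest field enhancements compound,

```python
# Toy scaling of surface-enhanced Raman amplification with the local
# field enhancement g = |E_local / E_incident|. The usual SERS estimate
# scales as g**4; coherent multiphoton processes add further field
# factors. Exponents and g values here are schematic assumptions.

def enhancement(g, n_field_factors):
    return g ** n_field_factors

for g in (30.0, 100.0, 300.0):
    print(f"g = {g:5.0f}: g^4 = {enhancement(g, 4):.1e}, "
          f"g^6 = {enhancement(g, 6):.1e}")
# g = 100 gives g^4 = 1.0e+08, squarely in the quoted 10^8-10^10 range.
```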

Zhang said the additional amplification gives SECARS the potential to address most unknown samples. That’s an added advantage over current techniques for single-molecule sensing, which generally require prior knowledge about a molecule’s resonant frequency before it can be accurately measured.

Another key component of the SECARS process is the device’s optical amplifier, which contains four tiny gold discs in a precise diamond-shaped arrangement. The gap in the center of the four discs is about 15 nanometers wide. Owing to an optical effect called a “Fano resonance,” the optical signatures of molecules caught in that gap are dramatically amplified because of the efficient light harvesting and signal scattering properties of the four-disc structure.

Fano resonance requires a special geometric arrangement of the discs, and one of LANP’s specialties is the design, production and analysis of Fano-resonant plasmonic structures like the four-disc “quadrumer.” In previous LANP research, other geometric disc structures were used to create powerful optical processors.

Zhang said the quadrumer amplifiers are a key to SECARS, in part because they are created with standard e-beam lithographic techniques, which means they can be easily mass-produced.

“A 15-nanometer gap may sound small, but the gap in most competing devices is on the order of 1 nanometer,” Zhang said. “Our design is much more robust because even the smallest defect in a one-nanometer device can have significant effects. Moreover, the larger gap also results in a larger target area, the area where measurements take place. The target area in our device is hundreds of times larger than the target area in a one-nanometer device, and we can measure molecules anywhere in that target area, not just in the exact center.”

Halas, the Stanley C. Moore Professor in Electrical and Computer Engineering and a professor of biomedical engineering, chemistry, physics and astronomy at Rice, said the potential applications for SECARS include chemical and biological sensing as well as metamaterials research. She said scientific labs are likely to be the first beneficiaries of the technology.

“Amplification is important for sensing small molecules because the smaller the molecule, the weaker the optical signature,” Halas said. “This amplification method is the most powerful yet demonstrated, and it could prove useful in experiments where existing techniques can’t provide reliable data.”

Here’s a link to and a citation for the paper,

Coherent anti-Stokes Raman scattering with single-molecule sensitivity using a plasmonic Fano resonance by Yu Zhang, Yu-Rong Zhen, Oara Neumann, Jared K. Day, Peter Nordlander & Naomi J. Halas. Nature Communications 5, Article number: 4424 doi:10.1038/ncomms5424 Published 14 July 2014

This paper is behind a paywall.

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories

Professor Wei Lu (whose work on memristors has been mentioned here a few times [an April 15, 2010 posting and an April 19, 2012 posting]) has made a discovery about memristors with significant implications (from a June 25, 2014 news item on Azonano),

In work that unmasks some of the magic behind memristors and “resistive random access memory,” or RRAM—cutting-edge computer components that combine logic and memory functions—researchers have shown that the metal particles in memristors don’t stay put as previously thought.

The findings have broad implications for the semiconductor industry and beyond. They show, for the first time, exactly how some memristors remember.

A June 24, 2014 University of Michigan news release, which originated the news item, includes Lu’s perspective on this discovery and more details about it,

“Most people have thought you can’t move metal particles in a solid material,” said Wei Lu, associate professor of electrical and computer engineering at the University of Michigan. “In a liquid and gas, it’s mobile and people understand that, but in a solid we don’t expect this behavior. This is the first time it has been shown.”

Lu, who led the project, and colleagues at U-M and the Electronic Research Centre Jülich in Germany used transmission electron microscopes to watch and record what happens to the atoms in the metal layer of their memristor when they exposed it to an electric field. The metal layer was encased in the dielectric material silicon dioxide, which is commonly used in the semiconductor industry to help route electricity.

They observed the metal atoms becoming charged ions, clustering with up to thousands of others into metal nanoparticles, and then migrating and forming a bridge between the electrodes at the opposite ends of the dielectric material.

They demonstrated this process with several metals, including silver and platinum. And depending on the materials involved and the electric current, the bridge formed in different ways.

The bridge, also called a conducting filament, stays put after the electrical power is turned off in the device. So when researchers turn the power back on, the bridge is there as a smooth pathway for current to travel along. Further, the electric field can be used to change the shape and size of the filament, or break the filament altogether, which in turn regulates the resistance of the device, or how easily current can flow through it.

Computers built with memristors would encode information in these different resistance values, which are in turn based on different arrangements of conducting filaments.
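
For readers who like to see the idea in code, here is a minimal sketch of the best-known memristor abstraction, the linear ion-drift model from HP's 2008 Nature paper. It is a simplification, and it is not the filament-growth dynamics Lu's group actually imaged, but it captures the essential point: the resistance depends on an internal state that is moved by current and stays put when the power is removed. All values are illustrative,

```python
# Minimal sketch of a memristor as a state-dependent resistor (linear
# ion-drift model, Strukov et al., Nature 2008). Not the filament-growth
# mechanism imaged in this study; just the abstract idea that an internal
# state w sets the resistance and persists when the current stops.

R_ON, R_OFF = 100.0, 16_000.0  # ohms, illustrative
D = 10e-9                      # device thickness (m)
MU_V = 1e-10                   # ion mobility; chosen so the toy switches

def resistance(w):
    x = w / D                  # normalized state, 0..1
    return R_ON * x + R_OFF * (1.0 - x)

def step(w, v, dt):
    i = v / resistance(w)                    # current through the device
    w_new = w + MU_V * (R_ON / D) * i * dt   # state drifts with current
    return min(max(w_new, 0.0), D)           # state bounded by the device

w = 0.1 * D
for _ in range(1000):                        # apply a +1 V write pulse
    w = step(w, 1.0, 1e-6)
print(f"resistance after write: {resistance(w):.0f} ohms")  # low = "1"
# Remove the voltage and w stops moving: the resistance is remembered.
```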

Memristor researchers like Lu and his colleagues had theorized that the metal atoms in memristors moved, but previous results had yielded differently shaped filaments, so they thought they hadn’t nailed down the underlying process.

“We succeeded in resolving the puzzle of apparently contradicting observations and in offering a predictive model accounting for materials and conditions,” said Ilia Valov, principal investigator at the Electronic Materials Research Centre Jülich. “Also the fact that we observed particle movement driven by electrochemical forces within dielectric matrix is in itself a sensation.”

The implications for this work (from the news release),

The results could lead to a new approach to chip design—one that involves using fine-tuned electrical signals to lay out integrated circuits after they’re fabricated. And it could also advance memristor technology, which promises smaller, faster, cheaper chips and computers inspired by biological brains in that they could perform many tasks at the same time.

As is becoming more common these days (from the news release),

Lu is a co-founder of Crossbar Inc., a Santa Clara, Calif.-based startup working to commercialize RRAM. Crossbar has just completed a $25 million Series C funding round.

Here’s a link to and a citation for the paper,

Electrochemical dynamics of nanoscale metallic inclusions in dielectrics by Yuchao Yang, Peng Gao, Linze Li, Xiaoqing Pan, Stefan Tappertzhofen, ShinHyun Choi, Rainer Waser, Ilia Valov, & Wei D. Lu. Nature Communications 5, Article number: 4232 doi:10.1038/ncomms5232 Published 23 June 2014

This paper is behind a paywall.

The other party instrumental in the development and, they hope, the commercialization of memristors is HP (Hewlett Packard) Laboratories (HP Labs). Anyone familiar with this blog will likely know I have frequently covered the topic, starting with an essay explaining the basics on my Nanotech Mysteries wiki (or you can check this more extensive and more recently updated entry on Wikipedia) and with subsequent entries here over the years. The most recent entry is a Jan. 9, 2014 posting which featured the then latest information on the HP Labs memristor situation (scroll down about 50% of the way). This new information is more a revelation of details than an update on the memristor’s status. Sebastian Anthony’s June 11, 2014 article for extremetech.com lays out the situation plainly (Note: Links have been removed),

HP, one of the original 800lb Silicon Valley gorillas that has seen much happier days, is staking everything on a brand new computer architecture that it calls… The Machine. Judging by an early report from Bloomberg Businessweek, up to 75% of HP’s once fairly illustrious R&D division — HP Labs – are working on The Machine. As you would expect, details of what will actually make The Machine a unique proposition are hard to come by, but it sounds like HP’s groundbreaking work on memristors (pictured top) and silicon photonics will play a key role.

First things first, we’re probably not talking about a consumer computing architecture here, though it’s possible that technologies commercialized by The Machine will percolate down to desktops and laptops. Basically, HP used to be a huge player in the workstation and server markets, with its own operating system and hardware architecture, much like Sun. Over the last 10 years though, Intel’s x86 architecture has rapidly taken over, to the point where HP (and Dell and IBM) are essentially just OEM resellers of commodity x86 servers. This has driven down enterprise profit margins — and when combined with its huge stake in the diminishing PC market, you can see why HP is rather nervous about the future. The Machine, and IBM’s OpenPower initiative, are both attempts to get out from underneath Intel’s x86 monopoly.

While exact details are hard to come by, it seems The Machine is predicated on the idea that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements. HP is working on two technologies that could solve both problems: Memristors could replace RAM and long-term flash storage, and silicon photonics could provide faster on- and off-motherboard buses. Memristors essentially combine the benefits of DRAM and flash storage in a single, hyper-fast, super-dense package. Silicon photonics is all about reducing optical transmission and reception to a scale that can be integrated into silicon chips (moving from electrical to optical would allow for much higher data rates and lower power consumption). Both technologies can be built using conventional fabrication techniques.

In a June 11, 2014 article by Ashlee Vance for Bloomberg Businessweek, the company’s CTO (Chief Technical Officer), Martin Fink, provides new details,

That’s what they’re calling it at HP Labs: “the Machine.” It’s basically a brand-new type of computer architecture that HP’s engineers say will serve as a replacement for today’s designs, with a new operating system, a different type of memory, and superfast data transfer. The company says it will bring the Machine to market within the next few years or fall on its face trying. “We think we have no choice,” says Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday [June 11, 2014].

In my Jan. 9, 2014 posting there’s a quote from Martin Fink stating that 2018 would be the earliest date for the company’s StoreServ arrays to be packed with 100TB Memristor drives (the Machine?). The company later clarified the comment by noting that it’s very difficult to set dates for new technology arrivals.

Vance shares what could be a stirring ‘origins’ story of sorts, provided the Machine is successful,

The Machine started to take shape two years ago, after Fink was named director of HP Labs. Assessing the company’s projects, he says, made it clear that HP was developing the needed components to create a better computing system. Among its research projects: a new form of memory known as memristors; and silicon photonics, the transfer of data inside a computer using light instead of copper wires. And its researchers have worked on operating systems including Windows, Linux, HP-UX, Tru64, and NonStop.

Fink and his colleagues decided to pitch HP Chief Executive Officer Meg Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

Here is the memristor making an appearance in Vance’s article,

HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits. At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.
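
The “grid of wires” description maps naturally onto a crossbar array: each row-column intersection stores a resistance, and reading a bit means selecting a row and sensing the column currents. Here is a deliberately naive sketch of that readout (values are hypothetical; real arrays also have to manage sneak-path currents and use select devices, which this ignores),

```python
# Naive crossbar readout: each intersection stores a resistance
# (low = "1", high = "0"); apply a small read voltage to a row and
# threshold the column currents. Ignores sneak paths and select
# devices that real arrays need. Values are hypothetical.

V_READ = 0.2  # volts, small enough not to disturb the stored state

grid = [[1e3, 1e6, 1e3],   # resistances in ohms
        [1e6, 1e6, 1e3]]

def read_row(row):
    currents = [V_READ / r for r in grid[row]]
    return [1 if i > V_READ / 1e4 else 0 for i in currents]

print(read_row(0))  # [1, 0, 1]
```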

New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer’s chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. …

Peter Bright in his June 11, 2014 article for Ars Technica opens his article with a controversial statement (Note: Links have been removed),

In 2008, scientists at HP invented a fourth fundamental component to join the resistor, capacitor, and inductor: the memristor. [emphasis mine] Theorized back in 1971, memristors showed promise in computing as they can be used to both build logic gates, the building blocks of processors, and also act as long-term storage.

Whether or not the memristor is a fourth fundamental component has been a matter of some debate as you can see in this Memristor entry (section on Memristor definition and criticism) on Wikipedia.

Bright goes on to provide a 2016 delivery date for some type of memristor-based product and additional technical insight about the Machine,

… By 2016, the company plans to have memristor-based DIMMs, which will combine the high storage densities of hard disks with the high performance of traditional DRAM.

John Sontag, vice president of HP Systems Research, said that The Machine would use “electrons for processing, photons for communication, and ions for storage.” The electrons are found in conventional silicon processors, and the ions are found in the memristors. The photons are because the company wants to use optical interconnects in the system, built using silicon photonics technology. With silicon photonics, photons are generated on, and travel through, “circuits” etched onto silicon chips, enabling conventional chip manufacturing to construct optical parts. This allows the parts of the system using photons to be tightly integrated with the parts using electrons.

The memristor story has proved to be even more fascinating than I thought in 2008 and I was already as fascinated as could be, or so I thought.

Cardiac pacemakers: Korea’s in vivo demonstration of a self-powered device and the UK’s breath-based approach

As best I can determine, the last mention of a self-powered pacemaker and the like on this blog was in a Nov. 5, 2012 posting (Developing self-powered batteries for pacemakers). This latest news from The Korea Advanced Institute of Science and Technology (KAIST) is, I believe, the first time that such a device has been successfully tested in vivo. From a June 23, 2014 news item on ScienceDaily,

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring frequent replacements, which may expose patients to the risks involved in repeated medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

A June 23, 2014 KAIST news release on EurekAlert, which originated the news item, provides more details,

The artificial cardiac pacemaker is widely acknowledged as medical equipment that is integrated into the human body to regulate the heartbeats through electrical stimulation to contract the cardiac muscles of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested output reached up to 8.2 V and 0.22 mA under bending and pushing motions, values high enough to directly stimulate the rat’s heart.
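
A quick sanity check on those numbers: 8.2 V at 0.22 mA works out to about 1.8 mW of peak power, comfortably above the tens of microwatts a modern pacemaker is commonly said to draw. The pacemaker figure below is a ballpark from the general literature, not from the KAIST release,

```python
# Peak harvested power versus a ballpark pacemaker power budget.
v_peak = 8.2            # volts, from the news release
i_peak = 0.22e-3        # amperes, from the news release
p_peak = v_peak * i_peak

pacemaker_draw = 50e-6  # watts; assumed mid-range literature figure
print(f"peak harvested power: {p_peak * 1e3:.2f} mW")      # ~1.80 mW
print(f"margin over assumed draw: {p_peak / pacemaker_draw:.0f}x")
```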

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”

This image illustrating a self-powered nanogenerator for a cardiac pacemaker has been provided by KAIST,

This picture shows that a self-powered cardiac pacemaker is enabled by a flexible piezoelectric energy harvester. Credit: KAIST

Here’s a link to and a citation for the paper,

Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester by Geon-Tae Hwang, Hyewon Park, Jeong-Ho Lee, SeKwon Oh, Kwi-Il Park, Myunghwan Byun, Hyelim Park, Gun Ahn, Chang Kyu Jeong, Kwangsoo No, HyukSang Kwon, Sang-Goo Lee, Boyoung Joung, and Keon Jae Lee. Advanced Materials DOI: 10.1002/adma.201400562
Article first published online: 17 APR 2014

This paper is behind a paywall.

There was a May 15, 2014 KAIST news release on EurekAlert announcing this same piece of research but from a technical perspective,

The energy efficiency of KAIST’s piezoelectric nanogenerator has increased by almost 40 times, a step closer to the commercialization of flexible energy harvesters that can supply power indefinitely to wearable, implantable electronic devices.

Nanogenerators are innovative self-powered energy harvesters that convert kinetic energy from vibrational and mechanical sources into electrical power, removing the need for external circuits or batteries in electronic devices. This innovation is vital in realizing sustainable energy generation in isolated, inaccessible, or indoor environments and even in the human body.

Nanogenerators, flexible and lightweight energy harvesters built on plastic substrates, can scavenge energy from extremely tiny movements of natural sources and the human body, such as wind, water flow, heartbeats, and diaphragm and respiration activities, to generate electrical signals. The generators are not only self-powered, flexible devices but can also provide permanent power sources to implantable biomedical devices, including cardiac pacemakers and deep brain stimulators.

However, poor energy efficiency and a complex fabrication process have posed challenges to the commercialization of nanogenerators. Keon Jae Lee, Associate Professor of Materials Science and Engineering at KAIST, and his colleagues have recently proposed a solution by developing a robust technique to transfer a high-quality piezoelectric thin film from bulk sapphire substrates to plastic substrates using laser lift-off (LLO).

Applying the inorganic-based laser lift-off (LLO) process, the research team produced large-area PZT thin-film nanogenerators on flexible substrates (2 cm x 2 cm).

“We were able to convert a high-output performance of ~250 V from the slight mechanical deformation of a single thin plastic substrate. Such output power is just enough to turn on 100 LED lights,” Keon Jae Lee explained.

The self-powered nanogenerators can also work with finger and foot motions. For example, under the irregular and slight bending motions of a human finger, the measured current signals reached a high electric current of ~8.7 μA. In addition, the piezoelectric nanogenerator has world-record power conversion efficiency, almost 40 times higher than previously reported similar research results, solving the drawbacks related to fabrication complexity and low energy efficiency.

Lee further commented,

“Building on this concept, it is highly expected that tiny mechanical motions, including human body movements of muscle contraction and relaxation, can be readily converted into electrical energy and, furthermore, acted as eternal power sources.”

The research team is currently studying a method to build three-dimensional stacking of flexible piezoelectric thin films to enhance output power, as well as conducting a clinical experiment with a flexible nanogenerator.

In addition to the 2012 posting I mentioned earlier, there was also this July 12, 2010 posting which described research on harvesting biomechanical movement (heart beat, blood flow, muscle stretching, or even irregular vibration) at the Georgia (US) Institute of Technology, where the lead researcher observed,

…  Wang [Professor Zhong Lin Wang at Georgia Tech] tells Nanowerk. “However, the applications of the nanogenerators under in vivo and in vitro environments are distinct. Some crucial problems need to be addressed before using these devices in the human body, such as biocompatibility and toxicity.”

Bravo to the KAIST researchers for getting this research to the in vivo testing stage.

Meanwhile at the University of Bristol and at the University of Bath, researchers have received funding for a new approach to cardiac pacemakers, designed with the breath in mind. From a June 24, 2014 news item on Azonano,

Pacemaker research from the Universities of Bath and Bristol could revolutionise the lives of over 750,000 people who live with heart failure in the UK.

The British Heart Foundation (BHF) is awarding funding to researchers developing a new type of heart pacemaker that modulates its pulses to match breathing rates.

A June 23, 2014 University of Bristol press release, which originated the news item, provides some context,

During 2012-13 in England, more than 40,000 patients had a pacemaker fitted.

Currently, the pulses from pacemakers are set at a constant rate when fitted which doesn’t replicate the natural beating of the human heart.

The normal healthy variation in heart rate during breathing is lost in cardiovascular disease and is an indicator for sleep apnoea, cardiac arrhythmia, hypertension, heart failure and sudden cardiac death.

The device is then briefly described (from the press release),

The novel device being developed by scientists at the Universities of Bath and Bristol uses synthetic neural technology to restore this natural variation of heart rate with lung inflation, and is targeted towards patients with heart failure.

The device works by saving the heart energy, improving its pumping efficiency and enhancing blood flow to the heart muscle itself.  Pre-clinical trials suggest the device gives a 25 per cent increase in the pumping ability, which is expected to extend the life of patients with heart failure.

One aim of the project is to miniaturise the pacemaker device to the size of a postage stamp and to develop an implant that could be used in humans within five years.

Dr Alain Nogaret, Senior Lecturer in Physics at the University of Bath, explained: “This is a multidisciplinary project with strong translational value. By combining fundamental science and nanotechnology we will be able to deliver a unique treatment for heart failure which is not currently addressed by mainstream cardiac rhythm management devices.”

The research team has already patented the technology and is working with NHS consultants at the Bristol Heart Institute, the University of California at San Diego and the University of Auckland. [emphasis mine]

Professor Julian Paton, from the University of Bristol, added: “We’ve known for almost 80 years that the heart beat is modulated by breathing but we have never fully understood the benefits this brings. The generous new funding from the BHF will allow us to reinstate this naturally occurring synchrony between heart rate and breathing and understand how it brings therapy to hearts that are failing.”

Professor Jeremy Pearson, Associate Medical Director at the BHF, said: “This study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future.”

Given some current events (‘Tesla opens up its patents’, Mike Masnick’s June 12, 2014 posting on Techdirt), I wonder what the situation will be vis à vis patents by the time this device gets to market.

Swelling sensors and detecting gases at the nanoscale

A June 17, 2014 news item on Nanowerk features a new approach to sensing gases from the Massachusetts Institute of Technology (MIT),

Using microscopic polymer light resonators that expand in the presence of specific gases, researchers at MIT’s Quantum Photonics Laboratory have developed new optical sensors with predicted detection levels in the parts-per-billion range. Optical sensors are ideal for detecting trace gas concentrations due to their high signal-to-noise ratio, compact, lightweight nature, and immunity to electromagnetic interference.

Although other optical gas sensors had been developed before, the MIT team conceived an extremely sensitive, compact way to detect vanishingly small amounts of target molecules.

A June 17, 2014 American Institute of Physics (AIP) news release by John Arnst, which originated the news item, describes the new technique in some detail,

The researchers fabricated wavelength-scale photonic crystal cavities from PMMA, an inexpensive and flexible polymer that swells when it comes into contact with a target gas. The polymer is infused with fluorescent dye, which emits selectively at the resonant wavelength of the cavity through a process called the Purcell effect. At this resonance, a specific color of light reflects back and forth a few thousand times before eventually leaking out. A spectral filter detects this small color shift, which can occur at even sub-nanometer level swelling of the cavity, and in turn reveals the gas concentration.
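
Why a sub-nanometer swelling is detectable comes down to a scaling argument: to first order, a cavity’s resonant wavelength stretches with the cavity itself (Δλ/λ ≈ ΔL/L), and a high-quality-factor resonance is narrow enough (linewidth ≈ λ/Q) that even a tiny shift is a noticeable fraction of the line. The numbers below are my own assumptions for illustration, not the paper’s,

```python
# First-order estimate of the resonance shift from cavity swelling.
# All values are assumptions for illustration, not from the paper.

lam = 630.0    # resonance wavelength (nm), assumed visible dye emission
L = 9000.0     # cavity length scale (nm), ~ the 8-10 um structures
dL = 0.5       # swelling (nm), i.e. sub-nanometer
Q = 5000.0     # cavity quality factor, assumed

shift = lam * dL / L      # d(lambda)/lambda ~ dL/L to first order
linewidth = lam / Q       # narrower line -> smaller resolvable shift
print(f"shift: {shift * 1e3:.0f} pm, linewidth: {linewidth * 1e3:.0f} pm")
print(f"shift / linewidth: {shift / linewidth:.2f}")  # ~0.28: resolvable
```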

“These polymers are often used as coatings on other materials, so they’re abundant and safe to handle. Because of their deformation in response to biochemical substances, cavity sensors made entirely of this polymer lead to a sensor with faster response and much higher sensitivity,” said Hannah Clevenson. Clevenson is a PhD student in the electrical engineering and computer science department at MIT, who led the experimental effort in the lab of principal investigator Dirk Englund.

PMMA can be treated to interact specifically with a wide range of different target chemicals, making the MIT team’s sensor design highly versatile. There’s a wide range of potential applications for the sensor, said Clevenson, “from industrial sensing in large chemical plants for safety applications, to environmental sensing out in the field, to homeland security applications for detecting toxic gases, to medical settings, where the polymer could be treated for specific antibodies.”

The thin PMMA polymer films, which are 400 nanometers thick, are patterned with structures that are 8-10 micrometers long by 600 nanometers wide and suspended in the air. In one experiment, the films were embedded on tissue paper, which allowed 80 percent of the sensors to be suspended over the air gaps in the paper. Surrounding the PMMA film with air is important, Clevenson said, both because it allows the device to swell when exposed to the target gas, and because the optical properties of air allow the device to be designed to trap light travelling in the polymer film.

The team found that these sensors are easily reusable since the polymer shrinks back to its original length once the targeted gas has been removed.

The current experimental sensitivity of the devices is 10 parts per million, but the team predicts that with further refinement, they could detect gases with part-per-billion concentration levels.

The researchers have provided an image illustrating the sensor’s response to a target gas,

High-sensitivity detection of dilute gases is demonstrated by monitoring the resonance of a suspended polymer nanocavity. The inset shows the target gas molecules (darker) interacting with the polymer material (lighter). This interaction causes the nanocavity to swell, resulting in a shift of its resonance. Credit: H. Clevenson/MIT

Here’s a link to and a citation for the paper,

High sensitivity gas sensor based on high-Q suspended polymer photonic crystal nanocavity by  Hannah Clevenson, Pierre Desjardins, Xuetao Gan, and Dirk Englund. Appl. Phys. Lett. 104, 241108 (2014); http://dx.doi.org/10.1063/1.4879735

This is an open access paper.

Flatland, an 1884 novella or optics with graphene?

Flatland is both a novella and a story about optics with graphene. First, here’s more about the novella from its Wikipedia entry (Note: Links have been removed),

Flatland: A Romance of Many Dimensions is an 1884 satirical novella by the English schoolmaster Edwin Abbott Abbott. Writing pseudonymously as “A Square”,[1] the book used the fictional two-dimensional world of Flatland to offer pointed observations on the social hierarchy of Victorian culture. However, the novella’s more enduring contribution is its examination of dimensions.[2]

For the uninitiated, graphene is two-dimensional and, apparently, this characteristic could prove helpful for new types of optics (from a May 23, 2014 news item on Nanowerk; Note:  Links have been removed),

Researchers from CIC nanoGUNE, in collaboration with ICFO  [Institute of Photonic Sciences] and Graphenea, introduce a platform technology based on optical antennas for trapping and controlling light with the one-atom-thick material graphene. The experiments show that the dramatically squeezed graphene-guided light can be focused and bent, following the fundamental principles of conventional optics. The work, published yesterday in Science (“Controlling graphene plasmons with resonant metal antennas and spatial conductivity patterns”), opens new opportunities for smaller and faster photonic devices and circuits.

A May 23, 2014 CIC nanoGUNE news release (also on EurekAlert), which originated the news item, provides more detail,

Optical circuits and devices could make signal processing and computing much faster. “However, although light is very fast it needs too much space”, explains Rainer Hillenbrand, Ikerbasque Professor at nanoGUNE and UPV/EHU. In fact, propagating light needs at least the space of half its wavelength, which is much larger than state-of-the-art electronic building blocks in our computers. For that reason, a quest for squeezing light to propagate it through nanoscale materials arises.

The wonder material graphene, a single layer of carbon atoms with extraordinary properties, has been proposed as one solution. The wavelength of light captured by a graphene layer can be strongly shortened by a factor of 10 to 100 compared to light propagating in free space. As a consequence, this light propagating along the graphene layer – called graphene plasmon – requires much less space.
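
To see what a 10x to 100x squeeze means in practice, here is the arithmetic (illustrative numbers only): a mid-infrared wavelength, the corresponding graphene-plasmon wavelength, and the half-wavelength footprint that the same light would need in free space,

```python
# Wavelength squeezing in numbers, using the 10-100x range quoted above.
lam0_um = 10.0                    # free-space mid-IR wavelength (um), assumed
footprint_nm = lam0_um / 2 * 1e3  # half-wavelength footprint in free space

for squeeze in (10, 50, 100):
    lam_p_nm = lam0_um / squeeze * 1e3   # graphene plasmon wavelength (nm)
    print(f"squeeze {squeeze:3d}x: plasmon wavelength {lam_p_nm:6.0f} nm "
          f"(vs {footprint_nm:.0f} nm free-space footprint)")
```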

However, transforming light efficiently into graphene plasmons and manipulating them with a compact device has been a major challenge. A team of researchers from nanoGUNE, ICFO and Graphenea – members of the EU Graphene Flagship – now demonstrates that the antenna concept of radio wave technology could be a promising solution. The team shows that a nanoscale metal rod on graphene (acting as an antenna for light) can capture infrared light and transform it into graphene plasmons, analogous to a radio antenna converting radio waves into electromagnetic waves in a metal cable.

“We introduce a versatile platform technology based on resonant optical antennas for launching and controlling of propagating graphene plasmons, which represents an essential step for the development of graphene plasmonic circuits”, says team leader Rainer Hillenbrand. Pablo Alonso-González, who performed the experiments at nanoGUNE, highlights some of the advantages offered by the antenna device: “the excitation of graphene plasmons is purely optical, the device is compact and the phase and wavefronts of the graphene plasmons can be directly controlled by geometrically tailoring the antennas. This is essential to develop applications based on focusing and guiding of light”.

The news release describes a few of the more technical aspects of the research,

The research team also performed theoretical studies. Alexey Nikitin, Ikerbasque Research Fellow at nanoGUNE, performed the calculations and explains that “according to theory, the operation of our device is very efficient, and all the future technological applications will essentially depend upon fabrication limitations and quality of graphene”.

Based on Nikitin´s calculations, nanoGUNE’s Nanodevices group fabricated gold nanoantennas on graphene provided by Graphenea. The Nanooptics group then used the Neaspec near-field microscope to image how infrared graphene plasmons are launched and propagate along the graphene layer. In the images, the researchers saw that, indeed, waves on graphene propagate away from the antenna, like waves on a water surface when a stone is thrown in.

In order to test whether the two-dimensional propagation of light waves along a one-atom-thick carbon layer follow the laws of conventional optics, the researchers tried to focus and refract the waves. For the focusing experiment, they curved the antenna. The images then showed that the graphene plasmons focus away from the antenna, similar to the light beam that is concentrated with a lens or concave mirror.

The team also observed that graphene plasmons refract (bend) when they pass through a prism-shaped graphene bilayer, analogous to the bending of a light beam passing through a glass prism. “The big difference is that the graphene prism is only two atoms thick. It is the thinnest refracting optical prism ever”, says Rainer Hillenbrand. Intriguingly, the graphene plasmons are bent because the conductivity in the two-atom-thick prism is larger than in the surrounding one-atom-thick layer. In the future, such conductivity changes in graphene could be also generated by simple electronic means, allowing for highly efficient electric control of refraction, among others for steering applications.

Altogether, the experiments show that the fundamental and most important principles of conventional optics also apply for graphene plasmons, in other words, squeezed light propagating along a one-atom-thick layer of carbon atoms. Future developments based on these results could lead to extremely miniaturized optical circuits and devices that could be useful for sensing and computing, among other applications.

Here’s a link to and a citation for the paper,

Controlling graphene plasmons with resonant metal antennas and spatial conductivity patterns by P. Alonso-González, A. Y. Nikitin, F. Golmar, A. Centeno, A. Pesquera, S. Vélez, J. Chen, G. Navickaite, F. Koppens, A. Zurutuza, F. Casanova, L. E. Hueso, and R. Hillenbrand. Science (2014) DOI: 10.1126/science.1253202 Published Online May 22 2014

This paper is behind a paywall.

You can find out more about the Institute of Photonic Sciences (ICFO) here, Graphenea, a graphene producer, here, and CIC nanoGUNE here.

US Air Force wants to merge classical and quantum physics

The US Air Force wants to merge classical and quantum physics for practical purposes according to a May 5, 2014 news item on Azonano,

The Air Force Office of Scientific Research has selected the Harvard School of Engineering and Applied Sciences (SEAS) to lead a multidisciplinary effort that will merge research in classical and quantum physics and accelerate the development of advanced optical technologies.

Federico Capasso, Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, will lead this Multidisciplinary University Research Initiative [MURI] with a world-class team of collaborators from Harvard, Columbia University, Purdue University, Stanford University, the University of Pennsylvania, Lund University, and the University of Southampton.

The grant is expected to advance physics and materials science in directions that could lead to very sophisticated lenses, communication technologies, quantum information devices, and imaging technologies.

“This is one of the world’s strongest possible teams,” said Capasso. “I am proud to lead this group of people, who are internationally renowned experts in their fields, and I believe we can really break new ground.”

A May 1, 2014 Harvard University School of Engineering and Applied Sciences news release, which originated the news item, provides a description of the project’s focus, nanophotonics and metamaterials, along with some details of Capasso’s work in these areas (Note: Links have been removed),

The premise of nanophotonics is that light can interact with matter in unusual ways when the material incorporates tiny metallic or dielectric features that are separated by a distance shorter than the wavelength of the light. Metamaterials are engineered materials that exploit these phenomena, producing strange effects, enabling light to bend unnaturally, twist into a vortex, or disappear entirely. Yet the fabrication of thick, or bulk, metamaterials—that manipulate light as it passes through the material—has proven very challenging.

Recent research by Capasso and others in the field has demonstrated that with the right device structure, the critical manipulations can actually be confined to the very surface of the material—what they have dubbed a “metasurface.” These metasurfaces can impart an instantaneous shift in the phase, amplitude, and polarization of light, effectively controlling optical properties on demand. Importantly, they can be created in the lab using fairly common fabrication techniques.
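
The phase-shifting behavior described above is usually summarized by the Capasso group’s “generalized Snell’s law”: a metasurface that imparts a phase gradient dΦ/dx along its surface steers the transmitted beam away from the ordinary refraction angle. A small sketch with assumed values,

```python
# Generalized Snell's law for a phase-gradient metasurface:
#   n_t*sin(theta_t) = n_i*sin(theta_i) + (lambda0 / (2*pi)) * dPhi/dx
# Values below are assumptions for illustration.

import math

lam0 = 8e-6                       # free-space wavelength (m), mid-IR
n_i, n_t = 1.0, 1.0               # air on both sides
theta_i = 0.0                     # normal incidence (rad)
dphi_dx = 2 * math.pi / 60e-6     # full 2*pi phase ramp over 60 um

sin_t = (n_i * math.sin(theta_i) + lam0 * dphi_dx / (2 * math.pi)) / n_t
print(f"steered angle: {math.degrees(math.asin(sin_t)):.1f} deg")  # ~7.7
```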

At Harvard, the research has produced devices like an extremely thin, flat lens, and a material that absorbs 99.75% of infrared light. But, so far, such devices have been built to order—brilliantly suited to a single task, but not tunable.

This project, however, is focused on the future (Note: Links have been removed),

“Can we make a rapidly configurable metasurface so that we can change it in real time and quickly? That’s really a visionary frontier,” said Capasso. “We want to go all the way from the fundamental physics to the material building blocks and then the actual devices, to arrive at some sort of system demonstration.”

The proposed research also goes further. A key thrust of the project involves combining nanophotonics with research in quantum photonics. By exploiting the quantum effects of luminescent atomic impurities in diamond, for example, physicists and engineers have shown that light can be captured, stored, manipulated, and emitted as a controlled stream of single photons. These types of devices are essential building blocks for the realization of secure quantum communication systems and quantum computers. By coupling these quantum systems with metasurfaces—creating so-called quantum metasurfaces—the team believes it is possible to achieve an unprecedented level of control over the emission of photons.

“Just 20 years ago, the notion that photons could be manipulated at the subwavelength scale was thought to be some exotic thing, far fetched and of very limited use,” said Capasso. “But basic research opens up new avenues. In hindsight we know that new discoveries tend to lead to other technology developments in unexpected ways.”

The research team includes experts in theoretical physics, metamaterials, nanophotonic circuitry, quantum devices, plasmonics, nanofabrication, and computational modeling. Co-principal investigator Marko Lončar is the Tiantsai Lin Professor of Electrical Engineering at Harvard SEAS. Co-PI Nanfang Yu, Ph.D. ’09, developed expertise in metasurfaces as a student in Capasso’s Harvard laboratory; he is now an assistant professor of applied physics at Columbia. Additional co-PIs include Alexandra Boltasseva and Vladimir Shalaev at Purdue, Mark Brongersma at Stanford, and Nader Engheta at the University of Pennsylvania. Lars Samuelson (Lund University) and Nikolay Zheludev (University of Southampton) will also participate.

The bulk of the funding will support talented graduate students at the lead institutions.

The project, titled “Active Metasurfaces for Advanced Wavefront Engineering and Waveguiding,” is among 24 planned MURI awards selected from 361 white papers and 88 detailed proposals evaluated by a panel of experts; each award is subject to successful negotiation. The anticipated amount of the Harvard-led grant is up to $6.5 million for three to five years.

For anyone who’s not familiar (that includes me, anyway) with MURI awards, there’s this from Wikipedia (Note: links have been removed),

Multidisciplinary University Research Initiative (MURI) is a basic research program sponsored by the US Department of Defense (DoD). Currently each MURI award is about $1.5 million a year for five years.

I gather that in addition to the Air Force, the Army and the Navy also award MURI funds.

Data transmission at 1.44 terabits per second

It’s not only the amount of data we have which is increasing but the amount of data we want to transmit from one place to another. An April 14, 2014 news item on ScienceDaily describes a new technique designed to increase data transmission rates,

Miniaturized optical frequency comb sources allow for transmission of data streams of several terabits per second over hundreds of kilometers — this has now been demonstrated by researchers of Karlsruhe Institute of Technology (KIT) and the Swiss École Polytechnique Fédérale de Lausanne (EPFL) in an experiment presented in the journal Nature Photonics. The results may contribute to accelerating data transmission in large computing centers and worldwide communication networks.

In the study presented in Nature Photonics, the scientists of KIT, together with their EPFL colleagues, applied a miniaturized frequency comb as optical source. They reached a data rate of 1.44 terabits per second and the data was transmitted over a distance of 300 km. This corresponds to a data volume of more than 100 million telephone calls or up to 500,000 high-definition (HD) videos. For the first time, the study shows that miniaturized optical frequency comb sources are suited for coherent data transmission in the terabit range.

The April (?) 2014 KIT news release, which originated the news item, describes some of the current transmission technology’s constraints,

The amount of data generated and transmitted worldwide is growing continuously. With the help of light, data can be transmitted rapidly and efficiently. Optical communication is based on glass fibers, through which optical signals can be transmitted over large distances with hardly any losses. So-called wavelength division multiplexing (WDM) techniques allow for the transmission of several data channels independently of each other on a single optical fiber, thereby enabling extremely high data rates. For this purpose, the information is encoded on laser light of different wavelengths, i.e. different colors. However, scalability of such systems is limited, as presently an individual laser is required for each transmission channel. In addition, it is difficult to stabilize the wavelengths of these lasers, which requires additional spectral guard bands between the data channels to prevent crosstalk.
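
The appeal of WDM is that capacities multiply: total rate = channels x symbol rate x bits per symbol x polarizations. As an illustration, here is one channel plan that lands on the reported 1.44 terabits per second; these particular numbers are my assumption, and the actual plan is in the paper,

```python
# One illustrative WDM channel plan reaching the reported aggregate rate.
channels      = 20      # comb lines used as carriers (assumed)
symbol_rate   = 18e9    # symbols per second per channel (assumed)
bits_per_sym  = 2       # e.g. QPSK (assumed)
polarizations = 2       # polarization multiplexing (assumed)

total = channels * symbol_rate * bits_per_sym * polarizations
print(f"aggregate rate: {total / 1e12:.2f} Tbit/s")  # 1.44 Tbit/s
```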

The news release goes on to further describe the new technology using ‘combs’,

Optical frequency combs, for the development of which John Hall and Theodor W. Hänsch received the 2005 Nobel Prize in Physics, consist of many densely spaced spectral lines, the distances of which are identical and exactly known. So far, frequency combs have been used mainly for highly precise optical atomic clocks or optical rulers measuring optical frequencies with utmost precision. However, conventional frequency comb sources are bulky and costly devices and hence not very well suited for use in data transmission. Moreover, spacing of the spectral lines in conventional frequency combs often is too small and does not correspond to the channel spacing used in optical communications, which is typically larger than 20 GHz.

In their joint experiment, the researchers of KIT and the EPFL have now demonstrated that integrated optical frequency comb sources with large line spacings can be realized on photonic chips and applied for the transmission of large data volumes. For this purpose, they use an optical microresonator made of silicon nitride, into which laser light is coupled via a waveguide and stored for a long time. “Due to the high light intensity in the resonator, the so-called Kerr effect can be exploited to produce a multitude of spectral lines from a single continuous-wave laser beam, hence forming a frequency comb,” explains Jörg Pfeifle, who performed the transmission experiment at KIT. This method to generate these so-called Kerr frequency combs was discovered by Tobias Kippenberg, EPFL, in 2007. Kerr combs are characterized by a large optical bandwidth and can feature line spacings that perfectly meet the requirements of data transmission. The underlying microresonators are produced with the help of complex nanofabrication methods by the EPFL Center of Micronanotechnology. “We are among the few university research groups that are able to produce such samples,” comments Kippenberg. Work at EPFL was funded by the Swiss program “NCCR Nanotera” and the European Space Agency [ESA].

Scientists of KIT’s Institute of Photonics and Quantum Electronics (IPQ) and Institute of Microstructure Technology (IMT) are the first to use such Kerr frequency combs for high-speed data transmission. “The use of Kerr combs might revolutionize communication within data centers, where highly compact transmission systems of high capacity are required most urgently,” Christian Koos says.

Here’s a link to and a citation for the paper,

Coherent terabit communications with microresonator Kerr frequency combs by Joerg Pfeifle, Victor Brasch, Matthias Lauermann, Yimin Yu, Daniel Wegner, Tobias Herr, Klaus Hartinger, Philipp Schindler, Jingshi Li, David Hillerkuss, Rene Schmogrow, Claudius Weimann, Ronald Holzwarth, Wolfgang Freude, Juerg Leuthold, Tobias J. Kippenberg, & Christian Koos. Nature Photonics (2014) doi:10.1038/nphoton.2014.57 Published online 13 April 2014

This paper is behind a paywall.

Seeing quantum entanglement and using quantum entanglement to build a wormhole

Kudos to the team from the Vienna Center for Quantum Science and Technology for the great musical accompaniment on their video showing quantum entanglement in real time,

A Dec. 4, 2013 news item on Nanowerk provides more details,

Einstein called quantum entanglement “spooky action at a distance”. Now, a team from the Vienna Center for Quantum Science and Technology has reported imaging of entanglement events where the influence of the measurement of one particle on its distant partner particle is directly visible (“Real-Time Imaging of Quantum Entanglement”).

The Dec. 4, 2013 Andor news release, which originated the news item, gives more details about the team’s work and about the Andor camera which enabled it,

The key to their success is the Andor iStar 334T Intensified CCD (ICCD) camera, which is capable of very fast (nanosecond) and precise (picosecond) optical gating speeds. Unlike the relatively long microsecond exposure times of CCD and EMCCD cameras which inhibits their usefulness in ultra-high-speed imaging, this supreme level of temporal resolution made it possible for the team to perform a real-time coincidence imaging of entanglement for the first time.

“The Andor iStar ICCD camera is fast enough, and sensitive enough, to image in real-time the effect of the measurement of one photon on its entangled partner,” says Robert Fickler of the Institute for Quantum Optics and Quantum Information. “Using ICCD cameras to evaluate the number of photons from a registered intensity within a given region opens up new experimental possibilities to determine more efficiently the structure and properties of spatial modes from only single intensity images. Our results suggest that triggered ICCD cameras will advance quantum optics and quantum information experiments where complex structures of single photons need to be investigated with high spatio-temporal resolution.”

According to Antoine Varagnat, Product Specialist at Andor, “The experiment produces pairs of photons which are entangled so as to have opposite polarisations. For instance, if one of a pair has horizontal polarisation, the other has vertical, and so on. The first photon is sent to polarising glass that transmits photons of one angle only, followed by a detector to register photons which make it through the glass. The other photon is delayed by a fibre, then its entangled property is coherently transferred from the polarisation to the spatial mode and afterwards brought to the high-speed, ultra-sensitive iStar camera.

“The use of the ICCD camera allowed the team to demonstrate the high flexibility of the setup in creating any desired spatial-mode entanglement. Their results suggest that visual imaging in quantum optics not only provides a better intuitive understanding of entanglement but will also improve applications of quantum science,” concludes Varagnat.

Research into quantum entanglement was instigated in 1935 by Albert Einstein, Boris Podolsky and Nathan Rosen, in a paper critiquing quantum mechanics. Erwin Schrödinger also wrote several papers shortly afterwards. Although these first studies focused on the counterintuitive properties of entanglement with the aim of criticising quantum mechanics, entanglement was eventually verified experimentally and recognised as a valid, fundamental feature of quantum mechanics. Nowadays, the focus of the research has changed to its utilization in communications and computation, and has been used to realise quantum teleportation experimentally.

The team’s work is chronicled in this study,

Real-Time Imaging of Quantum Entanglement by Robert Fickler, Mario Krenn, Radek Lapkiewicz, Sven Ramelow & Anton Zeilinger. Scientific Reports 3, Article number: 1914 doi:10.1038/srep01914 Published 29 May 2013

This is an open access paper.

Meanwhile, researchers at the University of Washington (Seattle, Washington state) explore the quantum entanglement phenomenon with an eye to wormholes (from the Dec. 3, 2013 University of Washington news release [also on EurekAlert]),

Quantum entanglement, a perplexing phenomenon of quantum mechanics that Albert Einstein once referred to as “spooky action at a distance,” could be even spookier than Einstein perceived.

Physicists at the University of Washington and Stony Brook University in New York believe the phenomenon might be intrinsically linked with wormholes, hypothetical features of space-time that in popular science fiction can provide a much-faster-than-light shortcut from one part of the universe to another.

But here’s the catch: One couldn’t actually travel, or even communicate, through these wormholes, said Andreas Karch, a UW physics professor.

Quantum entanglement occurs when a pair or a group of particles interact in ways that dictate that each particle’s behavior is relative to the behavior of the others. In a pair of entangled particles, if one particle is observed to have a specific spin, for example, the other particle observed at the same time will have the opposite spin.

The “spooky” part is that, as research has confirmed, the relationship holds true no matter how far apart the particles are – across the room or across several galaxies. If the behavior of one particle changes, the behavior of both entangled particles changes simultaneously, no matter how far away they are.

Recent research indicated that the characteristics of a wormhole are the same as if two black holes were entangled, then pulled apart. Even if the black holes were on opposite sides of the universe, the wormhole would connect them.

Black holes, which can be as small as a single atom or many times larger than the sun, exist throughout the universe, but their gravitational pull is so strong that not even light can escape from them.

If two black holes were entangled, Karch said, a person outside the opening of one would not be able to see or communicate with someone just outside the opening of the other.

“The way you can communicate with each other is if you jump into your black hole, then the other person must jump into his black hole, and the interior world would be the same,” he said.

The work demonstrates an equivalence between quantum mechanics, which deals with physical phenomena at very tiny scales, and classical geometry – “two different mathematical machineries to go after the same physical process,” Karch said. The result is a tool scientists can use to develop broader understanding of entangled quantum systems.

“We’ve just followed well-established rules people have known for 15 years and asked ourselves, ‘What is the consequence of quantum entanglement?’”

The researchers have provided an illustration, which looks more like a ‘smiley face’ to me. Are wormholes smiley faces in space?

This illustration demonstrates a wormhole connecting two black holes. (Credit: Alan Stonebraker/American Physical Society)

Here’s a link to and a citation for the research paper on quantum entanglement and wormholes,

Holographic Dual of an Einstein-Podolsky-Rosen Pair has a Wormhole by Kristan Jensen and Andreas Karch. Phys. Rev. Lett. 111, 211602 (2013) [5 pages] Published 20 November 2013

This paper is behind a paywall.

ETA Dec. 11, 2013: There’s a news item today on Nanowerk which casts an interesting light on Andor,

Nanotechnology specialist Oxford Instruments is to take over Belfast-based scientific camera maker Andor in a £176 million deal.
The Andor board last night agreed a 525p a share offer, giving a 31 per cent premium over the closing price before Oxford’s initial 500p a share pitch in November.

The two companies have been in talks since July [2013].

Shares in Andor rose 10p to 515p and Oxford Instruments gained 8p to 1566p.

It looks like their Dec. 4, 2013 news release was a lead-up to this business news.

Dye your carbon nanotubes for better resolution

A team at the Université de Montréal has developed a technique for making nanoscale objects more easily seen. From a Dec. 2, 2013 news item on Nanowerk (Note: A link has been removed),

Richard Martel and his research team at the Department of Chemistry of the Université de Montréal have discovered a method to improve detection of the infinitely small. Their discovery is presented in the November 24 online edition of the journal Nature Photonics (“Giant Raman scattering from J-aggregated dyes inside carbon nanotubes for multispectral imaging”).

“Raman scattering provides information on the ways molecules vibrate, which is equivalent to taking their fingerprint. It’s a bit like a bar code,” said the internationally renowned professor. “Raman signals are specific for each molecule and thus useful in identifying these molecules.”

The discovery by Martel’s team is that Raman scattering of dye-nanotube particles is so large that a single particle of this type can be located and identified. All one needs is an optical scanner capable of detecting this particle, much like a fingerprint.

I haven’t been able to track down the English language version of the Dec. 2, 2013 Université de Montréal news release, but here are some excerpts from the French language version by Dominique Nancy, translated here into English,

By aligning dye molecules encapsulated inside a carbon nanotube, the researchers succeeded in amplifying the Raman signal of these dyes, until now too weak to allow their detection. The article presents experimental data on an extraordinary scattering of visible light from a nanometre-sized particle.

“Raman scattering contains information about the vibrational modes of molecules, which amounts to taking their fingerprints. It’s a bit like a bar code,” explains the internationally renowned professor. “The Raman signal is specific to each molecule and therefore very useful for spotting it.”

Raman scattering is an optical phenomenon uncovered in 1928 by the physicist Chandrashekhara Venkata Râman. The effect consists of the inelastic scattering of a photon, that is, the physical phenomenon by which a medium can modify the frequency of the light passing through it. ….

….

But until now, the Raman signal of molecules was too weak to meet the needs of optical imaging effectively. Researchers therefore turned to other optical techniques that are more sensitive but less precise, since they lack a “bar code.” “It is nevertheless technically possible to see Raman signals with a spectrometer when the concentration of molecules is high enough,” Martel notes. “But that limits the applications of Raman.”

….

Composed of about a hundred dye molecules aligned inside the cylinder, the nanotracer is 50,000 times smaller than a hair. It measures roughly one nanometre in diameter and 500 in length. And yet the dye particles encapsulated in the carbon nanotube give a Raman signal a million times more intense than that of the other molecules around the object.

One can also imagine a customs officer scanning our passports in a multispectral (multiple-signal) Raman mode. These nanotracers could also be used in banknote inks, making counterfeiting nearly impossible.

The beauty of it, Richard Martel says, is that the phenomenon is general: several types of dyes can serve to make the nanotracers, each with a different “bar code.” “So far we have made more than 10 tracers and there seems to be no limit,” he says. “In principle, we could therefore create as many nanotracers as there are bacteria and use this principle to detect them with a microscope operating in Raman mode.”

As I have noted many times here, my linguistic skills are shaky, but here’s my overview of the key points:

Due to the dye molecules the researchers have encapsulated in the carbon nanotubes, it is possible to amplify the Raman signal enough to make nanoscale objects optically detectable.

With the dyed carbon nanotubes, the new technique offers the equivalent of a unique fingerprint or, as Martel also describes it, a unique bar code for nanoscale objects. The standard Raman technique can detect molecules when their concentration is high enough, but it is not powerful enough to optically detect nanoscale objects at lower concentrations or with any precision, i.e., the ability to pick out a unique ‘fingerprint’ or ‘bar code’.

The nanotracer, the dyed carbon nanotube, is about 1/50,000 the width of a human hair, measuring roughly one nanometre in diameter and 500 nanometres in length.

Martel sees a number of applications for this new technique, including biomedical ones.
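To make the ‘bar code’ idea concrete, here is a minimal sketch of how a measured Raman spectrum might be matched against a library of tracer fingerprints. The peak positions and library names below are invented placeholders rather than the dye signatures from the paper, and matching peak positions within a tolerance is far cruder than real spectral fitting; the small helper also shows the standard conversion from laser and scattered wavelengths to a Raman shift in wavenumbers.

```python
import numpy as np

# Hypothetical Raman "bar codes": peak positions in wavenumbers (cm^-1).
# These values are placeholders, not the dye-nanotube fingerprints
# reported in the paper.
LIBRARY = {
    "tracer_A": np.array([450.0, 1150.0, 1520.0]),
    "tracer_B": np.array([520.0, 1008.0, 1615.0]),
    "tracer_C": np.array([610.0, 1340.0, 1590.0]),
}

def raman_shift_cm1(laser_nm, scattered_nm):
    """Raman shift in cm^-1 from incident and scattered wavelengths."""
    return 1e7 / laser_nm - 1e7 / scattered_nm

def identify(measured_peaks, library, tol=10.0):
    """Return the library entry whose peaks best match the measurement.

    Scores a candidate by how many of its peaks lie within `tol` cm^-1
    of some measured peak; real pipelines would fit intensities and
    line shapes rather than just positions.
    """
    best_name, best_score = None, -1
    for name, peaks in library.items():
        score = sum(float(np.min(np.abs(measured_peaks - p))) < tol
                    for p in peaks)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# A 532 nm laser line scattered to 576.5 nm is a shift of ~1451 cm^-1.
print(f"example shift: {raman_shift_cm1(532.0, 576.5):.0f} cm^-1")
measured = np.array([448.0, 1152.0, 1523.0])   # synthetic measurement
print(identify(measured, LIBRARY))             # -> ('tracer_A', 3)
```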

You may want to take a look at the news item on Nanowerk for a better and more complete translation.

Here’s a link to and a citation for the English language paper,

Giant Raman scattering from J-aggregated dyes inside carbon nanotubes for multispectral imaging by E. Gaufrès, N. Y.-Wa Tang, F. Lapointe, J. Cabana, M.-A. Nadon, N. Cottenye, F. Raymond, T. Szkopek, & R. Martel. Nature Photonics (2013) doi:10.1038/nphoton.2013.309 Published online 24 November 2013

This article is behind a paywall.

Bejeweled and bedazzled but not bewitched, bothered, or bewildered at Northwestern University

When discussing DNA (deoxyribonucleic acid) one doesn’t usually expect to encounter gems as one does in a Nov. 28, 2013 news item on Azonano,

Nature builds flawless diamonds, sapphires and other gems. Now a Northwestern University [located in Evanston, Illinois, US] research team is the first to build near-perfect single crystals out of nanoparticles and DNA, using the same structure favored by nature.

The Nov. 27, 2013 Northwestern University news release by Megan Fellman (also on EurekAlert), which originated the news item, explains why single crystals are of such interest,

“Single crystals are the backbone of many things we rely on — diamonds for beauty as well as industrial applications, sapphires for lasers and silicon for electronics,” said nanoscientist Chad A. Mirkin. “The precise placement of atoms within a well-defined lattice defines these high-quality crystals.

“Now we can do the same with nanomaterials and DNA, the blueprint of life,” Mirkin said. “Our method could lead to novel technologies and even enable new industries, much as the ability to grow silicon in perfect crystalline arrangements made possible the multibillion-dollar semiconductor industry.”

His research group developed the “recipe” for using nanomaterials as atoms, DNA as bonds and a little heat to form tiny crystals. This single-crystal recipe builds on superlattice techniques Mirkin’s lab has been developing for nearly two decades.

(I wrote about Mirkin’s nanoparticle DNA work in the context of his proposed periodic table of modified nucleic acid nanoparticles in a July 5, 2013 posting.) The news release goes on to describe Mirkin’s most recent work,

In this recent work, Mirkin, an experimentalist, teamed up with Monica Olvera de la Cruz, a theoretician, to evaluate the new technique and develop an understanding of it. Given a set of nanoparticles and a specific type of DNA, Olvera de la Cruz showed they can accurately predict the 3-D structure, or crystal shape, into which the disordered components will self-assemble.

The general set of instructions gives researchers unprecedented control over the type and shape of crystals they can build. The Northwestern team worked with gold nanoparticles, but the recipe can be applied to a variety of materials, with potential applications in the fields of materials science, photonics, electronics and catalysis.

A single crystal has order: its crystal lattice is continuous and unbroken throughout. The absence of defects in the material can give these crystals unique mechanical, optical and electrical properties, making them very desirable.

In the Northwestern study, strands of complementary DNA act as bonds between disordered gold nanoparticles, transforming them into an orderly crystal. The researchers determined that the ratio of the DNA linker’s length to the size of the nanoparticle is critical.

“If you get the right ratio it makes a perfect crystal — isn’t that fun?” said Olvera de la Cruz, who also is a professor of chemistry in the Weinberg College of Arts and Sciences. “That’s the fascinating thing, that you have to have the right ratio. We are learning so many rules for calculating things that other people cannot compute in atoms, in atomic crystals.”

The ratio affects the energy of the faces of the crystals, which determines the final crystal shape. Ratios that don’t follow the recipe lead to large fluctuations in energy and result in a sphere, not a faceted crystal, she explained. With the correct ratio, the energies fluctuate less and result in a crystal every time.

“Imagine having a million balls of two colors, some red, some blue, in a container, and you try shaking them until you get alternating red and blue balls,” Mirkin explained. “It will never happen.

“But if you attach DNA that is complementary to nanoparticles — the red has one kind of DNA, say, the blue its complement — and now you shake, or in our case, just stir in water, all the particles will find one another and link together,” he said. “They beautifully assemble into a three-dimensional crystal that we predicted computationally and realized experimentally.”

To achieve a self-assembling single crystal in the lab, the research team reports taking two sets of gold nanoparticles outfitted with complementary DNA linker strands. Working with approximately 1 million nanoparticles in water, they heated the solution to a temperature just above the DNA linkers’ melting point and then slowly cooled the solution to room temperature, which took two or three days.

The very slow cooling process encouraged the single-stranded DNA to find its complement, resulting in a high-quality single crystal approximately three microns wide. “The process gives the system enough time and energy for all the particles to arrange themselves and find the spots they should be in,” Mirkin said.
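In protocol terms, the assembly step is essentially a very slow temperature ramp. The sketch below only schematises that schedule; the starting temperature, duration and number of steps are placeholders, since the news release specifies only “just above the DNA linkers’ melting point,” cooled to room temperature over two or three days.

```python
def cooling_schedule(t_start_c=45.0, t_end_c=25.0, hours=60.0, steps=12):
    """Yield (time in hours, temperature in deg C) setpoints for a
    linear slow-cooling ramp. Real protocols may ramp non-linearly or
    hold near the melting point; this sketch ignores that."""
    for i in range(steps + 1):
        frac = i / steps
        yield frac * hours, t_start_c + frac * (t_end_c - t_start_c)

# Placeholder numbers: start just above an assumed linker melting
# point (45 C, purely illustrative) and reach room temperature in 60 h.
for t_h, temp_c in cooling_schedule():
    print(f"t = {t_h:5.1f} h  ->  {temp_c:4.1f} C")
```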

The researchers determined that the length of DNA connected to each gold nanoparticle can’t be much longer than the size of the nanoparticle. In the study, the gold nanoparticles varied from five to 20 nanometers in diameter; for each, the DNA length that led to crystal formation was about 18 base pairs and six single-base “sticky ends.”
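As a back-of-the-envelope check of that length rule, one can estimate the linker’s contour length from the usual rise of about 0.34 nm per base pair for duplex DNA (an assumption; the single-stranded sticky ends are floppier, and the hybridised shell around a real particle will differ):

```python
# Rough contour-length estimate for the DNA linkers described above.
RISE_NM_PER_BP = 0.34  # assumed rise per base pair for duplex DNA

def linker_length_nm(duplex_bp=18, sticky_bases=6):
    """Crude contour length of one linker: duplex plus sticky end,
    treating every base as duplex-like (an oversimplification)."""
    return (duplex_bp + sticky_bases) * RISE_NM_PER_BP

for diameter_nm in (5, 10, 20):
    length = linker_length_nm()
    print(f"particle {diameter_nm:>2} nm: linker ~{length:.1f} nm, "
          f"linker/particle ratio ~{length / diameter_nm:.2f}")
```

On these assumptions the roughly 24-base linker comes out near 8 nm, on the same scale as the 5-20 nm particles themselves.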

“There’s no reason we can’t grow extraordinarily large single crystals in the future using modifications of our technique,” said Mirkin, who also is a professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering and director of Northwestern’s International Institute for Nanotechnology.

Here’s a link to and a citation for the published paper,

DNA-mediated nanoparticle crystallization into Wulff polyhedra by Evelyn Auyeung, Ting I. N. G. Li, Andrew J. Senesi, Abrin L. Schmucker, Bridget C. Pals, Monica Olvera de la Cruz, & Chad A. Mirkin. Nature (2013) doi:10.1038/nature12739 Published online 27 November 2013

This article is behind a paywall.

Points to anyone who recognized the song title (Bewitched, Bothered and Bewildered) embedded in the heading for this posting.