Tag Archives: Institute of Electrical and Electronics Engineers

First carbon nanotube mirrors for CubeSat telescope

A July 12, 2016 news item on phys.org describes a project that could lead to the first carbon nanotube mirrors used in a CubeSat telescope in space,

A lightweight telescope that a team of NASA scientists and engineers is developing specifically for CubeSat scientific investigations could become the first to carry a mirror made of carbon nanotubes in an epoxy resin.

Led by Theodor Kostiuk, a scientist at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center in Greenbelt, Maryland, the technology-development effort is aimed at giving the scientific community a compact, reproducible, and relatively inexpensive telescope that would fit easily inside a CubeSat. Individual CubeSats measure four inches on a side.

John Kolasinski (left), Ted Kostiuk (center), and Tilak Hewagama (right) hold mirrors made of carbon nanotubes in an epoxy resin. The mirror is being tested for potential use in a lightweight telescope specifically for CubeSat scientific investigations. Credit: NASA/W. Hrybyk

A July 12, 2016 US National Aeronautics and Space Administration (NASA) news release, which originated the news item, provides more information about CubeSats,

Small satellites, including CubeSats, are playing an increasingly larger role in exploration, technology demonstration, scientific research and educational investigations at NASA. These miniature satellites provide a low-cost platform for NASA missions, including planetary space exploration; Earth observations; fundamental Earth and space science; and developing precursor science instruments like cutting-edge laser communications, satellite-to-satellite communications and autonomous movement capabilities. They also allow an inexpensive means to engage students in all phases of satellite development, operation and exploitation through real-world, hands-on research and development experience on NASA-funded rideshare launch opportunities.

Under this particular R&D effort, Kostiuk’s team seeks to develop a CubeSat telescope that would be sensitive to the ultraviolet, visible, and infrared wavelength bands. It would be equipped with commercial-off-the-shelf spectrometers and imagers and would be ideal as an “exploratory tool for quick looks that could lead to larger missions,” Kostiuk explained. “We’re trying to exploit commercially available components.”

While the concept won’t get the same scientific return as, say, a flagship-style mission or a large, ground-based telescope, it could enable first-order scientific investigations or be flown as a constellation of similarly equipped CubeSats, added Kostiuk.

With funding from Goddard’s Internal Research and Development program, the team has created a laboratory optical bench made up of three commercially available, miniaturized spectrometers optimized for the ultraviolet, visible, and near-infrared wavelength bands. The spectrometers are connected via fiber optic cables to the focused beam of a three-inch diameter carbon-nanotube mirror. The team is using the optical bench to test the telescope’s overall design.

The news release then describes the carbon nanotube mirrors,

By all accounts, the new-fangled mirror could prove central to creating a low-cost space telescope for a range of CubeSat scientific investigations.

Unlike most telescope mirrors, which are made of glass or aluminum, this particular optic is made of carbon nanotubes embedded in an epoxy resin. Sub-micron-size, cylindrically shaped carbon nanotubes exhibit extraordinary strength and unique electrical properties, and are efficient conductors of heat. Owing to these unusual properties, the material is valuable to nanotechnology, electronics, optics, and other fields of materials science and, as a consequence, is being used as an additive in various structural materials.

“No one has been able to make a mirror using a carbon-nanotube resin,” said Peter Chen, a Goddard contractor and president of Lightweight Telescopes, Inc., a Columbia, Maryland-based company working with the team to create the CubeSat-compatible telescope.

“This is a unique technology currently available only at Goddard,” he continued. “The technology is too new to fly in space, and first must go through the various levels of technological advancement. But this is what my Goddard colleagues (Kostiuk, Tilak Hewagama, and John Kolasinski) are trying to accomplish through the CubeSat program.”

The use of a carbon-nanotube optic in a CubeSat telescope offers a number of advantages, said Hewagama, who contacted Chen upon learning of a NASA Small Business Innovative Research program award to Chen’s company to further advance the mirror technology. In addition to being lightweight, highly stable, and easily reproducible, carbon-nanotube mirrors do not require polishing, a time-consuming and often expensive process typically required to assure a smooth, perfectly shaped mirror, said Kolasinski, an engineer and science collaborator on the project.

To make a mirror, technicians simply pour a mixture of epoxy and carbon nanotubes into a mandrel, or mold, fashioned to meet a particular optical prescription. They then heat the mold to cure and harden the epoxy. Once set, the mirror is coated with a reflective layer of aluminum and silicon dioxide.

“After making a specific mandrel or mold, many tens of identical low-mass, highly uniform replicas can be produced at low cost,” Chen said. “Complete telescope assemblies can be made this way, which is the team’s main interest. For the CubeSat program, this capability will enable many spacecraft to be equipped with identical optics and different detectors for a variety of experiments. They also can be flown in swarms and constellations.”

There could be other applications for these carbon nanotube mirrors according to the news release,

A CubeSat telescope is one possible application for the optics technology, Chen added.

He believes it also would work for larger telescopes, particularly those composed of multiple mirror segments. Eighteen hexagonal mirrors, for example, form the James Webb Space Telescope’s 21-foot primary mirror, and each of the twin telescopes at the Keck Observatory on Mauna Kea, Hawaii, contains 36 segments that form a 32-foot mirror.

Many of the mirror segments in these telescopes are identical and can therefore be produced using a single mandrel. This approach avoids the need to grind and polish many individual segments to the same shape and focal length, thus potentially leading to significant savings in schedule and cost.

Moreover, carbon-nanotube mirrors can be made into ‘smart optics’. To maintain a single perfect focus in the Keck telescopes, for example, each mirror segment has several externally mounted actuators that deform the mirrors into the specific shapes required at different telescope orientations.

In the case of carbon-nanotube mirrors, the actuators can be formed into the optics at the time of fabrication. This is accomplished by applying electric fields to the resin mixture before cure, which leads to the formation of carbon-nanotube chains and networks. After curing, technicians then apply power to the mirror, thereby changing the shape of the optical surface. This concept has already been proven in the laboratory.
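The closed-loop idea behind such actuated mirrors can be sketched in a few lines of code. Everything below is illustrative rather than taken from the team’s work: the linear influence model, the measurement points, and all the numbers are invented for the example. The sketch solves a small least-squares problem for the actuator commands that best cancel a measured surface error.

```python
# Toy closed-loop correction for an actuated ("smart") mirror.
# Assumption (hypothetical, not from the article): surface change at each
# measurement point is linear in the commands, i.e. influence_matrix @ commands.

def solve_2x2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def correction_commands(influence, error):
    """Least-squares actuator commands that best cancel the surface error.

    Solves the normal equations (A^T A) c = A^T e for two actuators.
    """
    n = len(influence)
    ata = [[sum(influence[k][i] * influence[k][j] for k in range(n))
            for j in range(2)] for i in range(2)]
    ate = [sum(influence[k][i] * error[k] for k in range(n)) for i in range(2)]
    return solve_2x2(ata, ate)

# Surface error (micrometres) measured at three points on the mirror, and
# each actuator's effect at those points -- made-up numbers for illustration.
error = [0.30, -0.10, 0.20]
influence = [[1.0, 0.2],
             [0.5, 0.8],
             [0.1, 1.0]]

commands = correction_commands(influence, error)
residual = [error[k] - sum(influence[k][i] * commands[i] for i in range(2))
            for k in range(3)]
print("commands:", commands)
print("residual error:", residual)
```

Since the uncorrected state (all commands zero) is always a candidate, the least-squares residual can never be worse than the original error; real segmented telescopes repeat this measure-and-correct cycle continuously as the structure flexes.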

“This technology can potentially enable very large-area technically active optics in space,” Chen said. “Applications address everything from astronomy and Earth observing to deep-space communications.”

Dexter Johnson provides some additional tidbits in his July 14, 2016 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the CubeSat mirrors.

Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped, as there was an additional shooting of police officers in Baton Rouge on July 17, 2016. (There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally: Robots, Dallas, ethics, and killing: In the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has arisen.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American men were shot and killed in apparently unprovoked shootings by police: Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016 and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There’s more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American male, had served in the US Army Reserve and been deployed in Afghanistan (there’s more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of Black Lives Matter, a movement started in the US in 2013.

Getting back to robots, most of the material I’ve seen about ‘killing or killer’ robots has so far involved industrial accidents (very few to date) and ethical issues for self-driven cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot as a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever in a July 8, 2016 Agence France-Presse piece on phys.org focuses on the technology aspect,

The “bomb robot” killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores the growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.

Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question about the ethics, Mike Masnick’s July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading for the Dallas Police Department’s actions, as well as, asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don’t think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remain unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles; if you’re interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

Pushing efficiency of perovskite-based solar cells to 31%

This atomic force microscopy image of the grainy surface of a perovskite solar cell reveals a new path to much greater efficiency. Individual grains are outlined in black, low-performing facets are red, and high-performing facets are green. A big jump in efficiency could possibly be obtained if the material can be grown so that more high-performing facets develop. (Credit: Berkeley Lab)

It’s always fascinating to observe a trend (or a craze) in science, an endeavour that outsiders (like me) tend to think of as impervious to such vagaries. Perovskite seems to be making its way past the trend/craze phase and moving into a more meaningful phase. From a July 4, 2016 news item on Nanowerk,

Scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have discovered a possible secret to dramatically boosting the efficiency of perovskite solar cells hidden in the nanoscale peaks and valleys of the crystalline material.

Solar cells made from compounds that have the crystal structure of the mineral perovskite have captured scientists’ imaginations. They’re inexpensive and easy to fabricate, like organic solar cells. Even more intriguing, the efficiency at which perovskite solar cells convert photons to electricity has increased more rapidly than that of any other material to date, starting at three percent in 2009 — when researchers first began exploring the material’s photovoltaic capabilities — to 22 percent today. This is in the ballpark of the efficiency of silicon solar cells.

Now, as reported online July 4, 2016 in the journal Nature Energy (“Facet-dependent photovoltaic efficiency variations in single grains of hybrid halide perovskite”), a team of scientists from the Molecular Foundry and the Joint Center for Artificial Photosynthesis, both at Berkeley Lab, found a surprising characteristic of a perovskite solar cell that could be exploited for even higher efficiencies, possibly up to 31 percent.

A July 4, 2016 Berkeley Lab news release (also on EurekAlert), which originated the news item, details the research,

Using photoconductive atomic force microscopy, the scientists mapped two properties on the active layer of the solar cell that relate to its photovoltaic efficiency. The maps revealed a bumpy surface composed of grains about 200 nanometers in length, and each grain has multi-angled facets like the faces of a gemstone.

Unexpectedly, the scientists discovered a huge difference in energy conversion efficiency between facets on individual grains. They found poorly performing facets adjacent to highly efficient facets, with some facets approaching the material’s theoretical energy conversion limit of 31 percent.

The scientists say these top-performing facets could hold the secret to highly efficient solar cells, although more research is needed.

“If the material can be synthesized so that only very efficient facets develop, then we could see a big jump in the efficiency of perovskite solar cells, possibly approaching 31 percent,” says Sibel Leblebici, a postdoctoral researcher at the Molecular Foundry.

Leblebici works in the lab of Alexander Weber-Bargioni, who is a corresponding author of the paper that describes this research. Ian Sharp, also a corresponding author, is a Berkeley Lab scientist at the Joint Center for Artificial Photosynthesis. Other Berkeley Lab scientists who contributed include Linn Leppert, Francesca Toma, and Jeff Neaton, the director of the Molecular Foundry.

A team effort

The research started when Leblebici was searching for a new project. “I thought perovskites are the most exciting thing in solar right now, and I really wanted to see how they work at the nanoscale, which has not been widely studied,” she says.

She didn’t have to go far to find the material. For the past two years, scientists at the nearby Joint Center for Artificial Photosynthesis have been making thin films of perovskite-based compounds, and studying their ability to convert sunlight and CO2 into useful chemicals such as fuel. Switching gears, they created perovskite solar cells composed of methylammonium lead iodide. They also analyzed the cells’ performance at the macroscale.

The scientists also made a second set of half cells that didn’t have an electrode layer. They packed eight of these cells on a thin film measuring one square centimeter. These films were analyzed at the Molecular Foundry, where researchers mapped the cells’ surface topography at a resolution of ten nanometers. They also mapped two properties that relate to the cells’ photovoltaic efficiency: photocurrent generation and open circuit voltage.

This was performed using a state-of-the-art atomic force microscopy technique, developed in collaboration with Park Systems, which utilizes a conductive tip to scan the material’s surface. The method also eliminates friction between the tip and the sample. This is important because the material is so rough and soft that friction can damage the tip and sample, and cause artifacts in the photocurrent.

Surprise discovery could lead to better solar cells

The resulting maps revealed an order of magnitude difference in photocurrent generation, and a 0.6-volt difference in open circuit voltage, between facets on the same grain. In addition, facets with high photocurrent generation had high open circuit voltage, and facets with low photocurrent generation had low open circuit voltage.

“This was a big surprise. It shows, for the first time, that perovskite solar cells exhibit facet-dependent photovoltaic efficiency,” says Weber-Bargioni.

Adds Toma, “These results open the door to exploring new ways to control the development of the material’s facets to dramatically increase efficiency.”

In practice, the facets behave like billions of tiny solar cells, all connected in parallel. As the scientists discovered, some cells operate extremely well and others very poorly. In this scenario, the current flows towards the bad cells, lowering the overall performance of the material. But if the material can be optimized so that only highly efficient facets interface with the electrode, the losses incurred by the poor facets would be eliminated.
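That parallel-connection picture can be made concrete with a toy circuit model. Everything below is an illustrative assumption, not data from the paper: each facet is treated as an ideal solar cell, I(V) = Isc - I0(exp(V/Vt) - 1), all facets share one voltage because they are wired in parallel through the electrode, and the combined open-circuit voltage is found where the summed current crosses zero. Adding one leaky, low-performing facet drags the shared open-circuit voltage far below what the good facets could deliver alone.

```python
import math

# Toy parallel-facet model (hypothetical parameters, not from the paper).
VT = 0.026  # thermal voltage at room temperature, volts

def facet_current(v, isc, i0):
    """Ideal solar-cell current: photocurrent minus the diode dark current."""
    return isc - i0 * (math.exp(v / VT) - 1.0)

def open_circuit_voltage(facets):
    """Bisect for the shared voltage where the summed facet current is zero."""
    lo, hi = 0.0, 1.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        total = sum(facet_current(mid, isc, i0) for isc, i0 in facets)
        if total > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

good = (1.0, 1e-19)  # high-performing facet: strong photocurrent, tiny dark current
bad = (0.1, 1e-9)    # poor facet: weak photocurrent, very leaky diode

voc_good_only = open_circuit_voltage([good, good])
voc_mixed = open_circuit_voltage([good, bad])
print(f"Voc, two good facets:  {voc_good_only:.2f} V")
print(f"Voc, good + bad facet: {voc_mixed:.2f} V")
```

With these made-up numbers the single leaky facet costs roughly half a volt of open-circuit voltage, the same mechanism (current flowing into the bad cells) that the article describes at the grain scale.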

“This means, at the macroscale, the material could possibly approach its theoretical energy conversion limit of 31 percent,” says Sharp.

A theoretical model that describes the experimental results predicts these facets should also impact the emission of light when used as an LED. …

The Molecular Foundry is a DOE Office of Science User Facility located at Berkeley Lab. The Joint Center for Artificial Photosynthesis is a DOE Energy Innovation Hub led by the California Institute of Technology in partnership with Berkeley Lab.

Here’s a link to and a citation for the paper,

Facet-dependent photovoltaic efficiency variations in single grains of hybrid halide perovskite by Sibel Y. Leblebici, Linn Leppert, Yanbo Li, Sebastian E. Reyes-Lillo, Sebastian Wickenburg, Ed Wong, Jiye Lee, Mauro Melli, Dominik Ziegler, Daniel K. Angell, D. Frank Ogletree, Paul D. Ashby, Francesca M. Toma, Jeffrey B. Neaton, Ian D. Sharp, & Alexander Weber-Bargioni. Nature Energy 1, Article number: 16093 (2016). doi:10.1038/nenergy.2016.93 Published online: 04 July 2016

This paper is behind a paywall.

Dexter Johnson’s July 6, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) presents his take on the impact that this new finding may have,

The rise of the crystal perovskite as a potential replacement for silicon in photovoltaics has been impressive over the last decade, with its conversion efficiency improving from 3.8 to 22.1 percent over that time period. Nonetheless, there has been a vague sense that this rise is beginning to peter out of late, largely because when a solar cell made from perovskite gets larger than 1 square centimeter the best conversion efficiency had been around 15.6 percent. …

Skin as a touchscreen (“smart” hands)

An April 11, 2016 news item on phys.org highlights some research presented at the IEEE (Institute of Electrical and Electronics Engineers) Haptics (touch) Symposium 2016,

Using your skin as a touchscreen has been brought a step closer after UK scientists successfully created tactile sensations on the palm using ultrasound sent through the hand.

The University of Sussex-led study – funded by the Nokia Research Centre and the European Research Council – is the first to find a way for users to feel what they are doing when interacting with displays projected on their hand.

This solves one of the biggest challenges for technology companies who see the human body, particularly the hand, as the ideal display extension for the next generation of smartwatches and other smart devices.

Current ideas rely on vibrations or pins, which both need contact with the palm to work, interrupting the display.

However, this new innovation, called SkinHaptics, sends sensations to the palm from the other side of the hand, leaving the palm free to display the screen.

An April 11, 2016 University of Sussex press release (also on EurekAlert) by James Hakmer, which originated the news item, provides more detail,

The device uses ‘time-reversal’ processing to send ultrasound waves through the hand. This technique is effectively like ripples in water but in reverse – the waves become more targeted as they travel through the hand, ending at a precise point on the palm.
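The “ripples in reverse” description corresponds to a standard signal-processing trick: record how a pulse arrives after being smeared out by the medium, time-reverse the recording, and send it back, so that the multipath contributions re-align into a sharp peak at the focus. The sketch below is a generic one-dimensional illustration of that idea, not the SkinHaptics implementation; the medium’s impulse response is made up.

```python
# Illustrative 1-D time-reversal focusing sketch (hypothetical numbers).
# A pulse through a multipath medium arrives smeared; re-emitting the
# time-reversed recording through the same medium refocuses the energy.

def convolve(x, h):
    """Direct (full) convolution of two sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

pulse = [1.0]                        # a unit impulse from the source
medium = [0.5, 0.0, 0.3, -0.2, 0.4]  # made-up multipath impulse response

received = convolve(pulse, medium)   # smeared arrival at the receiver
replay = list(reversed(received))    # time-reverse the recording
focused = convolve(replay, medium)   # send it back through the same medium

peak = max(abs(v) for v in focused)
print("smeared max |amplitude|:  ", max(abs(v) for v in received))
print("refocused peak amplitude: ", peak)
```

Mathematically, the refocused signal is the autocorrelation of the medium’s response, which is guaranteed to peak at zero lag; that is why the waves “become more targeted as they travel through the hand.”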

It draws on a rapidly growing field of technology called haptics, which is the science of applying touch sensation and control to interaction with computers and technology.

Professor Sriram Subramanian, who leads the research team at the University of Sussex, says that technologies will inevitably need to engage other senses, such as touch, as we enter what designers are calling an ‘eye-free’ age of technology.

He says: “Wearables are already big business and will only get bigger. But as we wear technology more, it gets smaller and we look at it less, and therefore multisensory capabilities become much more important.

“If you imagine you are on your bike and want to change the volume control on your smartwatch, the interaction space on the watch is very small. So companies are looking at how to extend this space to the hand of the user.

“What we offer people is the ability to feel their actions when they are interacting with the hand.”

The findings were presented at the IEEE Haptics Symposium [April 8 – 11] 2016 in Philadelphia, USA, by the study’s co-author Dr Daniel Spelmezan, a research assistant in the Interact Lab.

There is a video of the work (I was unable to activate sound, if any accompanies this video),

The consequence of watching this silent video was that I found the whole thing somewhat mysterious.

2-D boron as a superconductor

A March 31, 2016 news item on ScienceDaily highlights some research into 2D (two-dimensional) boron at Rice University (Texas, US),

Rice University scientists have determined that two-dimensional boron is a natural low-temperature superconductor. In fact, it may be the only 2-D material with such potential.

Rice theoretical physicist Boris Yakobson and his co-workers published their calculations that show atomically flat boron is metallic and will transmit electrons with no resistance. …

The hitch, as with most superconducting materials, is that it loses its resistivity only when very cold, in this case between 10 and 20 kelvins (roughly, minus-430 degrees Fahrenheit). But for making very small superconducting circuits, it might be the only game in town.

A March 30, 2016 Rice University news release (also on EurekAlert but dated March 31, 2016), which originated the news item, expands on the theme,

The basic phenomenon of superconductivity has been known for more than 100 years, said Evgeni Penev, a research scientist in the Yakobson group, but had not been tested for its presence in atomically flat boron.

“It’s well-known that the material is pretty light because the atomic mass is small,” Penev said. “If it’s metallic too, these are two major prerequisites for superconductivity. That means at low temperatures, electrons can pair up in a kind of dance in the crystal.”

“Lower dimensionality is also helpful,” Yakobson said. “It may be the only, or one of very few, two-dimensional metals. So there are three factors that gave the initial motivation for us to pursue the research. Then we just got more and more excited as we got into it.”

Electrons with opposite momenta and spins effectively become Cooper pairs; they attract each other at low temperatures with the help of lattice vibrations, the so-called “phonons,” and give the material its superconducting properties, Penev said. “Superconductivity becomes a manifestation of the macroscopic wave function that describes the whole sample. It’s an amazing phenomenon,” he said.

It wasn’t entirely by chance that the first theoretical paper establishing conductivity in a 2-D material appeared at roughly the same time the first samples of the material were made by laboratories in the United States and China. In fact, an earlier paper by the Yakobson group had offered a road map for doing so.

That 2-D boron has now been produced is a good thing, according to Yakobson and lead authors Penev and Alex Kutana, a postdoctoral researcher at Rice. “We’ve been working to characterize boron for years, from cage clusters to nanotubes to planar sheets, but the fact that these papers appeared so close together means these labs can now test our theories,” Yakobson said.

“In principle, this work could have been done three years ago as well,” he said. “So why didn’t we? Because the material remained hypothetical; okay, theoretically possible, but we didn’t have a good reason to carry it too far.

“But then last fall it became clear from professional meetings and interactions that it can be made. Now those papers are published. When you think it’s coming for real, the next level of exploration becomes more justifiable,” Yakobson said.

Boron atoms can make more than one pattern when coming together as a 2-D material, another characteristic predicted by Yakobson and his team that has now come to fruition. These patterns, known as polymorphs, may allow researchers to tune the material’s conductivity “just by picking a selective arrangement of the hexagonal holes,” Penev said.

He also noted boron’s qualities were hinted at when researchers discovered more than a decade ago that magnesium diboride is a high-temperature electron-phonon superconductor. “People realized a long time ago the superconductivity is due to the boron layer,” Penev said. “The magnesium acts to dope the material by spilling some electrons into the boron layer. In this case, we don’t need them because the 2-D boron is already metallic.”

Penev suggested that isolating 2-D boron between layers of inert hexagonal boron nitride (aka “white graphene”) might help stabilize its superconducting nature.

Without the availability of a block of time on several large government supercomputers, the study would have taken a lot longer, Yakobson said. “Alex did the heavy lifting on the computational work,” he said. “To turn it from a lunchtime discussion into a real quantitative research result took a very big effort.”

The paper is the first by Yakobson’s group on the topic of superconductivity, though Penev is a published author on the subject. “I started working on superconductivity in 1993, but it was always kind of a hobby, and I hadn’t done anything on the topic in 10 years,” Penev said. “So this paper brings it full circle.”

Here’s a link to and a citation for the paper,

Can Two-Dimensional Boron Superconduct? by Evgeni S. Penev, Alex Kutana, and Boris I. Yakobson. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.6b00070 Publication Date (Web): March 22, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Dexter Johnson has published an April 5, 2016 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about this latest Rice University work on 2D boron that includes comments from his email interview with Penev.

The world’s smallest diode is made from a single molecule

Both the University of Georgia (US) and the American Associates Ben-Gurion University of the Negev (Israel) have issued press releases about a joint research project resulting in the world’s smallest diode.

I stumbled across the April 4, 2016 University of Georgia news release on EurekAlert first,

Researchers at the University of Georgia and at Ben-Gurion University in Israel have demonstrated for the first time that nanoscale electronic components can be made from single DNA molecules. Their study, published in the journal Nature Chemistry, represents a promising advance in the search for a replacement for the silicon chip.

The finding may eventually lead to smaller, more powerful and more advanced electronic devices, according to the study’s lead author, Bingqian Xu.

“For 50 years, we have been able to place more and more computing power onto smaller and smaller chips, but we are now pushing the physical limits of silicon,” said Xu, an associate professor in the UGA College of Engineering and an adjunct professor in chemistry and physics. “If silicon-based chips become much smaller, their performance will become unstable and unpredictable.”

To find a solution to this challenge, Xu turned to DNA. He says DNA’s predictability, diversity and programmability make it a leading candidate for the design of functional electronic devices using single molecules.

In the Nature Chemistry paper, Xu and collaborators at Ben-Gurion University of the Negev describe using a single molecule of DNA to create the world’s smallest diode. A diode is a component vital to electronic devices that allows current to flow in one direction but prevents its flow in the other direction.

Xu and a team of graduate research assistants at UGA isolated a specifically designed single duplex DNA of 11 base pairs and connected it to an electronic circuit only a few nanometers in size. After the measured current showed no special behavior, the team site-specifically intercalated a small molecule named coralyne into the DNA. They found the current flowing through the DNA was 15 times stronger for negative voltages than for positive voltages, a necessary feature of a diode.
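The quantity being reported here is the rectification ratio, the magnitude of current under reverse bias divided by the magnitude under the same forward bias. As a rough illustration of the arithmetic (the numbers below are invented for the sketch, not taken from the study):

```python
def rectification_ratio(i_neg, i_pos):
    """Magnitude of current at a negative bias divided by the magnitude
    at the matching positive bias; a ratio of 1 means no diode behavior."""
    return abs(i_neg) / abs(i_pos)

# Hypothetical currents (nanoamps) at -1 V and +1 V after intercalation
print(rectification_ratio(-15.0, 1.0))  # → 15.0, i.e. 15x stronger at negative bias
```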

“This finding is quite counterintuitive because the molecular structure is still seemingly symmetrical after coralyne intercalation,” Xu said.

A theoretical model developed by Yonatan Dubi of Ben-Gurion University indicated the diode-like behavior of DNA originates from the bias voltage-induced breaking of spatial symmetry inside the DNA molecule after the coralyne is inserted.

“Our discovery can lead to progress in the design and construction of nanoscale electronic elements that are at least 1,000 times smaller than current components,” Xu said.

The research team plans to continue its work, with the goal of constructing additional molecular devices and enhancing the performance of the molecular diode.

The April 4, 2016 American Associates Ben-Gurion University of the Negev press release on EurekAlert covers much of the same ground while providing some new details,

The world’s smallest diode, the size of a single molecule, has been developed collaboratively by U.S. and Israeli researchers from the University of Georgia and Ben-Gurion University of the Negev (BGU).

“Creating and characterizing the world’s smallest diode is a significant milestone in the development of molecular electronic devices,” explains Dr. Yoni Dubi, a researcher in the BGU Department of Chemistry and Ilse Katz Institute for Nanoscale Science and Technology. “It gives us new insights into the electronic transport mechanism.”

Continuous demand for more computing power is pushing the limitations of present day methods. This need is driving researchers to look for molecules with interesting properties and find ways to establish reliable contacts between molecular components and bulk materials in an electrode, in order to mimic conventional electronic elements at the molecular scale.

An example of such an element is the nanoscale diode (or molecular rectifier), which operates like a valve to facilitate electronic current flow in one direction. A collection of these nanoscale diodes, or molecules, has properties that resemble traditional electronic components such as a wire, transistor or rectifier. The emerging field of single-molecule electronics may provide a way to sustain Moore’s Law – the observation that over the history of computing hardware the number of transistors in a dense integrated circuit has doubled approximately every two years – beyond the limits of conventional silicon integrated circuits.
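Moore’s observation is easy to put in numbers. A toy calculation (the starting count and time span are arbitrary, chosen only to illustrate the doubling):

```python
def transistor_count(start_count, years, doubling_period_years=2):
    """Projected transistor count if the count doubles once per period."""
    doublings = years // doubling_period_years
    return start_count * 2 ** doublings

# e.g. ~2,300 transistors (roughly the Intel 4004 of 1971), 20 years on:
print(transistor_count(2300, 20))  # → 2355200 (ten doublings, about a 1,000-fold increase)
```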

Prof. Bingqian Xu’s group at the College of Engineering at the University of Georgia took a single DNA molecule constructed from 11 base pairs and connected it to an electronic circuit only a few nanometers in size. When they measured the current through the molecule, it did not show any special behavior. However, when layers of a molecule called “coralyne” were inserted (or intercalated) between layers of DNA, the behavior of the circuit changed drastically. The current jumped to 15 times larger for negative vs. positive voltages, a necessary feature for a nano diode. “In summary, we have constructed a molecular rectifier by intercalating specific, small molecules into designed DNA strands,” explains Prof. Xu.

Dr. Dubi and his student, Elinor Zerah-Harush, constructed a theoretical model of the DNA molecule inside the electric circuit to better understand the results of the experiment. “The model allowed us to identify the source of the diode-like feature, which originates from breaking spatial symmetry inside the DNA molecule after coralyne is inserted.”

There’s an April 4, 2016 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) which provides a brief overview and a link to a previous essay, Whatever Happened to the Molecular Computer?

Here’s a link and citation for the paper,

Molecular rectifier composed of DNA with high rectification ratio enabled by intercalation by Cunlan Guo, Kun Wang, Elinor Zerah-Harush, Joseph Hamill, Bin Wang, Yonatan Dubi, & Bingqian Xu. Nature Chemistry (2016) doi:10.1038/nchem.2480 Published online 04 April 2016

This paper is behind a paywall.

UK’s National Graphene Institute kerfuffle gets bigger

First mentioned here in a March 18, 2016 posting titled: Tempest in a teapot or a sign of things to come? UK’s National Graphene Institute kerfuffle, the ‘scandal’ seems to be getting bigger, from a March 29, 2016 posting on Dexter Johnson’s Nanoclast blog on the IEEE (Institute of Electrical and Electronics Engineers) website (Note: A link has been removed),

Since that news story broke, damage control from the NGI [UK National Graphene Institute], the University of Manchester, and BGT Materials, the company identified in the Times article, has been coming fast and furious. Even this blog’s coverage of the story has gotten comments from representatives of BGT Materials and the University of Manchester.

There was perhaps no greater effort in this coordinated defense than getting Andre Geim, a University of Manchester researcher who was a co-discoverer of graphene, to weigh in. …

Despite Geim’s recent public defense, and a full-on PR campaign to turn around the perception that the UK government was investing millions into UK research only to have the fruits of that research sold off to foreign interests, there was news last week that the UK Parliament would be launching an inquiry into the “benefits and disbenefits of the way that graphene’s intellectual property and commercialisation has been managed, including through research and innovation collaborations.”

The timing for the inquiry is intriguing but there have been no public comments or hints that the NGI kerfuffle precipitated the Graphene Inquiry,

The Science and Technology Committee issues a call for written submissions for its inquiry on graphene.

Send written submissions

The inquiry explores the lessons from graphene for research and innovation in other areas, as well as the management and commercialisation of graphene’s intellectual property. Issues include:

  • The research obstacles that have had to be overcome for graphene, including identifying research priorities and securing research funding, and the lessons from this for other areas of research.
  • The factors that have contributed to the successful development of graphene and how these might be applied in other areas, including translating research into innovation, managing/sharing intellectual property, securing development funding, and bringing key stakeholders together.
  • The benefits and disbenefits of the way that graphene’s intellectual property and commercialisation has been managed, including through research and innovation collaborations, and the lessons from this for other areas.

The deadline for submissions is midday on Monday 18 April 2016.

The Committee expects to take oral evidence later in April 2016.

Getting back to the NGI, BGT Materials, and University of Manchester situation, there’s a forceful comment from Daniel Cochlin (identified as a graphene communications and marketing manager at the University of Manchester in an April 2, 2015 posting on Nanoclast) in Dexter’s latest posting about the NGI. From the comments section of a March 29, 2016 posting on the Nanoclast blog,

Maybe the best way to respond is to directly counter some of your assertions.

1. The NGI’s comments on this blog were to counter factual inaccuracies contained in your story. Your Editor-in-Chief and Editorial Director, Digital were also emailed to complain about the story, with not so much as an acknowledgement of the email.
2. There was categorically no ‘coaxing’ of Sir Andre to make comments. He was motivated to by the inaccuracies and insinuations of the Sunday Times article.
3. Members of the Science and Technology Select Committee visited the NGI about ten days before the Sunday Times article and this was followed by their desire to hold an evidence session to discuss graphene commercialisation.
4. The matter of how many researchers work in the NGI is not ‘hotly contested’. The NGI is 75% full with around 130 researchers regularly working there. We would expect this figure to grow by 10-15% within the next few days as other facilities are closed down.
5. Graphene Lighting PLC is the spin-out company set up to produce and market the lightbulb. To describe them as a ‘shadowy spin-out’ is unjustified and, I would suggest, libelous [emphasis mine].
6. Your question about why, if BGT Materials is a UK company, was it not mentioned [emphasis mine] in connection with the lightbulb is confusing – as stated earlier the company set up to manage the lightbulb was Graphene Lighting PLC.

Let’s hope it doesn’t take three days for this to be accepted by your moderators, as it did last time.

*ETA March 31, 2016 at 1530 hours PDT: Dexter has posted response comments in answer to Cochlin’s. You can read them for yourself here.* I have a couple of observations. (1) The use of the word ‘libelous’ seems a bit over the top. However, it should be noted that it’s much easier to sue someone for libel in England, where the University of Manchester is located, than it is in most jurisdictions. In fact, there’s an industry known as ‘libel tourism’ where litigious companies and individuals shop around for a jurisdiction, such as England, where they can easily file suit. (2) As for BGT Materials not being mentioned in the 2015 press release for the graphene lightbulb, I cannot emphasize enough how unusual that is. Generally speaking, everyone and every agency that had any involvement in developing and bringing to market a new product, especially one that was the ‘first consumer graphene-based product’, is mentioned. When you consider that BGT Materials is a newish company according to its About page,

BGT Materials Limited (BGT), established in 2013, is dedicated to the development of graphene technologies that utilize this “wonder material” to enhance our lives. BGT has pioneered the mass production of large-area, high-quality graphene rapidly achieving the first milestone required for the commercialization of graphene-enhanced applications.

the situation grows more peculiar. A new company wants and needs that kind of exposure to attract investment and/or keep current stakeholders happy. One last comment about BGT Materials and its public relations: Thanasis Georgiou, VP of BGT Materials and visiting scientist at the University of Manchester (more can be found on his website’s About page), waded into the comments section of Dexter’s March 15, 2016 posting, the first about the kerfuffle. Georgiou starts out in a relatively friendly fashion but his followup has a sharper tone,

I appreciate your position but a simple email to us and we would clarify most of the issues that you raised. Indeed your article carries the same inaccuracies that the initial Sunday Times article does, which is currently the subject of a legal claim by BGT Materials. [emphasis mine]

For example, BGT Materials is a UK registered company, not a Taiwanese one. A quick google search and you can confirm this. There was no “shadowy Canadian investor”, the company went through a round of financing, as most technology startups do, in order to reach the market quickly.

It’s hard to tell if Georgiou is trying to inform Dexter or threaten him in his comment to the March 15, 2016 posting but, taken together with Daniel Cochlin’s claim of libel in his comment to the March 29, 2016 posting, it suggests an attempt at intimidation.

These are understandable responses given the stakes involved, but moving to the most damaging munitions in your arsenal is usually not a good choice for your first or second response.

Australians take step toward ‘smart’ contact lenses

Some research from RMIT University (Australia) and the University of Adelaide (Australia) is making quite an impression. A Feb. 19, 2016 article by Caleb Radford for The Lead explains some of the excitement,

NEW light-manipulating nano-technology may soon be used to make smart contact lenses.

The University of Adelaide in South Australia worked closely with RMIT University to develop small hi-tech lenses to filter harmful optical radiation without distorting vision.

Dr Withawat Withayachumnankul from the University of Adelaide helped conceive the idea and said the potential applications of the technology included creating new high-performance devices that connect to the Internet.

A Feb. 19, 2016 RMIT University press release on EurekAlert, which originated the news item, provides more detail,

The light manipulation relies on creating tiny artificial crystals termed “dielectric resonators”, which are a fraction of the wavelength of light – 100-200 nanometers, or over 500 times thinner than a human hair.

The research combined the University of Adelaide researchers’ expertise in interaction of light with artificial materials with the materials science and nanofabrication expertise at RMIT University.

Dr Withawat Withayachumnankul, from the University of Adelaide’s School of Electrical and Electronic Engineering, said: “Manipulation of light using these artificial crystals uses precise engineering.

“With advanced techniques to control the properties of surfaces, we can dynamically control their filter properties, which allow us to potentially create devices for high data-rate optical communication or smart contact lenses.

“The current challenge is that dielectric resonators only work for specific colours, but with our flexible surface we can adjust the operation range simply by stretching it.”

Associate Professor Madhu Bhaskaran, Co-Leader of the Functional Materials and Microsystems Research Group at RMIT, said the devices were made on a rubber-like material used for contact lenses.

“We embed precisely-controlled crystals of titanium oxide, a material that is usually found in sunscreen, in these soft and pliable materials,” she said.

“Both materials are proven to be bio-compatible, forming an ideal platform for wearable optical devices.

“By engineering the shape of these common materials, we can create a device that changes properties when stretched. This modifies the way the light interacts with and travels through the device, which holds promise of making smart contact lenses and stretchable colour changing surfaces.”

Lead author and RMIT researcher Dr. Philipp Gutruf said the major scientific hurdle overcome by the team was combining high temperature processed titanium dioxide with the rubber-like material, and achieving nanoscale features.

“With this technology, we now have the ability to develop light weight wearable optical components which also allow for the creation of futuristic devices such as smart contact lenses or flexible ultra thin smartphone cameras,” Gutruf said.

Here’s a link to and a citation for the paper,

Mechanically Tunable Dielectric Resonator Metasurfaces at Visible Frequencies by Philipp Gutruf, Chengjun Zou, Withawat Withayachumnankul, Madhu Bhaskaran, Sharath Sriram, and Christophe Fumeaux. ACS Nano, 2016, 10 (1), pp 133–141 DOI: 10.1021/acsnano.5b05954 Publication Date (Web): November 30, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

ETA Feb. 24, 2016: Dexter Johnson (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) has chimed in with additional insight into this research in his Feb. 23, 2016 posting.

Plasmonic interferometry without coherent light

There are already a number of biosensors based on plasmonic interferometry in use but this latest breakthrough from Brown University (US) could make them cheaper and more accessible. A Feb. 16, 2016 Brown University news release (also on EurekAlert), announces the new technique,

Imagine a hand-held environmental sensor that can instantly test water for lead, E. coli, and pesticides all at the same time, or a biosensor that can perform a complete blood workup from just a single drop. That’s the promise of nanoscale plasmonic interferometry, a technique that combines nanotechnology with plasmonics–the interaction between electrons in a metal and light.

Now researchers from Brown University’s School of Engineering have made an important fundamental advance that could make such devices more practical. The research team has developed a technique that eliminates the need for highly specialized external light sources that deliver coherent light, which the technique normally requires. The advance could enable more versatile and more compact devices.

“It has always been assumed that coherent light was necessary for plasmonic interferometry,” said Domenico Pacifici, a professor of engineering who oversaw the work with his postdoctoral researcher Dongfang Li, and graduate student Jing Feng. “But we were able to disprove that assumption.”

The research is described in Nature Scientific Reports.

Plasmonic interferometers make use of the interaction between light and surface plasmon polaritons, density waves created when light energy rattles free electrons in a metal. One type of interferometer looks like a bull’s-eye structure etched into a thin layer of metal. In the center is a hole poked through the metal layer with a diameter of about 300 nanometers – about 1,000 times smaller than the diameter of a human hair. The hole is encircled by a series of etched grooves, with diameters of a few micrometers. Thousands of these bull’s-eyes can be placed on a chip the size of a fingernail.

When light from an external source is shone onto the surface of an interferometer, some of the photons go through the central hole, while others are scattered by the grooves. Those scattered photons generate surface plasmons that propagate through the metal inward toward the hole, where they interact with photons passing through the hole. That creates an interference pattern in the light emitted from the hole, which can be recorded by a detector beneath the metal surface.

When a liquid is deposited on top of an interferometer, the light and the surface plasmons propagate through that liquid before they interfere with each other. That alters the interference patterns picked up by the detector depending on the chemical makeup of the liquid or compounds present in it. By using different sizes of groove rings around the hole, the interferometers can be tuned to detect the signature of specific compounds or molecules. With the ability to put many differently tuned interferometers on one chip, engineers can hypothetically make a versatile detector.
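The detected signal follows the textbook two-beam interference rule: the photon through the hole and the plasmon arriving from a groove add as I = I1 + I2 + 2·sqrt(I1·I2)·cos(Δφ), where the phase difference Δφ depends on the plasmon’s path length and the liquid it travelled under. A simplified sketch of that idea (all values and the single effective-index parameter are illustrative assumptions, not the paper’s model):

```python
import math

def detected_intensity(i_photon, i_plasmon, path_nm, wavelength_nm, n_eff):
    """Two-beam interference: the phase a plasmon accrues over its path,
    scaled by an effective index n_eff that the liquid on top modifies,
    shifts the detected intensity up or down."""
    dphi = 2 * math.pi * n_eff * path_nm / wavelength_nm
    return i_photon + i_plasmon + 2 * math.sqrt(i_photon * i_plasmon) * math.cos(dphi)

# Hypothetical readout: the same groove ring (5-micron path, 633 nm light)
# gives different intensities for two liquids with slightly different indices.
for n in (1.33, 1.36):
    print(round(detected_intensity(1.0, 0.2, 5000, 633, n), 3))
```

This is why a liquid’s chemical makeup shows up in the pattern: changing the effective index shifts Δφ, and the detector sees a different intensity for each groove-ring size.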

Up to now, all plasmonic interferometers have required the use of highly specialized external light sources that can deliver coherent light–beams in which light waves are parallel, have the same wavelength, and travel in-phase (meaning the peaks and valleys of the waves are aligned). Without coherent light sources, the interferometers cannot produce usable interference patterns. Those kinds of light sources, however, tend to be bulky, expensive, and require careful alignment and periodic recalibration to obtain a reliable optical response.

But Pacifici and his group have come up with a way to eliminate the need for external coherent light. In the new method, fluorescent light-emitting atoms are integrated directly within the tiny hole in the center of the interferometer. An external light source is still necessary to excite the internal emitters, but it need not be a specialized coherent source.

“This is a whole new concept for optical interferometry,” Pacifici said, “an entirely new device.”

In this new device, incoherent light shone on the interferometer causes the fluorescent atoms inside the center hole to generate surface plasmons. Those plasmons propagate outward from the hole, bounce off the groove rings, and propagate back toward the hole. Once a plasmon propagates back, it interacts with the atom that released it, causing interference with the directly transmitted photon. Because the emission of a photon and the generation of a plasmon are indistinguishable alternative paths originating from the same emitter, the process is naturally coherent and interference can therefore occur even though the emitters are excited incoherently.

“The important thing here is that this is a self-interference process,” Pacifici said. “It doesn’t matter that you’re using incoherent light to excite the emitters, you still get a coherent process.”

In addition to eliminating the need for specialized external light sources, the approach has several advantages, Pacifici said. Because the surface plasmons travel out from the hole and back again, they probe the sample on top of the interferometer surface twice. That makes the device more sensitive.

But that’s not the only advantage. In the new device, external light can be projected from underneath the metal surface containing the interferometers instead of from above. That eliminates the need for complex illumination architectures on top of the sensing surface, which could make for easier integration into compact devices.

The embedded light emitters also eliminate the need to control the amount of sample liquid deposited on the interferometer’s surface. Large droplets of liquid can cause lensing effects, a bending of light that can scramble the results from the interferometer. Most plasmonic sensors make use of tiny microfluidic channels to deliver a thin film of liquid to avoid lensing problems. But with internal light emitters excited from the bottom surface, the external light never comes in contact with the sample, so lensing effects are negated, as is the need for microfluidics.

Finally, the internal emitters produce a low intensity light. That’s good for probing delicate samples, such as proteins, that can be damaged by high-intensity light.

More work is required to get the system out of the lab and into devices, and Pacifici and his team plan to continue to refine the idea. The next step will be to try eliminating the external light source altogether. It might be possible, the researchers say, to eventually excite the internal emitters using tiny fiber optic lines, or perhaps electric current.

Still, this initial proof-of-concept is promising, Pacifici said.

“From a fundamental standpoint, we think this new device represents a significant step forward,” he said, “a first demonstration of plasmonic interferometry with incoherent light”.

Here’s a link to and a citation for the paper,

Nanoscale optical interferometry with incoherent light by Dongfang Li, Jing Feng, & Domenico Pacifici. Scientific Reports 6, Article number: 20836 (2016) doi:10.1038/srep20836 Published online: 16 February 2016

This paper is open access.

One final comment, Dexter Johnson has a Feb. 18, 2016 posting about this interferometer where he references Pacifici’s past work in this area, as well as, this latest breakthrough. Dexter’s posting can be found on his Nanoclast blog which is on the IEEE (Institute of Electrical and Electronics Engineers) website.

University of Alberta team may open door to flexible electronics with engineering breakthrough

There’s some exciting news from the University of Alberta. It emerges from a team that has reconsidered transistor architecture, from a Feb. 9, 2016 news item on ScienceDaily,

An engineering research team at the University of Alberta has invented a new transistor that could revolutionize thin-film electronic devices.

The team was exploring new uses for thin film transistors (TFT), which are most commonly found in low-power, low-frequency devices like the display screen you’re reading from now. Efforts by researchers and the consumer electronics industry to improve the performance of the transistors have been slowed by the challenges of developing new materials or slowly improving existing ones for use in traditional thin film transistor architecture, known technically as the metal oxide semiconductor field effect transistor (MOSFET).

But the U of A electrical engineering team did a run-around on the problem. Instead of developing new materials, the researchers improved performance by designing a new transistor architecture that takes advantage of a bipolar action. In other words, instead of using one type of charge carrier, as most thin film transistors do, it uses electrons and the absence of electrons (referred to as “holes”) to contribute to electrical output. Their first breakthrough was forming an ‘inversion’ hole layer in a ‘wide-bandgap’ semiconductor, which has been a great challenge in the solid-state electronics field.

A Feb. 9, 2016 University of Alberta news release by Richard Cairney and Grecia Pacheco (also on EurekAlert), which originated the news item, provides more detail about the research,

Once this was achieved, “we were able to construct a unique combination of semiconductor and insulating layers that allowed us to inject ‘holes’ at the MOS interface,” said Gem Shoute, a PhD student in the Department of Electrical and Computer Engineering who is lead author on the article. Adding holes at the interface increased the chances of an electron “tunneling” across a dielectric barrier. Through this phenomenon, a type of quantum tunnelling, “we were finally able to achieve a transistor that behaves like a bipolar transistor.”

“It’s actually the best performing [TFT] device of its kind – ever,” said materials engineering professor Ken Cadien, a co-author on the paper. “This kind of device is normally limited by the non-crystalline nature of the material that they are made of.”

The dimension of the device itself can be scaled with ease in order to improve performance and keep up with the need of miniaturization, an advantage that modern TFTs lack. The transistor has power-handling capabilities at least 10 times greater than commercially produced thin film transistors.

Electrical engineering professor Doug Barlage, who is Shoute’s PhD supervisor and one of the paper’s lead authors, says his group was determined to try new approaches and break new ground. He says the team knew it could produce a high-power thin film transistor–it was just a matter of finding out how.

“Our goal was to make a thin film transistor with the highest power handling and switching speed possible. Not many people want to look into that, but the raw properties of the film indicated dramatic performance increase was within reach,” he said. “The high quality sub-30 nanometre (a human hair is 50,000 nanometres wide) layers of materials produced by Professor Cadien’s group enabled us to successfully try these difficult concepts.”

In the end, the team took advantage of the very phenomena other researchers considered roadblocks.

“Usually tunnelling current is considered a bad thing in MOSFETs and it contributes to unnecessary loss of power, which manifests as heat,” explained Shoute. “What we’ve done is build a transistor that considers tunnelling current a benefit.”

Here’s a link to and a citation for the paper,

Sustained hole inversion layer in a wide-bandgap metal-oxide semiconductor with enhanced tunnel current by Gem Shoute, Amir Afshar, Triratna Muneshwar, Kenneth Cadien, & Douglas Barlage. Nature Communications 7, Article number: 10632 doi:10.1038/ncomms10632 Published 04 February 2016

This is an open access paper.

ETA Feb. 12, 2016: Dexter Johnson has written up the research in a Feb. 11, 2016 posting (on his Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) where he offers enthusiasm (rare) and additional explanation.