Tag Archives: Dexter Johnson

NASA, super-black nanotechnology, and an International Space Station livestreamed event

A super-black nanotechnology-enabled coating (first mentioned here in a July 18, 2013 posting featuring work on this project by John Hagopian, an optics engineer at the US National Aeronautics and Space Administration’s [NASA] Goddard Space Flight Center) is about to be tested in outer space. From an Oct. 23, 2014 news item on Nanowerk,

An emerging super-black nanotechnology that is to be tested for the first time this fall on the International Space Station will be applied to a complex, 3-D component critical for suppressing stray light in a new, smaller, less-expensive solar coronagraph designed to ultimately fly on the orbiting outpost or as a hosted payload on a commercial satellite.

The super-black carbon-nanotube coating, which has been six years in the making, is a thin, highly uniform coating of multi-walled nanotubes made of pure carbon about 10,000 times thinner than a strand of human hair. Recently delivered to the International Space Station for testing, the coating is considered especially promising as a technology to reduce stray light, which can overwhelm faint signals that sensitive detectors are supposed to retrieve.
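
A quick sanity check on that comparison (my arithmetic, not NASA's), assuming a human hair is roughly 100 micrometers across:

hair_width_nm = 100_000       # a human hair is roughly 100 micrometers, i.e., 100,000 nm
ratio = 10_000                # "10,000 times thinner," per the news item
print(hair_width_nm / ratio)  # 10.0 nm, consistent with the scale of multi-walled nanotubes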

An Oct. 24, 2014 NASA news release by Lori Keesey, which originated the news item, further describes the work being done on the ground simultaneously with the tests on the International Space Station,

While the coating undergoes testing to determine its robustness in space, a team at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, will apply the carbon-nanotube coating to a complex, cylindrically shaped baffle — a component that helps reduce stray light in telescopes.

Goddard optical engineer Qian Gong designed the baffle for a compact solar coronagraph that Principal Investigator Nat Gopalswamy is now developing. The goal is [to] build a solar coronagraph that could deploy on the International Space Station or as a hosted payload on a commercial satellite — a much-needed capability that could guarantee the continuation of important space weather-related measurements.

The effort will help determine whether the carbon nanotubes are as effective as black paint, the current state-of-the-art technology, for absorbing stray light in complex space instruments and components.

Preventing errant light is an especially tricky challenge for Gopalswamy’s team. “We have to have the right optical system and the best baffles going,” said Doug Rabin, a Goddard heliophysicist who studies diffraction and stray light in coronagraphs.

The new compact coronagraph — designed to reduce the mass, volume, and cost of traditional coronagraphs by about 50 percent — will use a single set of lenses, rather than a conventional three-stage system, to image the solar corona, and more particularly, coronal mass ejections (CMEs). These powerful bursts of solar material erupt and hurtle across the solar system, sometimes colliding with Earth’s protective magnetosphere and posing significant hazards to spacecraft and astronauts.

“Compact coronagraphs make greater demands on controlling stray light and diffraction,” Rabin explained, adding that the corona is a million times fainter than the sun’s photosphere. Coating the baffle or occulter with the carbon-nanotube material should improve the component’s overall performance by preventing stray light from reaching the focal plane and contaminating measurements.

The project is well timed and much needed, Rabin added.

Currently, the heliophysics community receives coronagraphic measurements from the Solar and Heliospheric Observatory (SOHO) and the Solar Terrestrial Relations Observatory (STEREO).

“SOHO, which we launched in 1995, is one of our Great Observatories,” Rabin said. “But it won’t last forever.” Although somewhat newer, STEREO has operated in space since 2006. “If one of these systems fails, it will affect a lot of people inside and outside NASA, who study the sun and forecast space weather. Right now, we have no scheduled mission that will carry a solar coronagraph. We would like to get a compact coronagraph up there as soon as possible,” Rabin added.

Ground-based laboratory testing indicates it could be a good fit. Testing has shown that the coating absorbs 99.5 percent of the light in the ultraviolet and visible bands and 99.8 percent in the longer infrared bands, because the carbon atoms occupying the tiny nested tubes absorb the light and prevent it from reflecting off surfaces, said Goddard optics engineer John Hagopian, who is leading the technology’s advancement. Because only a tiny fraction of light reflects off the coating, the human eye and sensitive detectors see the material as black — in this case, extremely black.

“We’ve made great progress on the coating,” Hagopian said. “The fact the coatings have survived the trip to the space station already has raised the maturity of the technology to a level that qualifies them for flight use. In many ways the external exposure of the samples on the space station subjects them to a much harsher environment than components will ever see inside of an instrument.”

Given the need for a compact solar coronagraph, Hagopian said he’s especially excited about working with the instrument team. “This is an important instrument-development effort, and, of course, one that could showcase the effectiveness of our technology on 3-D parts,” he said, adding that the lion’s share of his work so far has concentrated on 2-D applications.

By teaming with Goddard technologist Vivek Dwivedi, Hagopian believes the baffle project now is within reach. Dwivedi is advancing a technique called atomic layer deposition (ALD) that lays down a catalyst layer necessary for carbon-nanotube growth on complex, 3-D parts. “Previous ALD chambers could only hold objects a few millimeters high, while the chamber Vivek has developed for us can accommodate objects 20 times bigger, a necessary step for baffles of this type,” Hagopian said.

Other NASA researchers have flown carbon nanotubes on the space station, but their samples were designed for structural applications, not stray-light suppression — a completely different use requiring that the material demonstrate greater absorption properties, Hagopian said.

“We have extreme stray light requirements. Let’s see how this turns out,” Rabin said.

The researchers from NASA have kindly made available an image of a baffle prior to receiving its super-black coating,

This is a close-up view of a baffle that will be coated with a carbon-nanotube coating.
Image Credit: NASA Goddard/Paul Nikulla

There’s more information about the project in this August 12, 2014 NASA news release first announcing the upcoming test.

Serendipitously or not, NASA is hosting an interactive Space Technology Forum on Oct. 27, 2014 (this coming Monday) focusing on technologies being demonstrated on the International Space Station (ISS) according to an Oct. 20, 2014 NASA media advisory,

Media are invited to interact with NASA experts who will answer questions about technologies being demonstrated on the International Space Station (ISS) during “Destination Station: ISS Technology Forum” from 10 to 11 a.m. EDT (9 to 10 a.m. CDT [7 to 8 am PDT]) Monday, Oct. 27, at the U.S. Space & Rocket Center in Huntsville, Alabama.

The forum will be broadcast live on NASA Television and the agency’s website.

The Destination Station forums are a series of live, interactive panel discussions about the space station. This is the second in the series, and it will feature a discussion on how technologies are tested aboard the orbiting laboratory. Thousands of investigations have been performed on the space station; they provide benefits to people on Earth and also prepare NASA to send humans farther into the solar system than ever before.

Forum panelists and exhibits will focus on space station environmental and life support systems; 3-D printing; Space Communications and Navigation (SCaN) systems; and Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES).

The forum’s panelists are:
– Jeffrey Sheehy, senior technologist in NASA’s Space Technology Mission Directorate
– Robyn Gatens, manager for space station System and Technology Demonstration, and Environmental Control Life Support System expert
– Jose Benavides, SPHERES chief engineer
– Rich Reinhart, principal investigator for the SCaN Testbed
– Niki Werkheiser, project manager for the space station 3-D printer

During the forum, questions will be taken from the audience, including media, students and social media participants. Online followers may submit questions via social media using the hashtag, #asknasa. [emphasis mine] …

The “Destination Station: ISS Technology Forum” coincides with the 7th Annual Von Braun Memorial Symposium at the University of Alabama in Huntsville Oct. 27-29. Media can attend the three-day symposium, which features NASA officials, including NASA Administrator Charles Bolden, Associate Administrator for Human Exploration and Operations William Gerstenmaier, and Assistant Deputy Associate Administrator for Exploration Systems Development Bill Hill. Jean-Jacques Dordain, director general of the European Space Agency, will be a special guest speaker. Representatives from industry and academia also will be participating.

For NASA TV streaming video, scheduling and downlink information, visit:

http://www.nasa.gov/nasatv

For more information on the International Space Station and its crews, visit:

http://www.nasa.gov/station

I have checked out the livestreaming/tv site and it appears that registration is not required for access. Sadly, I don’t see any of the ‘super-black’ coating team members mentioned in the news release on the list of forum participants.

ETA Oct. 27, 2014: You can check out Dexter Johnson’s Oct. 24, 2014 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) for a little more information.

Bendable, stretchable, light-weight, and transparent: a new contender in the competition for ‘thinnest electric generator’

An Oct. 15, 2014 Columbia University (New York, US) press release (also on EurekAlert) describes another contender for the title of the world’s thinnest electric generator,

Researchers from Columbia Engineering and the Georgia Institute of Technology [US] report today [Oct. 15, 2014] that they have made the first experimental observation of piezoelectricity and the piezotronic effect in an atomically thin material, molybdenum disulfide (MoS2), resulting in a unique electric generator and mechanosensation devices that are optically transparent, extremely light, and very bendable and stretchable.

In a paper published online October 15, 2014, in Nature, research groups from the two institutions demonstrate the mechanical generation of electricity from the two-dimensional (2D) MoS2 material. The piezoelectric effect in this material had previously been predicted theoretically.

Here’s a link to and a citation for the paper,

Piezoelectricity of single-atomic-layer MoS2 for energy conversion and piezotronics by Wenzhuo Wu, Lei Wang, Yilei Li, Fan Zhang, Long Lin, Simiao Niu, Daniel Chenet, Xian Zhang, Yufeng Hao, Tony F. Heinz, James Hone, & Zhong Lin Wang. Nature (2014) doi:10.1038/nature13792 Published online 15 October 2014

This paper is behind a paywall. There is a free preview available with ReadCube Access.

Getting back to the Columbia University press release, it offers a general description of piezoelectricity and some insight into this new research on molybdenum disulfide,

Piezoelectricity is a well-known effect in which stretching or compressing a material causes it to generate an electrical voltage (or the reverse, in which an applied voltage causes it to expand or contract). But for materials of only a few atomic thicknesses, no experimental observation of piezoelectricity has been made, until now. The observation reported today provides a new property for two-dimensional materials such as molybdenum disulfide, opening the potential for new types of mechanically controlled electronic devices.
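
For readers who want the textbook formulation (my addition, not part of the press release), the direct and converse piezoelectric effects are usually written as a pair of constitutive relations,

\[ D = d\,\sigma + \varepsilon E \qquad \text{(direct effect: stress produces electric displacement)} \]
\[ S = s\,\sigma + d\,E \qquad \text{(converse effect: field produces strain)} \]

where σ is stress, S strain, E electric field, D electric displacement, d the piezoelectric coefficient, ε the permittivity, and s the elastic compliance. The coefficient d can only be nonzero in a crystal that lacks a center of symmetry, a point the researchers return to below.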

“This material—just a single layer of atoms—could be made as a wearable device, perhaps integrated into clothing, to convert energy from your body movement to electricity and power wearable sensors or medical devices, or perhaps supply enough energy to charge your cell phone in your pocket,” says James Hone, professor of mechanical engineering at Columbia and co-leader of the research.

“Proof of the piezoelectric effect and piezotronic effect adds new functionalities to these two-dimensional materials,” says Zhong Lin Wang, Regents’ Professor in Georgia Tech’s School of Materials Science and Engineering and a co-leader of the research. “The materials community is excited about molybdenum disulfide, and demonstrating the piezoelectric effect in it adds a new facet to the material.”

Hone and his research group demonstrated in 2008 that graphene, a 2D form of carbon, is the strongest material. He and Lei Wang, a postdoctoral fellow in Hone’s group, have been actively exploring the novel properties of 2D materials like graphene and MoS2 as they are stretched and compressed.

Zhong Lin Wang and his research group pioneered the field of piezoelectric nanogenerators for converting mechanical energy into electricity. He and postdoctoral fellow Wenzhuo Wu are also developing piezotronic devices, which use piezoelectric charges to control the flow of current through the material just as gate voltages do in conventional three-terminal transistors.

There are two keys to using molybdenum disulfide for generating current: using an odd number of layers and flexing it in the proper direction. The material is highly polar, but, Zhong Lin Wang notes, an even number of layers cancels out the piezoelectric effect. The material’s crystalline structure also is piezoelectric in only certain crystalline orientations.

For the Nature study, Hone’s team placed thin flakes of MoS2 on flexible plastic substrates and determined how their crystal lattices were oriented using optical techniques. They then patterned metal electrodes onto the flakes. In research done at Georgia Tech, Wang’s group installed measurement electrodes on samples provided by Hone’s group, then measured current flows as the samples were mechanically deformed. They monitored the conversion of mechanical to electrical energy, and observed voltage and current outputs.

The researchers also noted that the output voltage reversed sign when they changed the direction of applied strain, and that it disappeared in samples with an even number of atomic layers, confirming theoretical predictions published last year. The presence of the piezotronic effect in odd-layer MoS2 was also observed for the first time.

“What’s really interesting is we’ve now found that a material like MoS2, which is not piezoelectric in bulk form, can become piezoelectric when it is thinned down to a single atomic layer,” says Lei Wang.

To be piezoelectric, a material must break central symmetry. A single atomic layer of MoS2 has such a structure, and should be piezoelectric. However, in bulk MoS2, successive layers are oriented in opposite directions, and generate positive and negative voltages that cancel each other out and give zero net piezoelectric effect.
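
A toy sketch of that cancellation argument (my own illustration, not code from the paper): if each successive layer contributes a polarization of fixed magnitude but alternating sign, only odd layer counts leave a net response.

def net_piezo_response(n_layers, p_layer=1.0):
    # successive MoS2 layers are oriented oppositely, so contributions alternate in sign
    return sum(p_layer * (-1) ** i for i in range(n_layers))

for n in (1, 2, 3, 4, 5):
    print(n, net_piezo_response(n))  # odd counts -> 1.0, even counts -> 0.0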

“This adds another member to the family of piezoelectric materials for functional devices,” says Wenzhuo Wu.

In fact, MoS2 is just one of a group of 2D semiconducting materials known as transition metal dichalcogenides, all of which are predicted to have similar piezoelectric properties. These are part of an even larger family of 2D materials whose piezoelectric properties remain unexplored. Importantly, as has been shown by Hone and his colleagues, 2D materials can be stretched much farther than conventional materials, particularly traditional ceramic piezoelectrics, which are quite brittle.

The research could open the door to development of new applications for the material and its unique properties.

“This is the first experimental work in this area and is an elegant example of how the world becomes different when the size of material shrinks to the scale of a single atom,” Hone adds. “With what we’re learning, we’re eager to build useful devices for all kinds of applications.”

Ultimately, Zhong Lin Wang notes, the research could lead to complete atomic-thick nanosystems that are self-powered by harvesting mechanical energy from the environment. This study also reveals the piezotronic effect in two-dimensional materials for the first time, which greatly expands the application of layered materials for human-machine interfacing, robotics, MEMS, and active flexible electronics.

I see there’s a reference in that last paragraph to “harvesting mechanical energy from the environment.” I’m not sure what they mean by that, but I have written a few times about harvesting biomechanical energy. One of my earliest pieces is a July 12, 2010 post which features work by Zhong Lin Wang on harvesting energy from heart beats, blood flow, muscle stretching, or even irregular vibrations. One of my latest pieces is a Sept. 17, 2014 post about some work in Canada on harvesting energy from the jaw as you chew.

A final note, Dexter Johnson discusses this work in an Oct. 16, 2014 post on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Next supercapacitor: crumpled graphene?

An Oct. 3, 2014 news item on ScienceDaily features the use of graphene as a possible supercapacitor,

When someone crumples a sheet of paper, that usually means it’s about to be thrown away. But researchers have now found that crumpling a piece of graphene “paper” — a material formed by bonding together layers of the two-dimensional form of carbon — can actually yield new properties that could be useful for creating extremely stretchable supercapacitors to store energy for flexible electronic devices.

The finding is reported in the journal Scientific Reports by MIT’s [Massachusetts Institute of Technology] Xuanhe Zhao, an assistant professor of mechanical engineering and civil and environmental engineering, and four other authors. The new, flexible supercapacitors should be easy and inexpensive to fabricate, the team says.

An Oct. 3, 2014 MIT news release by David Chandler (also on EurekAlert), which originated the news item, explains the technology at more length,

“Many people are exploring graphene paper: It’s a good candidate for making supercapacitors, because of its large surface area per mass,” Zhao says. Now, he says, the development of flexible electronic devices, such as wearable or implantable biomedical sensors or monitoring devices, will require flexible power-storage systems.

Like batteries, supercapacitors can store electrical energy, but they primarily do so electrostatically, rather than chemically — meaning they can deliver their energy faster than batteries can. Now Zhao and his team have demonstrated that by crumpling a sheet of graphene paper into a chaotic mass of folds, they can make a supercapacitor that can easily be bent, folded, or stretched to as much as 800 percent of its original size. The team has made a simple supercapacitor using this method as a proof of principle.

The material can be crumpled and flattened up to 1,000 times, the team has demonstrated, without a significant loss of performance. “The graphene paper is pretty robust,” Zhao says, “and we can achieve very large deformations over multiple cycles.” Graphene, a structure of pure carbon just one atom thick with its carbon atoms arranged in a hexagonal array, is one of the strongest materials known.

To make the crumpled graphene paper, a sheet of the material was placed in a mechanical device that first compressed it in one direction, creating a series of parallel folds or pleats, and then in the other direction, leading to a chaotic, rumpled surface. When stretched, the material’s folds simply smooth themselves out.

Forming a capacitor requires two conductive layers — in this case, two sheets of crumpled graphene paper — with an insulating layer in between, which in this demonstration was made from a hydrogel material. Like the crumpled graphene, the hydrogel is highly deformable and stretchable, so the three layers remain in contact even while being flexed and pulled.
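
As a refresher (my addition, not MIT’s): for an idealized parallel-plate capacitor,

\[ C = \frac{\varepsilon_0 \varepsilon_r A}{t}, \qquad E_{\text{stored}} = \tfrac{1}{2} C V^2, \]

where A is the electrode area, t the separation, and ε_r the relative permittivity of the insulator. Supercapacitors actually store charge in electric double layers rather than between flat plates, but the same intuition applies: more accessible electrode area means more capacitance, which is why graphene paper’s large surface area per mass, preserved through crumpling and stretching, is the property that matters here.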

Though this initial demonstration was specifically to make a supercapacitor, the same crumpling technique could be applied to other uses, Zhao says. For example, the crumpled graphene material might be used as one electrode in a flexible battery, or could be used to make a stretchable sensor for specific chemical or biological molecules.

Here is a link to and a citation for the paper,

Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers by Jianfeng Zang, Changyong Cao, Yaying Feng, Jie Liu, & Xuanhe Zhao. Scientific Reports 4, Article number: 6492 doi:10.1038/srep06492 Published 01 October 2014

This is an open access article.

ETA Oct. 8, 2014: Dexter Johnson of the Nanoclast blog on the IEEE (Institute of Electrical and Electronics Engineers) website has an Oct. 7, 2014 post where he comments about the ‘flexibility’ aspect of this work.

Flexible, graphene-based display: first ever?

It seems like there’s been a lot of discussion about flexible displays, graphene-based or not, over the years, so the announcement of the first graphene-based flexible display might seem a little anticlimactic. That’s one of the problems with the technology and science communities. Sometimes there’s so much talk about an idea or concept that by the time it becomes reality, people think it’s already been done and that it’s not news.

So, kudos to the folks at the University of Cambridge who have been working on this development for a long time. From a Sept. 10, 2014 news release on EurekAlert,

The partnership between the two organisations combines the graphene expertise of the Cambridge Graphene Centre (CGC), with the transistor and display processing steps that Plastic Logic has already developed for flexible electronics. This prototype is a first example of how the partnership will accelerate the commercial development of graphene, and is a first step towards the wider implementation of graphene and graphene-like materials into flexible electronics.

The new prototype is an active matrix electrophoretic display, similar to the screens used in today’s e-readers, except it is made of flexible plastic instead of glass. In contrast to conventional displays, the pixel electronics, or backplane, of this display includes a solution-processed graphene electrode, which replaces the sputtered metal electrode layer within Plastic Logic’s conventional devices, bringing product and process benefits.

Graphene is more flexible than conventional ceramic alternatives like indium-tin oxide (ITO) and more transparent than metal films. The ultra-flexible graphene layer may enable a wide range of products, including foldable electronics. Graphene can also be processed from solution bringing inherent benefits of using more efficient printed and roll-to-roll manufacturing approaches.

The new 150 pixel per inch (150 ppi) backplane was made at low temperatures (less than 100°C) using Plastic Logic’s Organic Thin Film Transistor (OTFT) technology. The graphene electrode was deposited from solution and subsequently patterned with micron-scale features to complete the backplane.
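
A quick unit conversion (mine, not from the release) puts that resolution in perspective,

ppi = 150                  # pixels per inch, from the release
pitch_um = 25.4e3 / ppi    # 25.4 mm per inch, converted to micrometers per pixel
print(round(pitch_um))     # ~169 micrometers of pixel pitch

so the micron-scale features patterned into the graphene electrode sit comfortably below the roughly 169-micrometer pixel pitch.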

For this prototype, the backplane was combined with an electrophoretic imaging film to create an ultra-low power and durable display. Future demonstrations may incorporate liquid crystal (LCD) and organic light emitting diodes (OLED) technology to achieve full colour and video functionality. Lightweight flexible active-matrix backplanes may also be used for sensors, with novel digital medical imaging and gesture recognition applications already in development.

“We are happy to see our collaboration with Plastic Logic resulting in the first graphene-based electrophoretic display exploiting graphene in its pixels’ electronics,” said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre. “This is a significant step forward to enable fully wearable and flexible devices. This cements the Cambridge graphene-technology cluster and shows how an effective academic-industrial partnership is key to help move graphene from the lab to the factory floor.”

As an example of how long this development has been in the works, I have a Nov. 7, 2011 posting about a University of Cambridge stretchable, electronic skin produced by what was then the university’s Nokia Research Centre. That ‘skin’ was a big step toward achieving a phone/device/flexible display (the Morph), wrappable around your wrist, first publicized in 2008 as I noted in a March 30, 2010 posting.

According to the news release, there should be some more news soon,

This joint effort between Plastic Logic and the CGC was also recently boosted by a grant from the UK Technology Strategy Board, within the ‘realising the graphene revolution’ initiative. This will target the realisation of an advanced, full colour, OLED-based display within the next 12 months.

My colleague Dexter Johnson has offered some business-oriented insight into this development at Cambridge in his Sept. 9, 2014 posting on the Nanoclast blog on the IEEE (Institute of Electrical and Electronics Engineers) website (Note: Links have been removed),

In the UK’s concerted efforts to become a hub for graphene commercialization, one of the key partnerships between academic research and industry has been the one between the Cambridge Graphene Centre located at the University of Cambridge and a number of companies, including Nokia, Dyson, BAE Systems, Philips and Plastic Logic. The last on this list, Plastic Logic, was spun out originally from the University of Cambridge in 2000. However, it required a $200 million investment from RusNano back in 2011 to keep itself afloat, and for a time it called Mountain View, California, home.

The post is well worth reading for anyone interested in the twists and turns of graphene commercialization in the UK.

Nanotechnology, tobacco plants, and the Ebola virus

Before presenting information about the current Ebola crisis and issues with vaccines and curatives, here’s a description of the disease from its Wikipedia entry,

Ebola virus disease (EVD) or Ebola hemorrhagic fever (EHF) is a disease of humans and other primates caused by an ebola virus. Symptoms start two days to three weeks after contracting the virus, with a fever, sore throat, muscle pain, and headaches. Typically nausea, vomiting, and diarrhea follow, along with decreased functioning of the liver and kidneys. Around this time, affected people may begin to bleed both within the body and externally.

As for the current crisis in countries situated on the west coast of the African continent, there’s this from an Aug. 14, 2014 news item on ScienceDaily,

The outbreak of Ebola virus disease that has claimed more than 1,000 lives in West Africa this year poses a serious, ongoing threat to that region: the spread to capital cities and Nigeria — Africa’s most populous nation — presents new challenges for healthcare professionals. The situation has garnered significant attention and fear around the world, but proven public health measures and sharpened clinical vigilance will contain the epidemic and thwart a global spread, according to a new commentary by Anthony S. Fauci, M.D., director of the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health.

Dr. Fauci’s Aug. 13, 2014 commentary (open access) in the New England Journal of Medicine provides more detail (Note: A link has been removed),

An outbreak of Ebola virus disease (EVD) has jolted West Africa, claiming more than 1000 lives since the virus emerged in Guinea in early 2014 (see the article’s figure, “Ebola Virus Cases and Deaths in West Africa (Guinea, Liberia, Nigeria, and Sierra Leone), as of August 11, 2014 [Panel A], and Over Time [Panel B]”). The rapidly increasing numbers of cases in the African countries of Guinea, Liberia, and Sierra Leone have had public health authorities on high alert throughout the spring and summer. More recent events including the spread of EVD to Nigeria (Africa’s most populous country) and the recent evacuation to the United States of two American health care workers with EVD have captivated the world’s attention and concern. Health professionals and the general public are struggling to comprehend these unfolding dynamics and to separate misinformation and speculation from truth.

In early 2014, EVD emerged in a remote region of Guinea near its borders with Sierra Leone and Liberia. Since then, the epidemic has grown dramatically, fueled by several factors. First, Guinea, Sierra Leone, and Liberia are resource-poor countries already coping with major health challenges, such as malaria and other endemic diseases, some of which may be confused with EVD. Next, their borders are porous, and movement between countries is constant. Health care infrastructure is inadequate, and health workers and essential supplies including personal protective equipment are scarce. Traditional practices, such as bathing of corpses before burial, have facilitated transmission. The epidemic has spread to cities, which complicates tracing of contacts. Finally, decades of conflict have left the populations distrustful of governing officials and authority figures such as health professionals. Add to these problems a rapidly spreading virus with a high mortality rate, and the scope of the challenge becomes clear.

Although the regional threat of Ebola in West Africa looms large, the chance that the virus will establish a foothold in the United States or another high-resource country remains extremely small. Although global air transit could, and most likely will, allow an infected, asymptomatic person to board a plane and unknowingly carry Ebola virus to a higher-income country, containment should be readily achievable. Hospitals in such countries generally have excellent capacity to isolate persons with suspected cases and to care for them safely should they become ill. Public health authorities have the resources and training necessary to trace and monitor contacts. Protocols exist for the appropriate handling of corpses and disposal of biohazardous materials. In addition, characteristics of the virus itself limit its spread. Numerous studies indicate that direct contact with infected bodily fluids — usually feces, vomit, or blood — is necessary for transmission and that the virus is not transmitted from person to person through the air or by casual contact. Isolation procedures have been clearly outlined by the Centers for Disease Control and Prevention (CDC). A high index of suspicion, proper infection-control practices, and epidemiologic investigations should quickly limit the spread of the virus.

Fauci’s article makes it clear that public concerns are rising in the US and I imagine that’s true of Canada too and many other parts of the world, not to mention the countries currently experiencing the EVD outbreak. In the midst of all this comes a US Food and Drug Administration (FDA) warning as per an Aug. 15, 2014 news item (originated by Reuters reporter Toni Clarke) on Nanowerk,

The U.S. Food and Drug Administration said on Thursday [Aug. 14, 2014] it has become aware of products being sold online that fraudulently claim to prevent or treat Ebola.

The FDA’s warning comes on the heels of comments by Nigeria’s top health official, Onyebuchi Chukwu, who reportedly said earlier Thursday [Aug. 14, 2014] that eight Ebola patients in Lagos, the country’s capital, will receive an experimental treatment containing nano-silver.

Erica Jefferson, a spokeswoman for the FDA, said she could not provide any information about the product referenced by the Nigerians.

The Aug. 14, 2014 FDA warning reads in part,

The U.S. Food and Drug Administration is advising consumers to be aware of products sold online claiming to prevent or treat the Ebola virus. Since the outbreak of the Ebola virus in West Africa, the FDA has seen and received consumer complaints about a variety of products claiming to either prevent the Ebola virus or treat the infection.

There are currently no FDA-approved vaccines or drugs to prevent or treat Ebola. Although there are experimental Ebola vaccines and treatments under development, these investigational products are in the early stages of product development, have not yet been fully tested for safety or effectiveness, and the supply is very limited. There are no approved vaccines, drugs, or investigational products specifically for Ebola available for purchase on the Internet. By law, dietary supplements cannot claim to prevent or cure disease.

As per the FDA’s reference to experimental vaccines, an Aug. 6, 2014 article by Caroline Chen, Mark Niquette, Mark Langreth, and Marie French for Bloomberg describes the ZMapp vaccine/treatment (Note: Links have been removed),

On a small plot of land incongruously tucked amid a Kentucky industrial park sit five weather-beaten greenhouses. At the site, tobacco plants contain one of the most promising hopes for developing an effective treatment for the deadly Ebola virus.

The plants contain designer antibodies developed by San Diego-based Mapp Biopharmaceutical Inc. and are grown in Kentucky by a unit of Reynolds American Inc. Two stricken U.S. health workers received an experimental treatment containing the antibodies in Liberia last week. Since receiving doses of the drug, both patients’ conditions have improved.

Tobacco plant-derived medicines, which are also being developed by a company whose investors include Philip Morris International Inc., are part of a handful of cutting edge plant-based treatments that are in the works for everything from pandemic flu to rabies using plants such as lettuce, carrots and even duckweed. While the technique has existed for years, the treatments have only recently begun to reach the marketplace.

Researchers try to identify the best antibodies in the lab, before testing them on mice, then eventually on monkeys. Mapp’s experimental drug, dubbed ZMapp, has three antibodies, which work together to alert the immune system and neutralize the Ebola virus, she [Erica Ollman Saphire, a molecular biologist at the Scripps Research Institute,] said.

This is where the tobacco comes in: the plants are used as hosts to grow large amounts of the antibodies. Genes for the desired antibodies are fused to genes for a natural tobacco virus, Charles Arntzen, a plant biotechnology expert at Arizona State University, said in an Aug. 4 [2014] telephone interview.

The tobacco plants are then infected with this new artificial virus, and antibodies are grown inside the plant. Eventually, the tobacco is ground up and the antibody is extracted, Arntzen said.

The process of growing antibodies in mammals risks transferring viruses that could infect humans, whereas “plants are so far removed, so if they had some sort of plant virus we wouldn’t get sick because viruses are host-specific,” said Qiang Chen, a plant biologist at Arizona State University in Tempe, Arizona, in a telephone interview.

There is a Canadian (?) company working on tobacco-based vaccines, including one for EVD, but as the Bloomberg writers note, the project is highly secret,

Another tobacco giant-backed company working on biotech drugs grown in tobacco plants is Medicago Inc. in Quebec City, which is owned by Mitsubishi Tanabe Pharma Corp. and Philip Morris. [emphasis mine]

Medicago is working on testing a vaccine for pandemic influenza and has a production greenhouse facility in North Carolina, said Jean-Luc Martre, senior director for government affairs at Medicago. Medicago is planning a final stage trial of the pandemic flu vaccine for next year, he said in a telephone interview.

The plant method is flexible and capable of making antibodies and vaccines for numerous types of viruses, said Martre. In addition to influenza, the company’s website says it is in early stages of testing products for rabies and rotavirus.

Medicago ‘‘is currently closely working with partners for the production of an Ebola antibody as well as other antibodies that are of interest for bio-defense,” he said in an e-mail. He would not disclose who the partners were. [emphasis mine]

I have checked both the English and French language versions of Medicago’s website and cannot find any information about their work on Ebola. (The Bloomberg article provides a good overview of the Ebola situation and more. I recommend reading it and/or the Aug. 15, 2014 posting on CTV [Canadian Television Network], which originated from an Associated Press article by Malcolm Ritter.)

Moving on to more research and Ebola, Dexter Johnson, in an Aug. 14, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), describes some work from Northeastern University (US). Note: Links have been removed,

With the Ebola virus death toll now topping 1000 and even the much publicized experimental treatment ZMapp failing to save the life of a Spanish missionary priest who was treated with it, it is clear that scientists need to explore new ways of fighting the deadly disease. For researchers at Northeastern University in Boston, one possibility may be using nanotechnology.

“It has been very hard to develop a vaccine or treatment for Ebola or similar viruses because they mutate so quickly,” said Thomas Webster, the chair of Northeastern’s chemical engineering department, in a press release. “In nanotechnology we turned our attention to developing nanoparticles that could be attached chemically to the viruses and stop them from spreading.”

Webster, along with many researchers in the nanotechnology community, has been trying to use gold nanoparticles, in combination with near-infrared light, to kill cancer cells with heat. The hope is that the same approach could be used to kill the Ebola virus.

There is also an Aug. 6, 2014 Northeastern University news release by Joe O’Connell describing the technique being used by Webster’s team,

… According to Webster, gold nanoparticles are currently being used to treat cancer. Infrared waves, he explained, heat up the gold nanoparticles, which, in turn, attack and destroy everything from viruses to cancer cells, but not healthy cells.

Recognizing that a larger surface area would lead to a quicker heat-up time, Webster’s team created gold nanostars. “The star has a lot more surface area, so it can heat up much faster than a sphere can,” Webster said. “And that greater surface area allows it to attack more viruses once they absorb to the particles.” The problem the researchers face, however, is making sure the hot gold nanoparticles attack the virus or cancer cells rather than the healthy cells.
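
Webster’s surface-area point can be illustrated with spheres alone (a toy calculation of mine; real nanostars gain additional area from their spikes). The surface-to-volume ratio of a sphere reduces to 3/r, so smaller or spikier particles expose more gold per unit volume for heating and virus binding.

import math

def sphere_sa_to_volume(r_nm):
    area = 4 * math.pi * r_nm ** 2          # surface area of a sphere
    volume = (4 / 3) * math.pi * r_nm ** 3  # enclosed volume
    return area / volume                    # reduces to 3 / r_nm

for r in (50, 25, 10):
    print(f"r = {r} nm -> SA/V = {sphere_sa_to_volume(r):.2f} per nm")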

At this point, there don’t seem to be any curative measures generally available although some are available experimentally in very small quantities.

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment representing 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years. However, scaling to 7 nanometers and perhaps below by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development for IBM’s highly differentiated computing systems for cloud, and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips, include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” A quantum bit, or qubit, by contrast, can hold a “1,” a “0,” or both values at once. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.
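
In the standard notation (my addition, not IBM’s), a qubit state is a weighted combination of both classical values at once,

\[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

where a measurement yields 0 with probability |α|² and 1 with probability |β|².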

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in the field of experimental and theoretical quantum information, fields that are still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of parity check with three superconducting qubits, an essential building block for one type of quantum computer.

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.
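
IBM’s target numbers imply some striking ratios (my back-of-envelope arithmetic, not from the release),

neurons = 10e9       # ten billion neurons
synapses = 100e12    # one hundred trillion synapses
power_w = 1000.0     # one kilowatt power budget

print(synapses / neurons)  # 10,000 synapses per neuron
print(power_w / synapses)  # 1e-11, i.e., about 10 picowatts per synapse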

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics based transceiver with wavelength division multiplexing.  Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real time, huge volumes of information known as Big Data. Silicon nanophotonics technology provides answers to Big Data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart from each other, and moving terabytes of data via pulses of light through optical fibers.

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99 percent, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling–this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – the equivalent of 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible.

Graphene

Graphene is pure carbon in the form of a one atomic layer thick sheet.  It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible.  Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. Its characteristics offer the possibility to build faster switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business where it will be a more efficient switch than those currently used.

Recently in 2013, IBM demonstrated the world’s first graphene based integrated circuit receiver front end for wireless communications. The circuit consisted of a 2-stage amplifier and a down converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet — even after closing the valve as far as possible water continues to drip — this is similar to today’s transistor, in that energy is constantly “leaking” or being lost or wasted in the off-state.

A potential alternative to today’s power-hungry silicon field effect transistors is so-called steep slope devices. They could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistor, the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over complementary CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.
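
Some context for why “steep slope” matters (my addition, not IBM’s): a conventional MOSFET’s subthreshold swing, the gate voltage needed to change the drain current tenfold, is limited by thermionic emission to

\[ SS = \frac{\partial V_G}{\partial (\log_{10} I_D)} \ \ge\ \frac{k_B T}{q} \ln 10 \ \approx\ 60\ \text{mV/decade at room temperature}, \]

which is what keeps supply voltages from shrinking much further. Because TFETs switch via band-to-band tunneling rather than thermionic emission, they are not bound by this limit, which is the source of the projected power savings.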

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first ever InAs/Si tunnel diodes and TFETs using InAs as source and Si as channel with wrap-around gate as steep slope device for low power consumption applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed silicon germanium (SiGe), High-k gate dielectrics, embedded DRAM, 3D chip stacking, and Air gap insulators.

IBM researchers also are credited with initiating the era of nano devices following the Nobel prize winning invention of the scanning tunneling microscope which enabled nano and atomic scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop the future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the NanoElectronics Research Initiative (NRI), the Semiconductor Advanced Research Network (STARnet), and the Global Research Consortium (GRC) of the Semiconductor Research Corporation.

I highlighted ‘new memory technologies’ as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.
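
The arithmetic behind those figures is easy to verify (my check, not Forbes’),

investment = 3e9     # the $3B commitment
years = 5
annual_rd = 6e9      # IBM's roughly $6B yearly R&D spend

per_year = investment / years
print(per_year)              # 600000000.0, i.e., $600M per year
print(per_year / annual_rd)  # 0.1, i.e., 10% of the R&D budget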

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses the D-Wave’s situation at that time.

A final observation: these are fascinating developments, especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene. I encourage you to read Dexter’s posting as he got answers to some very astute and pointed questions.

Competition, collaboration, and a smaller budget: the US nano community responds

Before getting to the competition, collaboration, and budget mentioned in the head for this posting, I’m supplying some background information.

Within the context of a May 20, 2014 ‘National Nanotechnology Initiative’ hearing before the U.S. House of Representatives Subcommittee on Research and Technology, Committee on Science, Space, and Technology, the US Government Accountability Office (GAO) presented a 22 pp. précis (PDF; titled: Nanomanufacturing and U.S. Competitiveness: Challenges and Opportunities) of its 125 pp. report (PDF; titled: Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health).

Having already commented on the full report itself in a Feb. 10, 2014 posting, I’m pointing you to Dexter Johnson’s May 21, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) where he discusses the précis from the perspective of someone who was consulted by the US GAO when they were writing the full report (Note: Links have been removed),

I was interviewed extensively by two GAO economists for the accompanying [full] report “Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health,” where I shared background information on research I helped compile and write on global government funding of nanotechnology.

While I acknowledge that the experts who were consulted for this report are more likely the source for its views than I am, I was pleased to see the report reflect many of my own opinions. Most notable among these is bridging the funding gap in the middle stages of the manufacturing-innovation process, which is placed at the top of the report’s list of challenges.

While I am in agreement with much of the report’s findings, it suffers from a fundamental misconception in seeing nanotechnology’s development as a kind of race between countries. [emphases mine]

(I encourage you to read the full text of Dexter’s comments as he offers more than a simple comment about competition.)

Carrying on from this notion of a ‘nanotechnology race’, at least one publication focused on that aspect. From the May 20, 2014 article by Ryan Abbott for CourthouseNews.com,

Nanotech Could Keep U.S. Ahead of China

WASHINGTON (CN) – Four of the nation’s leading nanotechnology scientists told a U.S. House of Representatives panel Tuesday that a little tweaking could go a long way in keeping the United States ahead of China and others in the industry.

The hearing focused on the status of the National Nanotechnology Initiative, a federal program launched in 2001 for the advancement of nanotechnology.

As I noted earlier, the hearing was focused on the National Nanotechnology Initiative (NNI) and all of its efforts. It’s quite intriguing to see what gets emphasized in media reports and, in this case, the dearth of media reports.

I have one more tidbit: the testimony from Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology. The testimony is in a May 21, 2014 news item on insurancenewsnet.com,

Testimony by Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology

Chairman Bucshon, Ranking Member Lipinski, and Members of the Committee, it is my distinct privilege to be here with you today to discuss nanotechnology and the role of the National Nanotechnology Initiative in promoting its development for the benefit of the United States.

Highlights of the National Nanotechnology Initiative

Our current Federal research and development program in nanotechnology is strong. The NNI agencies continue to further the NNI’s goals of (1) advancing nanotechnology R&D, (2) fostering nanotechnology commercialization, (3) developing and maintaining the U.S. workforce and infrastructure, and (4) supporting the responsible and safe development of nanotechnology. …

…

The sustained, strategic Federal investment in nanotechnology R&D combined with strong private sector investments in the commercialization of nanotechnology-enabled products has made the United States the global leader in nanotechnology. The most recent (2012) NNAP report analyzed a wide variety of sources and metrics and concluded that “… in large part as a result of the NNI the United States is today… the global leader in this exciting and economically promising field of research and technological development.” n10 A recent report on nanomanufacturing by Congress’s own Government Accountability Office (GAO) arrived at a similar conclusion, again drawing on a wide variety of sources and stakeholder inputs. n11 As discussed in the GAO report, nanomanufacturing and commercialization are key to capturing the value of Federal R&D investments for the benefit of the U.S. economy. The United States leads the world by one important measure of commercial activity in nanotechnology: According to one estimate, n12 U.S. companies invested $4.1 billion in nanotechnology R&D in 2012, far more than investments by companies in any other country.  …

There’s cognitive dissonance at work here as Dexter notes in his own way,

… somewhat ironically, the [GAO] report suggests that one of the ways forward is more international cooperation, at least in the development of international standards. And in fact, one of the report’s key sources of information, Mihail Roco, has made it clear that international cooperation in nanotechnology research is the way forward.

It seems to me that much of the testimony and at least some of the anxiety about being left behind can be traced to a decreased 2015 budget allotment for nanotechnology (mentioned here in a March 31, 2014 posting [US National Nanotechnology Initiative’s 2015 budget request shows a decrease of $200M]).

One can also infer a certain anxiety from a recent presentation by Barbara Herr Harthorn, head of UCSB’s [University of California at Santa Barbara] Center for Nanotechnology in Society (CNS). She was at a February 2014 meeting of the Presidential Commission for the Study of Bioethical Issues (mentioned in parts one and two [the more substantive description of the meeting, which also features a Canadian academic from the genomics community] of my recent series on “Brains, prostheses, nanotechnology, and human enhancement”). I noted in part five of the series what seems to be a shift towards brain research as a likely beneficiary of the public engagement work accomplished under NNI auspices and, in the case of the Canadian academic, the genomics effort.

The Americans are not the only ones feeling competitive as this tweet from Richard Jones, Pro-Vice Chancellor for Research and Innovation at Sheffield University (UK), physicist, and author of Soft Machines, suggests,

May 18

The UK has fewer than 1% of world patents on graphene, despite it being discovered here, according to the FT –

I recall reading a report a few years back which noted that experts in China were concerned about falling behind internationally in their research efforts. These anxieties are not new; C.P. Snow’s book and lecture The Two Cultures (1959) also referenced concerns in the UK about scientific progress and being left behind.

Competition/collaboration is an age-old conundrum, about as ancient as the anxiety of being left behind. The question now is: how are we all going to resolve these issues this time?

ETA May 28, 2014: The American Institute of Physics (AIP) has produced a summary of the May 20, 2014 hearing as part of their FYI: The AIP Bulletin of Science Policy News, May 27, 2014 (no. 93).

ETA Sept. 12, 2014: My first posting about the diminished budget allocation for the US NNI was this March 31, 2014 posting.

Does more nano-enabled security = more nano-enabled surveillance?

A May 6, 2014 essay by Brandon Engel published on Nanotechnology Now poses an interesting question about the use of nanotechnology-enabled security and surveillance measures (Note: Links have been removed),

Security is of prime importance in an increasingly globalized society. It has a role to play in protecting citizens and states from myriad malevolent forces, such as organized crime or terrorist acts, and in responding, as well as preventing, both natural and man-made disasters. Research and development in this field often focuses on certain broad areas, including security of infrastructures and utilities; intelligence surveillance and border security; and stability and safety in cases of crisis. …

Nanotechnology is coming to play an ever greater role in these applications. Whether it’s used for detecting potentially harmful materials for homeland security, finding pathogens in water supply systems, or for early warning and detoxification of harmful airborne substances, its usefulness and efficiency are becoming more evident by the day.

He’s quite right about these applications. For example, I’ve just published a May 9, 2014 piece, ‘Textiles laced with carbon nanotubes for clothing that protects against poison gas’.

Engel goes on to describe a dark side to nanotechnology-enabled security,

On the other hand, more and more unsettling scenarios are fathomable with the advent of this new technology, such as covertly infiltrated devices, as small as tiny insects, being used to coordinate and execute a disarming attack on obsolete weapons systems, information apparatuses, or power grids.

Engel is also right about the potential surveillance issues. In a Dec. 18, 2013 posting I featured a special issue of SIGNAL Magazine (which covers the latest trends and techniques in topics that include C4ISR, information security, intelligence, electronics, homeland security, cyber technologies, …) focusing on nanotechnology-enabled security and surveillance,

The Dec. 1, 2013 article by Rita Boland (h/t Dec. 13, 2013 Azonano news item) does a good job of presenting a ‘big picture’ approach including nonmilitary and military nanotechnology applications by interviewing the main players in the US,

Nanotechnology is the new cyber, according to several major leaders in the field. Just as cyber is entrenched across global society now, nano is poised to be the major capabilities enabler of the next decades. Expert members from the National Nanotechnology Initiative representing government and science disciplines say nano has great significance for the military and the general public.

For anyone who may think Engel is exaggerating when he mentions tiny insects being used for surveillance, there’s this May 8, 2014 post (Cyborg Beetles Detect Nerve Gas) by Dexter Johnson on his Nanoclast blog (Note: Dexter is an engineer who describes the technology in a somewhat detailed, technical fashion). I have a less technical description of some then-current research in an Aug. 12, 2011 posting featuring some military experiments, for example, a surveillance camera disguised as a hummingbird (I have a brief video of a demonstration) and some research into how smartphones can be used for surveillance.

Engel comes to an interesting conclusion (Note: A link has been removed),

The point is this: whatever conveniences are seemingly afforded by these sort of technological advances, there is persistent ambiguity about the extent to which this technology actually protects or makes us more vulnerable. Striking the right balance between respecting privacy and security is an ever-elusive goal, and at such an early point in the development of nanotech, must be approached on a case by case basis. … [emphasis mine]

I don’t understand what Engel means when he says “case by case.” Are these individual applications that he feels are prone to misuse, or specific usages of those applications? In any event, while I appreciate the concerns (I share many of them), I don’t think his proposed approach is practicable, which leads to another question: what can be done? Sadly, I have no answers, but I am glad to see the question being asked in the ‘nanotechnology webspace’.

I did some searching for Brandon Engel online and found this January 17, 2014 guest post (about a Dean Koontz book) on The Belle’s Tales blog. He also has a blog of his own, Brandon Engel, where he describes himself this way,

Musician, filmmaker, multimedia journalist, puppeteer, and professional blogger based in Chicago.

The man clearly has a wide range of interests and concerns.

As for the question posed in this post’s head, I don’t think there is a simple one-to-one equivalency where one more security procedure results in one more surveillance procedure. However, I do believe there is a relationship between the two and that sometimes increased security is an argument used to support increased surveillance procedures. While Engel doesn’t state that explicitly in his piece, I think it is implied.

One final thought: surveillance is not new, and one of the more interesting examples of the ‘art’ is featured in a description of the Parisian constabulary of the 18th century written by Nina Kushner in,

The Case of the Closely Watched Courtesans
The French police obsessively tracked the kept women of 18th-century Paris. Why? (Slate.com, April 15, 2014)

or

Republished as: French police obsessively tracked elite sex workers of 18th-century Paris — and well-to-do men who hired them (National Post, April 16, 2014)

Kushner starts her article by describing contemporary sex workers and a 2014 Urban Institute study, then draws parallels between today’s sex workers and the kept women of 18th-century Paris while detailing advances in surveillance reporting,

… One of the very first police forces in the Western world emerged in 18th-century Paris, and one of its vice units asked many of the same questions as the Urban Institute authors: How much do sex workers earn? Why do they turn to sex work in the first place? What are their relationships with their employers?

The vice unit, which operated from 1747 to 1771, turned out thousands of hand-written pages detailing what these dames entretenues [kept women] did. …

… They gathered biographical and financial data on the men who hired kept women — princes, peers of the realm, army officers, financiers, and their sons, a veritable “who’s who” of high society, or le monde. Assembling all of this information required cultivating extensive spy networks. Making it intelligible required certain bureaucratic developments: These inspectors perfected the genre of the report and the information management system of the dossier. These forms of “police writing,” as one scholar has described them, had been emerging for a while. But they took a giant leap forward at midcentury, with the work of several Paris police inspectors, including Inspector Jean-Baptiste Meusnier, the officer in charge of this vice unit from its inception until 1759. Meusnier and his successor also had clear literary talent; the reports are extremely well written, replete with irony, clever turns of phrase, and even narrative tension — at times, they read like novels.

If you have the time, Kushner’s well-written article offers fascinating insight.

Environmental impacts and graphene

Researchers at the University of California at Riverside (UCR) have published the results of what they claim is the first study of the environmental impact of graphene use. From the April 29, 2014 news item on ScienceDaily,

In a first-of-its-kind study of how a material some think could transform the electronics industry moves in water, researchers at the University of California, Riverside Bourns College of Engineering found graphene oxide nanoparticles are very mobile in lakes or streams and therefore may well cause negative environmental impacts if released.

Graphene oxide nanoparticles are an oxidized form of graphene, a single layer of carbon atoms prized for its strength, conductivity and flexibility. Applications for graphene include everything from cell phones and tablet computers to biomedical devices and solar panels.

The use of graphene and other carbon-based nanomaterials, such as carbon nanotubes, are growing rapidly. At the same time, recent studies have suggested graphene oxide may be toxic to humans. [emphasis mine]

As production of these nanomaterials increase, it is important for regulators, such as the Environmental Protection Agency, to understand their potential environmental impacts, said Jacob D. Lanphere, a UC Riverside graduate student who co-authored a just-published paper about graphene oxide nanoparticles transport in ground and surface water environments.

I wish they had cited the studies suggesting graphene oxide (GO) may be toxic. After a quick search I found: Internalization and cytotoxicity of graphene oxide and carboxyl graphene nanoplatelets in the human hepatocellular carcinoma cell line Hep G2 by Tobias Lammel, Paul Boisseaux, Maria-Luisa Fernández-Cruz, and José M Navas (free access paper in Particle and Fibre Toxicology 2013, 10:27 http://www.particleandfibretoxicology.com/content/10/1/27). From what I can tell, this was a highly specialized investigation conducted in a laboratory. While the results seem concerning, it’s difficult to draw conclusions from this study or others that may have been conducted.

Dexter Johnson in a May 1, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides more relevant citations and some answers (Note: Links have been removed),

While the UC Riverside [researchers] did not look at the toxicity of GO in their study, researchers at the Hersam group from Northwestern University did report in a paper published in the journal Nano Letters (“Minimizing Oxidation and Stable Nanoscale Dispersion Improves the Biocompatibility of Graphene in the Lung”) that GO was the most toxic form of graphene-based materials that were tested in mice lungs. In other research published in the Journal of Hazardous Materials (“Investigation of acute effects of graphene oxide on wastewater microbial community: A case study”), investigators determined that the toxicity of GO was dose dependent and was toxic in the range of 50 to 300 mg/L. So, below 50 mg/L there appear to be no toxic effects to GO. To give you some context, arsenic is considered toxic at 0.01 mg/L.

Dexter also contrasts graphene oxide with graphene (from his May 1, 2014 post; Note: A link has been removed),

While GO is quite different from graphene in terms of its properties (GO is an insulator while graphene is a conductor), there are many applications that are similar for both GO and graphene. This is the result of GO’s functional groups allowing for different derivatives to be made on the surface of GO, which in turn allows for additional chemical modification. Some have suggested that GO would make a great material to be deposited on additional substrates for thin conductive films where the surface could be tuned for use in optical data storage, sensors, or even biomedical applications.

Getting back to the UCR research, an April 28, 2014 UCR news release (also on EurekAlert but dated April 29, 2014) describes it in more detail,

Walker’s [Sharon L. Walker, an associate professor and the John Babbage Chair in Environmental Engineering at UC Riverside] lab is one of only a few in the country studying the environmental impact of graphene oxide. The research that led to the Environmental Engineering Science paper focused on understanding graphene oxide nanoparticles’ stability, or how well they hold together, and movement in groundwater versus surface water.

The researchers found significant differences.

In groundwater, which typically has a higher degree of hardness and a lower concentration of natural organic matter, the graphene oxide nanoparticles tended to become less stable and eventually settle out or be removed in subsurface environments.

In surface waters, where there is more organic material and less hardness, the nanoparticles remained stable and moved farther, especially in the subsurface layers of the water bodies.

The researchers also found that graphene oxide nanoparticles, despite being nearly flat, as opposed to spherical, like many other engineered nanoparticles, follow the same theories of stability and transport.

I don’t know what conclusions to draw from the information that the graphene oxide nanoparticles remain stable and moved farther in the water. Is a potential buildup of graphene oxide nanoparticles considered a problem because it could end up in our water supply and we would be poisoned by these particles? Dexter provides an answer (from his May 1, 2014 post),

Ultimately, the question of danger of any material or chemical comes down to the simple equation: Hazard x Exposure=Risk. To determine what the real risk is of GO reaching concentrations equal to those that have been found to be toxic (50-300 mg/L) is the key question.

The results of this latest study don’t really answer that question, but only offer a tool by which to measure the level of exposure to groundwater if there was a sudden spill of GO at a manufacturing facility.
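
For readers who want to see how Dexter’s ‘Hazard x Exposure = Risk’ screening might work in practice, here’s a minimal sketch (the 50–300 mg/L toxic range and the 0.01 mg/L arsenic figure come from his post; the sampling sites and concentrations are entirely hypothetical):

```python
# Toy screening based on the thresholds quoted in Dexter Johnson's May 1, 2014 post:
# GO reported toxic in the range of 50-300 mg/L; arsenic considered toxic at 0.01 mg/L.
GO_TOXIC_THRESHOLD_MG_L = 50.0
ARSENIC_TOXIC_THRESHOLD_MG_L = 0.01

def go_concentration_is_concerning(concentration_mg_l: float) -> bool:
    """Flag a measured GO concentration that reaches the reported toxic range."""
    return concentration_mg_l >= GO_TOXIC_THRESHOLD_MG_L

# Hypothetical measurements (mg/L) from water near an imagined manufacturing spill.
samples = {"upstream": 0.002, "spill site": 120.0, "1 km downstream": 8.5}
for site, concentration in samples.items():
    status = ("concerning" if go_concentration_is_concerning(concentration)
              else "below reported toxic range")
    print(f"{site}: {concentration} mg/L -> {status}")

# For context, GO's reported lower toxic bound is 5,000 times higher than arsenic's:
print(GO_TOXIC_THRESHOLD_MG_L / ARSENIC_TOXIC_THRESHOLD_MG_L)  # -> 5000.0
```

The sketch only restates the quoted thresholds; as Dexter notes, the real work is measuring the exposure side of the equation, which is what the UCR transport study begins to make possible.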

While I was focused on ingestion by humans, it seems this research was more focused on the natural environment and possible future poisoning by graphene oxide.

Here’s a link to and a citation for the paper,

Stability and Transport of Graphene Oxide Nanoparticles in Groundwater and Surface Water by Jacob D. Lanphere, Brandon Rogers, Corey Luth, Carl H. Bolster, and Sharon L. Walker. Environmental Engineering Science, ahead of print. doi:10.1089/ees.2013.0392.

Online Ahead of Print: March 17, 2014

The paper is behind a paywall.