Tag Archives: Dexter Johnson

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment, which represents 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years.  However, scaling to 7 nanometers and perhaps below, by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing.
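As an aside (my arithmetic, not IBM’s): the node progression quoted above roughly follows the traditional ~0.7x linear shrink per generation, which doubles transistor density at each step:

```python
# Back-of-envelope check of the node progression quoted above:
# each full node step is roughly a 0.7x linear shrink, which
# halves transistor area (0.7^2 is about 0.5), doubling density.
nodes_nm = [22, 14, 10, 7]

for a, b in zip(nodes_nm, nodes_nm[1:]):
    shrink = b / a                # linear scale factor per step
    density_gain = (a / b) ** 2   # area/density improvement per step
    print(f"{a}nm -> {b}nm: {shrink:.2f}x linear, {density_gain:.1f}x density")
```

The 22nm-to-14nm step is slightly more aggressive than 0.7x, which hints at why each successive node is getting harder and more expensive to reach.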

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development for IBM’s highly differentiated computing systems for cloud, and big data analytics.

Several exploratory research breakthroughs could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips; these include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” Quantum bits, or qubits, can hold a value of “1,” “0,” or both values at the same time. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in the field of experimental and theoretical quantum information, fields that are still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of parity check with three superconducting qubits, an essential building block for one type of quantum computer.
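For readers who’d like to make the bit/qubit distinction concrete, here’s a toy simulation (my own sketch, assuming nothing about IBM’s hardware) of a single qubit placed into an equal superposition:

```python
import numpy as np

# A classical bit is one of two values.
bit = 0

# A qubit is a normalized 2-component complex vector: alpha|0> + beta|1>.
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5], an equal chance of reading 0 or 1
```

The state holds both amplitudes at once until measured; with n qubits the state vector has 2^n amplitudes, which is the source of the “millions of solutions all at once” claim.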

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.
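Those targets imply some striking per-neuron numbers, which are easy to check:

```python
# IBM's stated neurosynaptic targets, reduced to per-neuron figures.
neurons = 10e9      # ten billion neurons
synapses = 100e12   # one hundred trillion synapses
power_w = 1000.0    # one kilowatt power budget

synapses_per_neuron = synapses / neurons
watts_per_neuron = power_w / neurons

print(synapses_per_neuron)  # 10000.0 synapses per neuron
print(watts_per_neuron)     # 1e-07, i.e. 100 nanowatts per neuron
```

For comparison, the human brain is usually estimated at roughly 20 watts for about 86 billion neurons, so IBM’s one-kilowatt goal is in a broadly brain-like power regime rather than a datacenter one.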

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics based transceiver with wavelength division multiplexing.  Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real-time, huge volumes of information known as Big Data. Silicon nanophotonics technology provides answers to Big Data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart from each other, and moving terabytes of data via pulses of light through optical fibers.
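To get a feel for why wavelength division multiplexing matters, here’s a back-of-envelope calculation; the channel count and per-channel rate are hypothetical, since the release gives no figures:

```python
# Wavelength division multiplexing: several independent colors of light
# share one fiber, so aggregate bandwidth scales with the channel count.
# The figures below are illustrative, not IBM's specifications.
wavelengths = 8            # distinct optical channels on one fiber
gbps_per_wavelength = 25   # hypothetical per-channel data rate

aggregate_gbps = wavelengths * gbps_per_wavelength
print(aggregate_gbps)      # 200 Gb/s over a single fiber
```

The same multiplying trick is much harder with copper, where adding lanes means adding physical wires and contending with crosstalk.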

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99 percent, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling–this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – the equivalent to 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible.
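The hair comparison checks out arithmetically, assuming the commonly quoted hair diameter of roughly 100 micrometers:

```python
# Sanity check on the size comparison above: a human hair is roughly
# 100 micrometers across; 10,000x thinner lands at 10 nanometers.
hair_nm = 100_000       # ~100 micrometers, expressed in nanometers
ratio = 10_000

print(hair_nm / ratio)  # 10.0 nm, the CNT transistor scale cited
```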


Graphene

Graphene is pure carbon in the form of a one atomic layer thick sheet. It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. Its characteristics offer the possibility to build faster switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business where it will be a more efficient switch than those currently used.

Recently in 2013, IBM demonstrated the world’s first graphene based integrated circuit receiver front end for wireless communications. The circuit consisted of a 2-stage amplifier and a down converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet — even after closing the valve as far as possible water continues to drip — this is similar to today’s transistor, in that energy is constantly “leaking” or being lost or wasted in the off-state.

A potential alternative to today’s power hungry silicon field effect transistors are so-called steep slope devices. They could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistors the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over complementary CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.
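The faucet analogy has a precise counterpart in device physics: at room temperature a conventional FET cannot reduce its off-state current tenfold with less than about 60 mV of gate swing (the Boltzmann limit), which is the floor TFETs are designed to tunnel under. The limit itself is simple to compute:

```python
import math

# Thermal (Boltzmann) limit on a conventional FET's subthreshold swing:
# S = (kT/q) * ln(10), the minimum gate-voltage change needed to cut
# off-state current by 10x. Steep-slope devices like TFETs beat this
# floor because band-to-band tunneling is not thermally activated.
k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

S = (k * T / q) * math.log(10) * 1000  # in mV per decade of current
print(round(S, 1))  # ~59.5 mV/decade
```

Because this floor scales with temperature rather than geometry, simply shrinking silicon transistors cannot remove it, which is why steep-slope devices require a different switching mechanism altogether.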

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first ever InAs/Si tunnel diodes and TFETs using InAs as source and Si as channel with wrap-around gate as steep slope device for low power consumption applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed silicon germanium (SiGe), High-k gate dielectrics, embedded DRAM, 3D chip stacking, and Air gap insulators.

IBM researchers also are credited with initiating the era of nano devices following the Nobel prize winning invention of the scanning tunneling microscope which enabled nano and atomic scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop the future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the NanoElectronics Research Initiative (NRI), the Semiconductor Advanced Research Network (STARnet), and the Global Research Consortium (GRC) of the Semiconductor Research Corporation.

I highlighted ‘new memory technologies’ as it brings to mind HP Labs and their major investment in ‘memristive’ technologies, noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Meg Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses D-Wave’s situation at that time.

One final observation: these are fascinating developments, especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene. I encourage you to read Dexter’s posting, as he got answers to some very astute and pointed questions.

Competition, collaboration, and a smaller budget: the US nano community responds

Before getting to the competition, collaboration, and budget mentioned in the head for this posting, I’m supplying some background information.

Within the context of a May 20, 2014 ‘National Nanotechnology Initiative’ hearing before the U.S. House of Representatives Subcommittee on Research and Technology, Committee on Science, Space, and Technology, the US Government Accountability Office (GAO) presented a 22 pp. précis (PDF; titled: Nanomanufacturing and U.S. Competitiveness: Challenges and Opportunities) of its 125 pp. report (PDF; titled: Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health).

Having already commented on the full report itself in a Feb. 10, 2014 posting, I’m pointing you to Dexter Johnson’s May 21, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) where he discusses the précis from the perspective of someone who was consulted by the US GAO when they were writing the full report (Note: Links have been removed),

I was interviewed extensively by two GAO economists for the accompanying [full] report “Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health,” where I shared background information on research I helped compile and write on global government funding of nanotechnology.

While I acknowledge that the experts who were consulted for this report are more likely the source for its views than I am, I was pleased to see the report reflect many of my own opinions. Most notable among these is bridging the funding gap in the middle stages of the manufacturing-innovation process, which is placed at the top of the report’s list of challenges.

While I am in agreement with much of the report’s findings, it suffers from a fundamental misconception in seeing nanotechnology’s development as a kind of race between countries. [emphases mine]

(I encourage you to read the full text of Dexter’s comments as he offers more than a simple comment about competition.)

Carrying on from this notion of a ‘nanotechnology race’, at least one publication focused on that aspect. From the May 20, 2014 article by Ryan Abbott for CourthouseNews.com,

Nanotech Could Keep U.S. Ahead of China

WASHINGTON (CN) – Four of the nation’s leading nanotechnology scientists told a U.S. House of Representatives panel Tuesday that a little tweaking could go a long way in keeping the United States ahead of China and others in the industry.

The hearing focused on the status of the National Nanotechnology Initiative, a federal program launched in 2001 for the advancement of nanotechnology.

As I noted earlier, the hearing was focused on the National Nanotechnology Initiative (NNI) and all of its efforts. It’s quite intriguing to see what gets emphasized in media reports and, in this case, the dearth of media reports.

I have one more tidbit, the testimony from Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology. The testimony is in a May 21, 2014 news item on insurancenewsnet.com,

Testimony by Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology

Chairman Bucshon, Ranking Member Lipinski, and Members of the Committee, it is my distinct privilege to be here with you today to discuss nanotechnology and the role of the National Nanotechnology Initiative in promoting its development for the benefit of the United States.

Highlights of the National Nanotechnology Initiative

Our current Federal research and development program in nanotechnology is strong. The NNI agencies continue to further the NNI’s goals of (1) advancing nanotechnology R&D, (2) fostering nanotechnology commercialization, (3) developing and maintaining the U.S. workforce and infrastructure, and (4) supporting the responsible and safe development of nanotechnology. …


The sustained, strategic Federal investment in nanotechnology R&D combined with strong private sector investments in the commercialization of nanotechnology-enabled products has made the United States the global leader in nanotechnology. The most recent (2012) NNAP report analyzed a wide variety of sources and metrics and concluded that “… in large part as a result of the NNI the United States is today… the global leader in this exciting and economically promising field of research and technological development.” A recent report on nanomanufacturing by Congress’s own Government Accountability Office (GAO) arrived at a similar conclusion, again drawing on a wide variety of sources and stakeholder inputs. As discussed in the GAO report, nanomanufacturing and commercialization are key to capturing the value of Federal R&D investments for the benefit of the U.S. economy. The United States leads the world by one important measure of commercial activity in nanotechnology: According to one estimate, U.S. companies invested $4.1 billion in nanotechnology R&D in 2012, far more than investments by companies in any other country. …

There’s cognitive dissonance at work here as Dexter notes in his own way,

… somewhat ironically, the [GAO] report suggests that one of the ways forward is more international cooperation, at least in the development of international standards. And in fact, one of the report’s key sources of information, Mihail Roco, has made it clear that international cooperation in nanotechnology research is the way forward.

It seems to me that much of the testimony and at least some of the anxiety about being left behind can be traced to a decreased 2015 budget allotment for nanotechnology (mentioned here in a March 31, 2014 posting [US National Nanotechnology Initiative’s 2015 budget request shows a decrease of $200M]).

One can also infer a certain anxiety from a recent presentation by Barbara Herr Harthorn, head of UCSB’s [University of California at Santa Barbara] Center for Nanotechnology in Society (CNS). She was at a February 2014 meeting of the Presidential Commission for the Study of Bioethical Issues (mentioned in parts one and two [the more substantive description of the meeting, which also features a Canadian academic from the genomics community] of my recent series on “Brains, prostheses, nanotechnology, and human enhancement”). I noted in part five of the series what seems to be a shift towards brain research as a likely beneficiary of the public engagement work accomplished under NNI auspices and, in the case of the Canadian academic, the genomics effort.

The Americans are not the only ones feeling competitive as this tweet from Richard Jones, Pro-Vice Chancellor for Research and Innovation at Sheffield University (UK), physicist, and author of Soft Machines, suggests,

May 18

The UK has fewer than 1% of world patents on graphene, despite it being discovered here, according to the FT –

I recall reading a report a few years back which noted that experts in China were concerned about falling behind internationally in their research efforts. These anxieties are not new; C.P. Snow’s book and lecture The Two Cultures (1959) also referenced concerns in the UK about scientific progress and being left behind.

Competition/collaboration is an age-old conundrum and about as ancient as anxieties of being left behind. The question now is how are we all going to resolve these issues this time?

ETA May 28, 2014: The American Institute of Physics (AIP) has produced a summary of the May 20, 2014 hearing as part of their FYI: The AIP Bulletin of Science Policy News, May 27, 2014 (no. 93).

Does more nano-enabled security = more nano-enabled surveillance?

A May 6, 2014 essay by Brandon Engel published on Nanotechnology Now poses an interesting question about the use of nanotechnology-enabled security and surveillance measures (Note: Links have been removed),

Security is of prime importance in an increasingly globalized society. It has a role to play in protecting citizens and states from myriad malevolent forces, such as organized crime or terrorist acts, and in responding, as well as preventing, both natural and man-made disasters. Research and development in this field often focuses on certain broad areas, including security of infrastructures and utilities; intelligence surveillance and border security; and stability and safety in cases of crisis. …

Nanotechnology is coming to play an ever greater role in these applications. Whether it’s used for detecting potentially harmful materials for homeland security, finding pathogens in water supply systems, or for early warning and detoxification of harmful airborne substances, its usefulness and efficiency are becoming more evident by the day.

He’s quite right about these applications. For example, I’ve just published a May 9, 2014 piece, ‘Textiles laced with carbon nanotubes for clothing that protects against poison gas‘.

Engel goes on to describe a dark side to nanotechnology-enabled security,

On the other hand, more and more unsettling scenarios are fathomable with the advent of this new technology, such as covertly infiltrated devices, as small as tiny insects, being used to coordinate and execute a disarming attack on obsolete weapons systems, information apparatuses, or power grids.

Engel is also right about the potential surveillance issues. In a Dec. 18, 2013 posting I featured a special issue of SIGNAL Magazine (which covers the latest trends and techniques in topics that include C4ISR, information security, intelligence, electronics, homeland security, cyber technologies,  …) focusing on nanotechnology-enabled security and surveillance,

The Dec. 1, 2013 article by Rita Boland (h/t Dec. 13, 2013 Azonano news item) does a good job of presenting a ‘big picture’ approach, including nonmilitary and military nanotechnology applications, by interviewing the main players in the US,

Nanotechnology is the new cyber, according to several major leaders in the field. Just as cyber is entrenched across global society now, nano is poised to be the major capabilities enabler of the next decades. Expert members from the National Nanotechnology Initiative representing government and science disciplines say nano has great significance for the military and the general public.

For anyone who may think Engel is exaggerating when he mentions tiny insects being used for surveillance, there’s this May 8, 2014 post (Cyborg Beetles Detect Nerve Gas) by Dexter Johnson on his Nanoclast blog (Note: Dexter is an engineer who describes the technology in a somewhat detailed, technical fashion). I have a less technical description of some then-current research in an Aug. 12, 2011 posting featuring some military experiments, for example, a surveillance camera disguised as a hummingbird (I have a brief video of a demonstration) and some research into how smartphones can be used for surveillance.

Engel comes to an interesting conclusion (Note: A link has been removed),

The point is this: whatever conveniences are seemingly afforded by these sort of technological advances, there is persistent ambiguity about the extent to which this technology actually protects or makes us more vulnerable. Striking the right balance between respecting privacy and security is an ever-elusive goal, and at such an early point in the development of nanotech, must be approached on a case by case basis. … [emphasis mine]

I don’t understand what Engel means when he says “case by case.” Are these individual applications that he feels are prone to misuse, or specific usages of these applications? In any event, while I appreciate the concerns (I share many of them), I don’t think his proposed approach is practicable, and that leads to another question: what can be done? Sadly, I have no answers, but I am glad to see the question being asked in the ‘nanotechnology webspace’.

I did some searching for Brandon Engel online and found this January 17, 2014 guest post (about a Dean Koontz book) on The Belle’s Tales blog. He also has a blog of his own, Brandon Engel, where he describes himself this way,

Musician, filmmaker, multimedia journalist, puppeteer, and professional blogger based in Chicago.

The man clearly has a wide range of interests and concerns.

As for the question posed in this post’s head, I don’t think there is a simple one-to-one equivalency where one more security procedure results in one more surveillance procedure. However, I do believe there is a relationship between the two and that sometimes increased security is an argument used to support increased surveillance procedures. While Engel doesn’t state that explicitly in his piece, I think it is implied.

One final thought: surveillance is not new, and one of the more interesting examples of the ‘art’ is featured in a description of the Parisian constabulary of the 18th century written by Nina Kushner,

The Case of the Closely Watched Courtesans
The French police obsessively tracked the kept women of 18th-century Paris. Why? (Slate.com, April 15, 2014)


Republished as: French police obsessively tracked elite sex workers of 18th-century Paris — and well-to-do men who hired them (National Post, April 16, 2014)

Kushner starts her article by describing contemporary sex workers and a 2014 Urban Institute study, then draws parallels between them and the kept women of 18th-century Paris while detailing the period’s advances in surveillance reporting,

… One of the very first police forces in the Western world emerged in 18th-century Paris, and one of its vice units asked many of the same questions as the Urban Institute authors: How much do sex workers earn? Why do they turn to sex work in the first place? What are their relationships with their employers?

The vice unit, which operated from 1747 to 1771, turned out thousands of hand-written pages detailing what these dames entretenues [kept women] did. …

… They gathered biographical and financial data on the men who hired kept women — princes, peers of the realm, army officers, financiers, and their sons, a veritable “who’s who” of high society, or le monde. Assembling all of this information required cultivating extensive spy networks. Making it intelligible required certain bureaucratic developments: These inspectors perfected the genre of the report and the information management system of the dossier. These forms of “police writing,” as one scholar has described them, had been emerging for a while. But they took a giant leap forward at midcentury, with the work of several Paris police inspectors, including Inspector Jean-Baptiste Meusnier, the officer in charge of this vice unit from its inception until 1759. Meusnier and his successor also had clear literary talent; the reports are extremely well written, replete with irony, clever turns of phrase, and even narrative tension — at times, they read like novels.

If you have the time, Kushner’s well-written article offers fascinating insight.

Environmental impacts and graphene

Researchers at the University of California at Riverside (UCR) have published the results of what they claim is the first study featuring the environmental impact from graphene use. From the April 29, 2014 news item on ScienceDaily,

In a first-of-its-kind study of how a material some think could transform the electronics industry moves in water, researchers at the University of California, Riverside Bourns College of Engineering found graphene oxide nanoparticles are very mobile in lakes or streams and therefore may well cause negative environmental impacts if released.

Graphene oxide nanoparticles are an oxidized form of graphene, a single layer of carbon atoms prized for its strength, conductivity and flexibility. Applications for graphene include everything from cell phones and tablet computers to biomedical devices and solar panels.

The use of graphene and other carbon-based nanomaterials, such as carbon nanotubes, is growing rapidly. At the same time, recent studies have suggested graphene oxide may be toxic to humans. [emphasis mine]

As production of these nanomaterials increase, it is important for regulators, such as the Environmental Protection Agency, to understand their potential environmental impacts, said Jacob D. Lanphere, a UC Riverside graduate student who co-authored a just-published paper about graphene oxide nanoparticles transport in ground and surface water environments.

I wish they had cited the studies suggesting graphene oxide (GO) may be toxic. After a quick search I found: Internalization and cytotoxicity of graphene oxide and carboxyl graphene nanoplatelets in the human hepatocellular carcinoma cell line Hep G2 by Tobias Lammel, Paul Boisseaux, Maria-Luisa Fernández-Cruz, and José M Navas (free access paper in Particle and Fibre Toxicology 2013, 10:27 http://www.particleandfibretoxicology.com/content/10/1/27). From what I can tell, this was a highly specialized investigation conducted in a laboratory. While the results seem concerning it’s difficult to draw conclusions from this study or others that may have been conducted.

Dexter Johnson in a May 1, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides more relevant citations and some answers (Note: Links have been removed),

While the UC Riverside team did not look at the toxicity of GO in their study, researchers at the Hersam group from Northwestern University did report in a paper published in the journal Nano Letters (“Minimizing Oxidation and Stable Nanoscale Dispersion Improves the Biocompatibility of Graphene in the Lung”) that GO was the most toxic form of graphene-based materials that were tested in mice lungs. In other research published in the Journal of Hazardous Materials (“Investigation of acute effects of graphene oxide on wastewater microbial community: A case study”), investigators determined that the toxicity of GO was dose dependent and was toxic in the range of 50 to 300 mg/L. So, below 50 mg/L there appear to be no toxic effects to GO. To give you some context, arsenic is considered toxic at 0.01 mg/L.

Dexter also contrasts graphene oxide with graphene (from his May 1, 2014 post; Note: A link has been removed),

While GO is quite different from graphene in terms of its properties (GO is an insulator while graphene is a conductor), there are many applications that are similar for both GO and graphene. This is the result of GO’s functional groups allowing for different derivatives to be made on the surface of GO, which in turn allows for additional chemical modification. Some have suggested that GO would make a great material to be deposited on additional substrates for thin conductive films where the surface could be tuned for use in optical data storage, sensors, or even biomedical applications.

Getting back to the UCR research, an April 28, 2014 UCR news release (also on EurekAlert but dated April 29, 2014) describes it in more detail,

Walker’s [Sharon L. Walker, an associate professor and the John Babbage Chair in Environmental Engineering at UC Riverside] lab is one of only a few in the country studying the environmental impact of graphene oxide. The research that led to the Environmental Engineering Science paper focused on understanding graphene oxide nanoparticles’ stability, or how well they hold together, and movement in groundwater versus surface water.

The researchers found significant differences.

In groundwater, which typically has a higher degree of hardness and a lower concentration of natural organic matter, the graphene oxide nanoparticles tended to become less stable and eventually settle out or be removed in subsurface environments.

In surface waters, where there is more organic material and less hardness, the nanoparticles remained stable and moved farther, especially in the subsurface layers of the water bodies.

The researchers also found that graphene oxide nanoparticles, despite being nearly flat, as opposed to spherical, like many other engineered nanoparticles, follow the same theories of stability and transport.

I don’t know what conclusions to draw from the information that the graphene nanoparticles remain stable and moved further in the water. Is a potential buildup of graphene nanoparticles considered a problem because it could end up in our water supply and we would be poisoned by these particles? Dexter provides an answer (from his May 1, 2014 post),

Ultimately, the question of danger of any material or chemical comes down to the simple equation: Hazard x Exposure=Risk. To determine what the real risk is of GO reaching concentrations equal to those that have been found to be toxic (50-300 mg/L) is the key question.

The results of this latest study don’t really answer that question, but only offer a tool by which to measure the level of exposure to groundwater if there was a sudden spill of GO at a manufacturing facility.
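Dexter’s ‘Hazard x Exposure = Risk’ equation lends itself to a trivial screening calculation. Here’s a minimal sketch in Python, assuming the dose-dependent range quoted above (the function name and classification labels are mine, purely for illustration, and this is not a regulatory tool),

```python
# Illustrative sketch of the Hazard x Exposure = Risk screening logic
# described above. The thresholds come from the dose-dependent toxicity
# range quoted for graphene oxide (50-300 mg/L); the function and its
# labels are a hypothetical simplification for demonstration only.

GO_TOXIC_LOW_MG_L = 50.0     # below this, no toxic effects were reported
GO_TOXIC_HIGH_MG_L = 300.0   # upper end of the reported toxic range
ARSENIC_TOXIC_MG_L = 0.01    # contextual comparison from Dexter's post

def go_risk_category(concentration_mg_l: float) -> str:
    """Classify a measured GO concentration against the quoted range."""
    if concentration_mg_l < GO_TOXIC_LOW_MG_L:
        return "below reported toxic range"
    if concentration_mg_l <= GO_TOXIC_HIGH_MG_L:
        return "within reported toxic range"
    return "above reported toxic range"

print(go_risk_category(10))    # below reported toxic range
print(go_risk_category(120))   # within reported toxic range
# GO's no-effect threshold is about 5,000x arsenic's toxic concentration:
print(round(GO_TOXIC_LOW_MG_L / ARSENIC_TOXIC_MG_L))  # 5000
```

The comparison at the end is the point: even if GO proved toxic, its reported no-effect threshold sits orders of magnitude above that of a classic poison like arsenic.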

While I was focused on ingestion by humans, it seems this research was more focused on the natural environment and possible future poisoning by graphene oxide.

Here’s a link to and a citation for the paper,

Stability and Transport of Graphene Oxide Nanoparticles in Groundwater and Surface Water by Jacob D. Lanphere, Brandon Rogers, Corey Luth, Carl H. Bolster, and Sharon L. Walker. Environmental Engineering Science, ahead of print. doi:10.1089/ees.2013.0392.

Online Ahead of Print: March 17, 2014

Once available online, this paper will be behind a paywall.

Researchers at Purdue University (Indiana, US) and at the Indian Institute of Technology Madras (Chennai, India) develop Star Trek-type ‘tricorders’

To be clear, the Star Trek-type ‘tricorder’ referred to in the heading is, in fact, a hand-held spectrometer and the research from Purdue University and the Indian Institute of Technology Madras represents a developmental leap forward, not a new product. From a March 26, 2014 news item on Azonano,

Nanotechnology is advancing tools likened to Star Trek’s “tricorder” that perform on-the-spot chemical analysis for a range of applications including medical testing, explosives detection and food safety.

Researchers found that when the paper used to collect a sample was coated with carbon nanotubes, the voltage required was reduced 1,000-fold, the signal was sharpened and the equipment was able to capture far more delicate molecules.

Dexter Johnson in his March 26, 2014 posting (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some background information about the race to miniaturize spectrometers (Note: A link has been removed),

Recent research has been relying on nanomaterials to build smaller spectrometers. Late last year, a group at the Technische Universität Dresden and the Fraunhofer Institute in Germany developed a novel, miniature spectrometer, based on metallic nanowires, that was small enough to fit into a mobile phone.

Dexter goes on to provide a summary about this latest research, which I strongly recommend reading, especially if you don’t have the patience to read the rest of the news release. The March 25, 2014 Purdue University news release by Elizabeth K. Gardner, which originated the news item, provides insight from the researchers,

“This is a big step in our efforts to create miniature, handheld mass spectrometers for the field,” said R. Graham Cooks, Purdue’s Henry B. Hass Distinguished Professor of Chemistry. “The dramatic decrease in power required means a reduction in battery size and cost to perform the experiments. The entire system is becoming lighter and cheaper, which brings it that much closer to being viable for easy, widespread use.”

Cooks and Thalappil Pradeep, a professor of chemistry at the Indian Institute of Technology Madras, Chennai, led the research.

“Taking science to the people is what is most important,” Pradeep said. “Mass spectrometry is a fantastic tool, but it is not yet on every physician’s table or in the pocket of agricultural inspectors and security guards. Great techniques have been developed, but we need to hone them into tools that are affordable, can be efficiently manufactured and easily used.”

The news release goes on to describe the research,

The National Science Foundation-funded study used an analysis technique developed by Cooks and his colleagues called PaperSpray™ ionization. The technique relies on a sample obtained by wiping an object or placing a drop of liquid on paper wet with a solvent to capture residues from the object’s surface. A small triangle is then cut from the paper and placed on a special attachment of the mass spectrometer where voltage is applied. The voltage creates an electric field that turns the mixture of solvent and residues into fine droplets containing ionized molecules that pop off and are vacuumed into the mass spectrometer for analysis. The mass spectrometer then identifies the sample’s ionized molecules by their mass.

The technique depends on a strong electric field and the nanotubes act like tiny antennas that create a strong electric field from a very small voltage. One volt over a few nanometers creates an electric field equivalent to 10 million volts over a centimeter, Pradeep said.
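Pradeep’s equivalence is straightforward unit arithmetic: one volt across one nanometre and 10 million volts across one centimetre both work out to about 10^9 volts per metre. A quick sketch to check it (treating ‘a few nanometers’ as exactly one nanometre, which is my simplification),

```python
# Check the field-strength equivalence quoted above: E = V / d, in V/m.
# "A few nanometers" is simplified here to exactly one nanometre.
import math

volt_over_nanometre = 1.0 / 1e-9          # 1 V across 1 nm
megavolts_over_centimetre = 10e6 / 1e-2   # 10 million V across 1 cm

print(f"{volt_over_nanometre:.0e}")        # 1e+09
print(f"{megavolts_over_centimetre:.0e}")  # 1e+09
assert math.isclose(volt_over_nanometre, megavolts_over_centimetre)
```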

“The trick was to isolate these tiny, nanoscale antennae and keep them from bundling together because individual nanotubes must project out of the paper,” he said. “The carbon nanotubes work well and can be dispersed in water and applied on suitable substrates.”

The Nano Mission of the Government of India supported the research at the Indian Institute of Technology Madras and graduate students Rahul Narayanan and Depanjan Sarkar performed the experiments.

In addition to reducing the size of the battery required and energy cost to run the tests, the new technique also simplified the analysis by nearly eliminating background noise, Cooks said.

“Under these conditions, the analysis is nearly noise free and a sharp, clear signal of the sample is delivered,” he said. “We don’t know why this is – why background molecules that surround us in the air or from within the equipment aren’t being ionized and entering into the analysis. It’s a puzzling, but pleasant surprise.”

The reduced voltage required also makes the method gentler than the standard PaperSpray™ ionization techniques.

“It is a very soft method,” Cooks said. “Fragile molecules and complexes are able to hold together here when they otherwise wouldn’t. This could lead to other potential applications.”

The team plans to investigate the mechanisms behind the reduction in background noise and potential applications of the gentle method, but the most promising aspect of the new technique is its potential to miniaturize the mass spectrometry system, Cooks said.

Here’s a link to and a citation for the paper,

Molecular Ionization from Carbon Nanotube Paper by Rahul Narayanan, Depanjan Sarkar, Prof. R. Graham Cooks, and Prof. Thalappil Pradeep. Angewandte Chemie International Edition Article first published online: 18 MAR 2014 DOI: 10.1002/anie.201311053

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Water desalination by graphene and water purification by sapwood

I have two items about water. The first concerns a new technique from MIT (Massachusetts Institute of Technology) for desalination using graphene. From a Feb. 25, 2014 news release by David Chandler on EurekAlert,

Researchers have devised a way of making tiny holes of controllable size in sheets of graphene, a development that could lead to ultrathin filters for improved desalination or water purification.

The team of researchers at MIT, Oak Ridge National Laboratory, and in Saudi Arabia succeeded in creating subnanoscale pores in a sheet of the one-atom-thick material, which is one of the strongest materials known. …

The concept of using graphene, perforated by nanoscale pores, as a filter in desalination has been proposed and analyzed by other MIT researchers. The new work, led by graduate student Sean O’Hern and associate professor of mechanical engineering Rohit Karnik, is the first step toward actual production of such a graphene filter.

Making these minuscule holes in graphene — a hexagonal array of carbon atoms, like atomic-scale chicken wire — occurs in a two-stage process. First, the graphene is bombarded with gallium ions, which disrupt the carbon bonds. Then, the graphene is etched with an oxidizing solution that reacts strongly with the disrupted bonds — producing a hole at each spot where the gallium ions struck. By controlling how long the graphene sheet is left in the oxidizing solution, the MIT researchers can control the average size of the pores.

A big limitation in existing nanofiltration and reverse-osmosis desalination plants, which use filters to separate salt from seawater, is their low permeability: Water flows very slowly through them. The graphene filters, being much thinner, yet very strong, can sustain a much higher flow. “We’ve developed the first membrane that consists of a high density of subnanometer-scale pores in an atomically thin, single sheet of graphene,” O’Hern says.

For efficient desalination, a membrane must demonstrate “a high rejection rate of salt, yet a high flow rate of water,” he adds. One way of doing that is decreasing the membrane’s thickness, but this quickly renders conventional polymer-based membranes too weak to sustain the water pressure, or too ineffective at rejecting salt, he explains.

With graphene membranes, it becomes simply a matter of controlling the size of the pores, making them “larger than water molecules, but smaller than everything else,” O’Hern says — whether salt, impurities, or particular kinds of biochemical molecules.

The permeability of such graphene filters, according to computer simulations, could be 50 times greater than that of conventional membranes, as demonstrated earlier by a team of MIT researchers led by graduate student David Cohen-Tanugi of the Department of Materials Science and Engineering. But producing such filters with controlled pore sizes has remained a challenge. The new work, O’Hern says, demonstrates a method for actually producing such material with dense concentrations of nanometer-scale holes over large areas.

“We bombard the graphene with gallium ions at high energy,” O’Hern says. “That creates defects in the graphene structure, and these defects are more chemically reactive.” When the material is bathed in a reactive oxidant solution, the oxidant “preferentially attacks the defects,” and etches away many holes of roughly similar size. O’Hern and his co-authors were able to produce a membrane with 5 trillion pores per square centimeter, well suited to use for filtration. “To better understand how small and dense these graphene pores are, if our graphene membrane were to be magnified about a million times, the pores would be less than 1 millimeter in size, spaced about 4 millimeters apart, and span over 38 square miles, an area roughly half the size of Boston,” O’Hern says.
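O’Hern’s magnification analogy holds up to back-of-envelope arithmetic. Assuming the quoted density of 5 trillion pores per square centimetre and a 1 cm² sheet (the sheet size is my assumption, inferred from the centimetre-sized sheet the researchers describe),

```python
# Back-of-envelope check of the magnification analogy quoted above.
# Assumptions (mine): a uniform square array of pores and a 1 cm x 1 cm sheet.
import math

PORES_PER_CM2 = 5e12   # quoted pore density
MAGNIFICATION = 1e6    # "magnified about a million times"
SHEET_CM = 1.0         # assumed sheet side length

# Average centre-to-centre spacing for a uniform array, then magnify.
spacing_cm = 1.0 / math.sqrt(PORES_PER_CM2)
spacing_mm_magnified = spacing_cm * MAGNIFICATION * 10  # cm -> mm

# Magnified sheet area in square miles (1 mile = 160,934.4 cm).
side_miles = SHEET_CM * MAGNIFICATION / 160934.4
area_sq_miles = side_miles ** 2

print(round(spacing_mm_magnified, 1))  # 4.5 -- "about 4 millimeters apart"
print(round(area_sq_miles, 1))         # 38.6 -- "over 38 square miles"
```

The ~4.5 mm spacing and ~38.6 square miles both match the figures in the quote, which suggests the analogy was computed for a 1 cm² sheet.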

With this technique, the researchers were able to control the filtration properties of a single, centimeter-sized sheet of graphene: Without etching, no salt flowed through the defects formed by gallium ions. With just a little etching, the membranes started allowing positive salt ions to flow through. With further etching, the membranes allowed both positive and negative salt ions to flow through, but blocked the flow of larger organic molecules. With even more etching, the pores were large enough to allow everything to go through.

Scaling up the process to produce useful sheets of the permeable graphene, while maintaining control over the pore sizes, will require further research, O’Hern says.

Karnik says that such membranes, depending on their pore size, could find various applications. Desalination and nanofiltration may be the most demanding, since the membranes required for these plants would be very large. But for other purposes, such as selective filtration of molecules — for example, removal of unreacted reagents from DNA — even the very small filters produced so far might be useful.

“For biofiltration, size or cost are not as critical,” Karnik says. “For those applications, the current scale is suitable.”

Dexter Johnson in a Feb. 26, 2014 posting provides some context for and insight into the work (from the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: Links have been removed,

About 18 months ago, I wrote about an MIT project in which computer models demonstrated that graphene could act as a filter in the desalination of water through the reverse osmosis (RO) method. RO is slightly less energy intensive than the predominantly used multi-stage-flash process. The hope was that the nanopores of the graphene material would make the RO method even less energy intensive than current versions by making it easier to push the water through the filter membrane.

The models were promising, but other researchers in the field said at the time it was going to be a long road to translate a computer model to a real product.

It would seem that the MIT researchers agreed it was worth the effort and accepted the challenge to go from computer model to a real device as they announced this week that they had developed a method for creating selective pores in graphene that make it suitable for water desalination.

Here’s a link to and a citation for the paper,

Selective Ionic Transport through Tunable Subnanometer Pores in Single-Layer Graphene Membranes by Sean C. O’Hern, Michael S. H. Boutilier, Juan-Carlos Idrobo, Yi Song, Jing Kong, Tahar Laoui, Muataz Atieh, and Rohit Karnik. Nano Lett., Article ASAP DOI: 10.1021/nl404118f Publication Date (Web): February 3, 2014

Copyright © 2014 American Chemical Society

This article is behind a paywall.

The second item is also from MIT and concerns a low-tech means of purifying water. From a Feb. 27, 2014 news item on Azonano,

If you’ve run out of drinking water during a lakeside camping trip, there’s a simple solution: Break off a branch from the nearest pine tree, peel away the bark, and slowly pour lake water through the stick. The improvised filter should trap any bacteria, producing fresh, uncontaminated water.

In fact, an MIT team has discovered that this low-tech filtration system can produce up to four liters of drinking water a day — enough to quench the thirst of a typical person.

In a paper published this week in the journal PLoS ONE, the researchers demonstrate that a small piece of sapwood can filter out more than 99 percent of the bacteria E. coli from water. They say the size of the pores in sapwood — which contains xylem tissue evolved to transport sap up the length of a tree — also allows water through while blocking most types of bacteria.

Co-author Rohit Karnik, an associate professor of mechanical engineering at MIT, says sapwood is a promising, low-cost, and efficient material for water filtration, particularly for rural communities where more advanced filtration systems are not readily accessible.

“Today’s filtration membranes have nanoscale pores that are not something you can manufacture in a garage very easily,” Karnik says. “The idea here is that we don’t need to fabricate a membrane, because it’s easily available. You can just take a piece of wood and make a filter out of it.”

The Feb. 26, 2014 news release on EurekAlert, which originated the news item, describes current filtration techniques and the advantages associated with this new low-tech approach,

There are a number of water-purification technologies on the market today, although many come with drawbacks: Systems that rely on chlorine treatment work well at large scales, but are expensive. Boiling water to remove contaminants requires a great deal of fuel to heat the water. Membrane-based filters, while able to remove microbes, are expensive, require a pump, and can become easily clogged.

Sapwood may offer a low-cost, small-scale alternative. The wood is comprised of xylem, porous tissue that conducts sap from a tree’s roots to its crown through a system of vessels and pores. Each vessel wall is pockmarked with tiny pores called pit membranes, through which sap can essentially hopscotch, flowing from one vessel to another as it feeds structures along a tree’s length. The pores also limit cavitation, a process by which air bubbles can grow and spread in xylem, eventually killing a tree. The xylem’s tiny pores can trap bubbles, preventing them from spreading in the wood.

“Plants have had to figure out how to filter out bubbles but allow easy flow of sap,” Karnik observes. “It’s the same problem with water filtration where we want to filter out microbes but maintain a high flow rate. So it’s a nice coincidence that the problems are similar.”

The news release also describes the experimental procedure the scientists followed (from the news release),

To study sapwood’s water-filtering potential, the researchers collected branches of white pine and stripped off the outer bark. They cut small sections of sapwood measuring about an inch long and half an inch wide, and mounted each in plastic tubing, sealed with epoxy and secured with clamps.

Before experimenting with contaminated water, the group used water mixed with red ink particles ranging from 70 to 500 nanometers in size. After all the liquid passed through, the researchers sliced the sapwood in half lengthwise, and observed that much of the red dye was contained within the very top layers of the wood, while the filtrate, or filtered water, was clear. This experiment showed that sapwood is naturally able to filter out particles bigger than about 70 nanometers.

However, in another experiment, the team found that sapwood was unable to separate out 20-nanometer particles from water, suggesting that there is a limit to the size of particles coniferous sapwood can filter.

Finally, the team flowed inactivated, E. coli-contaminated water through the wood filter. When they examined the xylem under a fluorescent microscope, they saw that bacteria had accumulated around pit membranes in the first few millimeters of the wood. Counting the bacterial cells in the filtered water, the researchers found that the sapwood was able to filter out more than 99 percent of E. coli from water.

Karnik says sapwood likely can filter most types of bacteria, the smallest of which measure about 200 nanometers. However, the filter probably cannot trap most viruses, which are much smaller in size.
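Putting the reported size cutoffs together, the filter behaves like a simple size-exclusion sieve: 20-nanometre particles pass, particles of roughly 70 nanometres and up are trapped, so 200-nanometre bacteria are caught while smaller viruses slip through. A hypothetical sketch (the exact cutoff between 20 and 70 nm was not determined in the study; the 70 nm value and the function are mine, not from the paper),

```python
# Hypothetical size-exclusion sketch based on the cutoffs reported above:
# white-pine sapwood trapped particles of ~70 nm and larger but let
# 20 nm particles through. The true cutoff lies somewhere between
# 20 and 70 nm; 70 nm is used here as a conservative assumption.

SAPWOOD_CUTOFF_NM = 70.0

def trapped_by_sapwood(particle_nm: float) -> bool:
    """True if a particle of this size is expected to be filtered out."""
    return particle_nm >= SAPWOOD_CUTOFF_NM

print(trapped_by_sapwood(200))  # True  -- E. coli-sized bacteria
print(trapped_by_sapwood(20))   # False -- passed through in the experiments
print(trapped_by_sapwood(500))  # True  -- the largest ink particles tested
```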

The researchers have future plans (from the news release),

Karnik says his group now plans to evaluate the filtering potential of other types of sapwood. In general, flowering trees have smaller pores than coniferous trees, suggesting that they may be able to filter out even smaller particles. However, vessels in flowering trees tend to be much longer, which may be less practical for designing a compact water filter.

Designers interested in using sapwood as a filtering material will also have to find ways to keep the wood damp, or to dry it while retaining the xylem function. In other experiments with dried sapwood, Karnik found that water either did not flow through well, or flowed through cracks, but did not filter out contaminants.

“There’s huge variation between plants,” Karnik says. “There could be much better plants out there that are suitable for this process. Ideally, a filter would be a thin slice of wood you could use for a few days, then throw it away and replace at almost no cost. It’s orders of magnitude cheaper than the high-end membranes on the market today.”

Here’s a link to and a citation for the paper,

Water Filtration Using Plant Xylem by Michael S. H. Boutilier, Jongho Lee, Valerie Chambers, Varsha Venkatesh, & Rohit Karnik. PLOS One Published: February 26, 2014 DOI: 10.1371/journal.pone.0089934

This paper is open access.

One final observation, two of the researchers listed as authors on the graphene/water desalination paper are also listed on the low-tech sapwood paper (Michael S. H. Boutilier & Rohit Karnik).

Injectable and more powerful batteries for live salmon

Today’s live salmon may sport a battery for monitoring purposes, and now scientists have developed one that is significantly more powerful, according to a Feb. 17, 2014 Pacific Northwest National Laboratory (PNNL) news release (dated Feb. 18, 2014 on EurekAlert),

Scientists have created a microbattery that packs twice the energy compared to current microbatteries used to monitor the movements of salmon through rivers in the Pacific Northwest and around the world.

The battery, a cylinder just slightly larger than a long grain of rice, is certainly not the world’s smallest battery, as engineers have created batteries far tinier than the width of a human hair. But those smaller batteries don’t hold enough energy to power acoustic fish tags. The new battery is small enough to be injected into an organism and holds much more energy than similar-sized batteries.

Here’s a photo of the battery as it rests amongst grains of rice,

The microbattery created by Jie Xiao and Daniel Deng and colleagues, amid grains of rice. Courtesy PNNL

The news release goes on to explain why scientists are developing a lighter battery for salmon and how they achieved their goal,

For scientists tracking the movements of salmon, the lighter battery translates to a smaller transmitter which can be inserted into younger, smaller fish. That would allow scientists to track their welfare earlier in the life cycle, oftentimes in the small streams that are crucial to their beginnings. The new battery also can power signals over longer distances, allowing researchers to track fish further from shore or from dams, or deeper in the water.

“The invention of this battery essentially revolutionizes the biotelemetry world and opens up the study of earlier life stages of salmon in ways that have not been possible before,” said M. Brad Eppard, a fisheries biologist with the Portland District of the U.S. Army Corps of Engineers.

“For years the chief limiting factor to creating a smaller transmitter has been the battery size. That hurdle has now been overcome,” added Eppard, who manages the Portland District’s fisheries research program.

The Corps and other agencies use the information from tags to chart the welfare of endangered fish and to help determine the optimal manner to operate dams. Three years ago the Corps turned to Z. Daniel Deng, a PNNL engineer, to create a smaller transmitter, one small enough to be injected, instead of surgically implanted, into fish. Injection is much less invasive and stressful for the fish, and it’s a faster and less costly process.

“This was a major challenge which really consumed us these last three years,” said Deng. “There’s nothing like this available commercially, that can be injected. Either the batteries are too big, or they don’t last long enough to be useful. That’s why we had to design our own.”

Deng turned to materials science expert Jie Xiao to create the new battery design.

To pack more energy into a small area, Xiao’s team improved upon the “jellyroll” technique commonly used to make larger household cylindrical batteries. Xiao’s team laid down layers of the battery materials one on top of the other in a process known as lamination, then rolled them up together, similar to how a jellyroll is created. The layers include a separating material sandwiched by a cathode made of carbon fluoride and an anode made of lithium.

The technique allowed her team to increase the area of the electrodes without increasing their thickness or the overall size of the battery. The increased area addresses one of the chief problems when making such a small battery — keeping the impedance, which is a lot like resistance, from getting too high. High impedance occurs when so many electrons are packed into a small place that they don’t flow easily or quickly along the routes required in a battery, instead getting in each other’s way. The smaller the battery, the bigger the problem.

Using the jellyroll technique allowed Xiao’s team to create a larger area for the electrons to interact, reducing impedance so much that the capacity of the material is about double that of traditional microbatteries used in acoustic fish tags.

“It’s a bit like flattening wads of Play-Doh, one layer at a time, and then rolling them up together, like a jelly roll,” says Xiao. “This allows you to pack more of your active materials into a small space without increasing the resistance.”

The new battery is a little more than half the weight of batteries currently used in acoustic fish tags — just 70 milligrams, compared to about 135 milligrams — and measures six millimeters long by three millimeters wide. The battery has an energy density of about 240 watt hours per kilogram, compared to around 100 for commercially available silver oxide button microbatteries.

The battery holds enough energy to send out an acoustic signal strong enough to be useful for fish-tracking studies even in noisy environments such as near large dams. The battery can power a 744-microsecond signal sent every three seconds for about three weeks, or about every five seconds for a month. It’s the smallest battery the researchers know of with enough energy capacity to maintain that level of signaling.
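A quick back-of-the-envelope check of the release's figures (my arithmetic, not from the paper): 70 milligrams at 240 watt-hours per kilogram stores about 60 joules, and pinging every three seconds for three weeks works out to roughly 100 microjoules available per ping.

```python
# Back-of-the-envelope check of the news release's numbers: total
# stored energy, and an upper bound on the energy budget per ping.

mass_kg = 70e-6         # 70 milligrams
energy_density = 240.0  # watt-hours per kilogram
energy_j = mass_kg * energy_density * 3600  # Wh -> joules

ping_interval_s = 3.0
lifetime_s = 3 * 7 * 24 * 3600  # three weeks in seconds
pings = lifetime_s / ping_interval_s
energy_per_ping_j = energy_j / pings

print(f"stored energy: {energy_j:.1f} J")                  # 60.5 J
print(f"pings over lifetime: {pings:,.0f}")                # 604,800
print(f"energy per ping: {energy_per_ping_j * 1e6:.0f} uJ")  # 100 uJ
```

That 100 microjoules has to cover the 744-microsecond acoustic pulse plus all standby losses, which gives a feel for how tight the design margin is.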

The batteries also work better in cold water where salmon often live, sending clearer signals at low temperatures compared to current batteries. That’s because their active ingredients are lithium and carbon fluoride, a chemistry that is promising for other applications but has not been common for microbatteries.

Last summer in Xiao’s laboratory, scientists Samuel Cartmell and Terence Lozano made more than 1,000 of the rice-sized batteries by hand. It’s a painstaking process: cutting and forming tiny snippets of sophisticated materials, putting them through a flattening device that resembles a pasta maker, binding them together, and rolling them by hand into tiny capsules. Their skilled hands rival those of surgeons, working not with tissue but with sensitive electronic materials.

A PNNL team led by Deng surgically implanted 700 of the tags into salmon in a field trial in the Snake River last summer. Preliminary results show that the tags performed extremely well. The results of that study and more details about the smaller, enhanced fish tags equipped with the new microbattery will come out in a forthcoming publication. Battelle, which operates PNNL, has applied for a patent on the technology.

I notice that while the second paragraph of the news release (in the first excerpt) says the battery is injectable, the final paragraph (in the second excerpt) says the team “surgically implanted” the tags with their new batteries into the salmon.

Here’s a link to and a citation for the newly published article in Scientific Reports,

Micro-battery Development for Juvenile Salmon Acoustic Telemetry System Applications by Honghao Chen, Samuel Cartmell, Qiang Wang, Terence Lozano, Z. Daniel Deng, Huidong Li, Xilin Chen, Yong Yuan, Mark E. Gross, Thomas J. Carlson, & Jie Xiao. Scientific Reports 4, Article number: 3790 (2014). doi: 10.1038/srep03790. Published 21 January 2014.

This paper is open access.

* I changed the headline from ‘Injectable batteries for live salmon made more powerful’ to ‘Injectable and more powerful batteries for live salmon’ to better reflect the information in the news release. Feb. 19, 2014 at 11:43 am PST.

ETA Feb. 20, 2014: Dexter Johnson has weighed in on this very engaging and practical piece of research in a Feb. 19, 2014 posting on his Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website (Note: Links have been removed),

There’s no denying that building the world’s smallest battery is a notable achievement. But while they may lay the groundwork for future battery technologies, today such microbatteries are mostly laboratory curiosities.

Developing a battery that’s no bigger than a grain of rice—and that’s actually useful in the real world—is quite another kind of achievement. Researchers at Pacific Northwest National Laboratory (PNNL) have done just that, creating a battery based on graphene that has successfully been used in monitoring the movements of salmon through rivers.

The microbattery is being heralded as a breakthrough in biotelemetry and should give researchers never-before-possible insights into the movements and the early stages of life of the fish.

The battery is partly made from a fluorinated graphene that was described last year …

Advice on marketing nano from a process engineering perspective

Robert Ferris, PhD, is writing a series of posts about the ‘Process Engineering of Nanotechnology’ on the Emerson Process Experts blog. Before getting to his marketing post, I’m going to briefly discuss his Jan. 4, 2014 posting (the first in this business-oriented series), which offers a good primer on the topic of nanotechnology, although I do have a proviso: Ferris’ posts should be read with some caution,

I contribute [sic] the knowledge gap to the fact that most of the writing out there is written by science-brains and first-adopters. Previous authors focus on the technology and potentials of bench-top scale innovation. This is great for the fellow science-brain but useless to the general population. I can say this because I am one of those science-brains.

The unfortunate truth is that most people do not understand nanotechnology nor care about the science behind it. They only care if the new product is better than the last. Nanotechnology is not a value proposition. So, the articles written do not focus on what the general population cares about. Instead, people are confused by nanotechnology and as a result are unsure of how it can be used.

I think Ferris means ‘attribute’ rather than ‘contribute’ and I infer from the evidence provided by the error that he (in common with me) does not have a copy editor. BTW, my worst was finding three errors in one of my sentences (sigh) weeks after I’d published. At any rate, I’m suggesting caution not due to this error but due to passages such as this (Note: Links have been removed),

Nanotechnology is not new; in fact, it was used as far back as the 16th century in stain glass windows. Also, nanotechnology is already being used in products today, ranging from consumer goods to food processing. Don’t be surprised if you didn’t know, a lot of companies do not publicize the fact that they use nanotechnology.

Strictly speaking, the first sentence is problematic since Ferris is describing ‘accidental’ nanotechnology. The artisans weren’t purposefully creating gold nanoparticles to get that particular shade of red in the glass, as opposed to what we’re doing today, and I think that’s a significant difference. (Dexter Johnson on his Nanoclast blog for the IEEE [Institute of Electrical and Electronics Engineers] has been very clear that these previous forays (Damascus steel, the Lycurgus Cup) cannot be described as nanotechnology since they were unintended.) As for the rest of the excerpt, it’s all quite true.

Ferris’ Feb. 11, 2014 post tackles marketing,

… While companies and products can miss growth targets for any number of reasons, one of the more common failures for nanotechnology-enabled products is improper marketing. Most would agree that marketing is as much art as science but marketing of nanotechnology-enabled products can be particularly tricky.

True again and he’s about to focus on one aspect of marketing,

Companies that develop nanotechnology-enabled products tend to fall into two camps—those that use nanotechnology as a differentiator in their marketing materials and those that do not. In the 5 P’s of marketing (Product, Place, Price, Promotion, and People), we are contrasting how each company approaches product marketing.

Product marketing focuses on communicating how that product meets a customer need. To do this, the marketing material must differentiate from other potential solutions. The question is, does nanotechnology serves [sic] as a differentiating value proposition for the customer?

As I understand it, communicating about the product and value propositions would fall under Promotion while decisions about what features to offer, physical design elements, etc. would fall under Product. Still, Ferris goes on to make some good points with his example of selling a nano-manufactured valve,

A local salesperson calls you up to see what you think. As a customer, you ask a simple question, “Why should we buy this new valve over the one we have been using for years?” What will you think if the sales-person answers, “Because it is based on nanotechnology!”? Answering this way does not address your pain points or satisfy your concerns over the risks of purchasing a new product.

My main difficulty with Ferris’ marketing post is a lack of clarity. He never distinguishes between business-to-business (B2B) marketing and business-to-consumer (B2C) marketing. There are differences; for example, consumers may not have the scientific or technical training to understand the more involved aspects of the product, but a business may have someone on staff who can and who could respond negatively to a lack of technical/scientific information.

I agree with Ferris on many points but I do feel he might address the issue of selling technology. He uses L’Oréal as an example of a company selling nanotechnology-enabled products, which they do, but their real product is beauty; the nanotechnology-enabled products are simply a means of delivering it. By contrast, a company like IBM sells technology, and a component or product that’s nanotechnology-enabled may require a little or a lot of education depending on the component/product and the customer.

For anyone who’s interested in marketing nanotechnology-enabled products and products based on other emerging technologies, I recommend reading Geoffrey A. Moore’s book, Crossing the Chasm. His examples are dated, as it was written about the ‘computer revolution’, but I think the basic principles still hold. As for Ferris’ postings, there’s good information but you may want to check out other sources; I recommend Dexter Johnson’s Nanoclast blog and Cientifica, an emerging technologies consultancy. (Dexter works for Cientifica, in addition to writing for the IEEE, but most of the publications on that site are by Tim Harper.) Oh, and you can check here too; although the business side of things is not my main focus, I still manage to write the odd piece about marketing (promotion usually).

3D television is resurrected by way of a nanocomposite

A Feb. 10, 2014 University of Central Florida news release by Barbara Abney (also on EurekAlert) tells the tale of a researcher working on the development of 3D images on television,

Gone are the goofy glasses required of existing sets. Instead, assistant professor Jayan Thomas is working on creating the materials necessary to create a 3-D image that could be seen from 360 degrees with no extra equipment.

“The TV screen should be like a table top,” Thomas said. “People would sit around and watch the TV from all angles like sitting around a table. Therefore, the images should be like real-world objects. If you watch a football game on this 3-D TV, you would feel like it is happening right in front of you. A holographic 3-D TV is a feasible direction to accomplish this without the need of glasses.”

His work is so far along that the National Science Foundation has given him a $400,000 grant over five years to develop the materials needed to produce display screens.

Here’s an image of Thomas sitting mimicking the experience of his 3D television at a tabletop,

UCF [University of Central Florida] Researcher Jayan Thomas uses nanotechnology to bring 3-D television back to life.

Thomas’ work comes at a very interesting juncture for the industry (from the news release),

When 3-D TVs first came on the market in 2010, there was a lot of hype and the market expected the new sets would take off. Several broadcasters even pledged to create special channels for 3-D programming, such as ESPN and the BBC.

But in the past year, those broadcasters have canceled plans because sales have lagged and the general public hasn’t adopted the sets as hoped. Some say that’s because the television sets are expensive and require bulky equipment and glasses.

Here’s how Thomas’ approach differs, in very general terms (from the news release),

Thomas’ approach would use new plastic composites made with nanotechnology to make the 3-D image recording process multitudes faster than currently possible. This would eliminate the need for glasses.

Thomas and his colleagues have developed the specific plastic composite needed to create the display screens necessary for effectively showing the 3-D images. That work has been published in the journals Nature and Advanced Materials.

There’s more about Dr. Thomas along with listings of his publications on his NanoScience Technology Center faculty page.

ETA Feb. 14, 2014: You may want to read Dexter Johnson’s Feb. 14, 2014 posting on his Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website concerning 3D televisions and Thomas’ work (Note: A link has been removed),

At this year’s Consumer Electronics Show (CES), it became clear that the much-ballyhooed age of 3-D TV was coming to a quiet and uncelebrated end. One of the suggested causes of its demise was the cost of the 3D glasses. If you wanted to invite a group over to watch the big sporting event, you had better have a lot of extra pairs on hand, which might cost you a small fortune.

Eliminating the glasses from the experience has been proposed from the first moment 3-D TVs were introduced to the marketplace.

Dexter goes on to provide technical context for Thomas’ work as he expands on his theme.