Tag Archives: Princeton University

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
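The target-word/attribute-word comparison described above can be sketched in a few lines of code. This is a simplified, illustrative version of the embedding-association idea, not the authors' actual test: the vectors below are made-up toy stand-ins for real GloVe embeddings, and the word sets are invented for the example.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: how closely two word vectors point in the same direction
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # Mean similarity of word w to attribute set A, minus its mean similarity to set B.
    # Positive values mean w leans toward A; negative, toward B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# Toy 3-d vectors standing in for real embeddings (illustrative only)
flower = np.array([0.9, 0.1, 0.0])
insect = np.array([0.1, 0.9, 0.0])
pleasant = [np.array([1.0, 0.0, 0.1]), np.array([0.8, 0.2, 0.0])]
unpleasant = [np.array([0.0, 1.0, 0.1]), np.array([0.2, 0.8, 0.0])]

print(association(flower, pleasant, unpleasant))  # positive: "flower" leans pleasant
print(association(insect, pleasant, unpleasant))  # negative: "insect" leans unpleasant
```

With real embeddings trained on web text, the same differential-association score is what surfaces the human-like biases the study reports.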

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly measured bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, the result can be gender-stereotyped sentences. The Turkish language uses a gender-neutral third-person pronoun, “o.” Plugged into the well-known online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Science, 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug. 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Making lead look like gold (so to speak)

Apparently you can make lead ‘look’ like gold if you can get it to reflect light in the same way. From a Feb. 28, 2017 news item on Nanowerk (Note: A link has been removed),

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Transmutation has been realized in modern times, but on a minute scale using a massive particle accelerator.

Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another. A computational theory published Feb. 24 [2017] in the journal Physical Review Letters (“How to Make Distinct Dynamical Systems Appear Spectrally Identical”) demonstrates that any two systems can be made to look alike, even if just for the smallest fraction of a second.

In this context, for two objects to “look” like each other, they need to reflect light in the same way. The Princeton researchers’ method involves using light to make non-permanent changes to a substance’s molecules so that they mimic the reflective properties of another substance’s molecules. This ability could have implications for optical computing, a type of computing in which electrons are replaced by photons that could greatly enhance processing power but has proven extremely difficult to engineer. It also could be applied to molecular detection and experiments in which expensive samples could be replaced by cheaper alternatives.

A Feb. 28, 2017 Princeton University news release (also on EurekAlert) by Tien Nguyen, which originated the news item, expands on the theme (Note: Links have been removed),

“It was a big shock for us that such a general statement as ‘any two objects can be made to look alike’ could be made,” said co-author Denys Bondar, an associate research scholar in the laboratory of co-author Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry.

The Princeton researchers posited that they could control the light that bounces off a molecule or any substance by controlling the light shone on it, which would allow them to alter how it looks. This type of manipulation requires a powerful light source such as an ultrafast laser and would last for only a femtosecond, or one quadrillionth of a second. Unlike normal light sources, this ultrafast laser pulse is strong enough to interact with molecules and distort their electron cloud while not actually changing their identity.

“The light emitted by a molecule depends on the shape of its electron cloud, which can be sculptured by modern lasers,” Bondar said. Using advanced computational theory, the research team developed a method called “spectral dynamic mimicry” that allowed them to calculate the laser pulse shape, which includes timing and wavelength, to produce any desired spectral output. In other words, making any two systems look alike.

Conversely, this spectral control could also be used to make two systems look as different from one another as possible. This differentiation, the researchers suggested, could prove valuable for applications of molecular detections such as identifying toxic versus safe chemicals.

Shaul Mukamel, a chemistry professor at the University of California-Irvine, said that the Princeton research is a step forward in an important and active research field called coherent control, in which light can be manipulated to control behavior at the molecular level. Mukamel, who has collaborated with the Rabitz lab but was not involved in the current work, said that the Rabitz group has had a prominent role in this field for decades, advancing technology such as quantum computing and using light to drive artificial chemical reactivity.

“It’s a very general and nice application of coherent control,” Mukamel said. “It demonstrates that you can, by shaping the optical paths, bring the molecules to do things that you want beforehand — it could potentially be very significant.”

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another, even if just for the smallest fraction of a second. The researchers are, left to right, Renan Cabrera, an associate research scholar in chemistry; Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry; associate research scholar in chemistry Denys Bondar; and graduate student Andre Campos. (Photo by C. Todd Reichart, Department of Chemistry)

Here’s a link to and a citation for the paper,

How to Make Distinct Dynamical Systems Appear Spectrally Identical by Andre G. Campos, Denys I. Bondar, Renan Cabrera, and Herschel A. Rabitz. Phys. Rev. Lett. 118, 083201 (Vol. 118, Iss. 8). DOI: https://doi.org/10.1103/PhysRevLett.118.083201 Published 24 February 2017

© 2017 American Physical Society

This paper is behind a paywall.

Brushing your way to nanofibres

The scientists are using what looks like a hairbrush to create nanofibres,

Figure 2: Brush-spinning of nanofibers. (Reprinted with permission by Wiley-VCH Verlag) [downloaded from http://www.nanowerk.com/spotlight/spotid=41398.php]

A Sept. 23, 2015 Nanowerk Spotlight article by Michael Berger provides an in-depth look at this technique (developed by a joint research team from the University of Georgia, Princeton University, and Oxford University), which could make producing nanofibers for use in scaffolds (tissue engineering and other applications) easier and cheaper,

Polymer nanofibers are used in a wide range of applications such as the design of new composite materials, the fabrication of nanostructured biomimetic scaffolds for artificial bones and organs, biosensors, fuel cells or water purification systems.

“The simplest method of nanofiber fabrication is direct drawing from a polymer solution using a glass micropipette,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This method however does not scale up and thus did not find practical applications. In our new work, we introduce a scalable method of nanofiber spinning named touch-spinning.”

James Cook in a Sept. 23, 2015 article for Materials Views provides a description of the technology,

A glass rod is glued to a rotating stage, whose diameter can be chosen over a wide range of a few centimeters to more than 1 m. A polymer solution is supplied, for example, from a needle of a syringe pump that faces the glass rod. The distance between the droplet of polymer solution and the tip of the glass rod is adjusted so that the glass rod contacts the polymer droplet as it rotates.

Following the initial “touch”, the polymer droplet forms a liquid bridge. As the stage rotates the bridge stretches and fiber length increases, with the diameter decreasing due to mass conservation. It was shown that the diameter of the fiber can be precisely controlled down to 40 nm by the speed of the stage rotation.

The method can be easily scaled-up by using a round hairbrush composed of 600 filaments.

When the rotating brush touches the surface of a polymer solution, the brush filaments draw many fibers simultaneously, producing hundreds of kilometers of fibers in minutes.

The drawn fibers are uniform since the fiber diameter depends on only two parameters: polymer concentration and speed of drawing.
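The mass-conservation argument behind the thinning fiber can be sketched numerically. This is an illustrative back-of-the-envelope calculation under assumed numbers, not figures from the paper: treating the liquid bridge as an incompressible cylinder, constant volume forces the diameter to shrink as the square root of the stretch.

```python
import math

def stretched_diameter(d0, L0, L):
    """Diameter of an incompressible liquid filament stretched from length L0 to L.
    Volume conservation: (pi/4) * d0**2 * L0 = (pi/4) * d**2 * L
    which rearranges to d = d0 * sqrt(L0 / L)."""
    return d0 * math.sqrt(L0 / L)

# Hypothetical numbers for illustration: a 100-micrometer-diameter bridge,
# initially 1 mm long, stretched to 6.25 km (a 6.25-million-fold stretch)
d = stretched_diameter(d0=100e-6, L0=1e-3, L=6.25e3)
print(d)  # about 4e-8 m, i.e. roughly the 40 nm scale reported
```

The square-root dependence is why modest increases in rotation speed (hence drawn length) give such fine control over the final diameter.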

Returning to Berger’s Spotlight article, there is an important benefit with this technique,

As the team points out, one important aspect of the method is the drawing of single filament fibers.

These single filament fibers can be easily wound onto spools of different shapes and dimensions so that well aligned one-directional, orthogonal or randomly oriented fiber meshes with a well-controlled average mesh size can be fabricated using this very simple method.

“Owing to the simplicity of the method, our set-up could be used in any biomedical lab and facility,” notes Tokarev. “For example, a customized scaffold by size, dimensions and other morphologic characteristics can be fabricated using donor biomaterials.”

Berger’s and Cook’s articles offer more illustrations and details.

Here’s a link to and a citation for the paper,

Touch- and Brush-Spinning of Nanofibers by Alexander Tokarev, Darya Asheghal, Ian M. Griffiths, Oleksandr Trotsenko, Alexey Gruzd, Xin Lin, Howard A. Stone, and Sergiy Minko. Advanced Materials. First published: 23 September 2015. DOI: 10.1002/adma.201502768

This paper is behind a paywall.

Magnetospinning with an inexpensive magnet

The fridge magnet mentioned in the headline for a May 11, 2015 Nanowerk Spotlight article by Michael Berger isn’t followed up until the penultimate paragraph, but it is worth the wait,

“Our method for spinning of continuous micro- and nanofibers uses a permanent revolving magnet,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This fabrication technique utilizes magnetic forces and hydrodynamic features of stretched threads to produce fine nanofibers.”

“The new method provides excellent control over the fiber diameter and is compatible with a range of polymeric materials and polymer composite materials including biopolymers,” notes Tokarev. “Our research showcases this new technique and demonstrates its advantages to the scientific community.”

Electrospinning is the most popular method to produce nanofibers in labs now. Owing to its simplicity and low costs, a magnetospinning set-up could be installed in any non-specialized laboratory for broader use of magnetospun nanofibers in different methods and technologies. The total cost of a laboratory electrospinning system is above $10,000. In contrast, no special equipment is needed for magnetospinning. It is possible to build a magnetospinning set-up, such as the University of Georgia team utilizes, by just using a $30 rotating motor and a $5 permanent magnet. [emphasis mine]

Berger’s article references a recent paper published by the team,

Magnetospinning of Nano- and Microfibers by Alexander Tokarev, Oleksandr Trotsenko, Ian M. Griffiths, Howard A. Stone, and Sergiy Minko. Advanced Materials. First published: 8 May 2015. DOI: 10.1002/adma.201500374

This paper is behind a paywall.

* The headline originally stated that a ‘fridge’ magnet was used. Researcher Alexander Tokarev kindly dropped by to correct this misunderstanding on my part, and the headline was changed to read ‘inexpensive magnet’ on May 14, 2015 at approximately 1400 hours PDT.

A 2nd European roadmap for graphene

About 2.5 years ago, Nature magazine published online (in Oct. 2012) an article titled “A roadmap for graphene” (behind a paywall). I see at least two of the 2012 authors, Konstantin (Kostya) Novoselov and Vladimir Fal’ko, are party to this second, more comprehensive roadmap featured in a Feb. 24, 2015 news item on Nanowerk,

In October 2013, academia and industry came together to form the Graphene Flagship. Now with 142 partners in 23 countries, and a growing number of associate members, the Graphene Flagship was established following a call from the European Commission to address big science and technology challenges of the day through long-term, multidisciplinary R&D efforts.

A Feb. 24, 2015 University of Cambridge news release, which originated the news item, describes the roadmap in more detail,

In an open-access paper published in the Royal Society of Chemistry journal Nanoscale, more than 60 academics and industrialists lay out a science and technology roadmap for graphene, related two-dimensional crystals, other 2D materials, and hybrid systems based on a combination of different 2D crystals and other nanomaterials. The roadmap covers the next ten years and beyond, and its objective is to guide the research community and industry toward the development of products based on graphene and related materials.

The roadmap highlights three broad areas of activity. The first task is to identify new layered materials, assess their potential, and develop reliable, reproducible and safe means of producing them on an industrial scale. Identification of new device concepts enabled by 2D materials is also called for, along with the development of component technologies. The ultimate goal is to integrate components and structures based on 2D materials into systems capable of providing new functionalities and application areas.

Eleven science and technology themes are identified in the roadmap. These are: fundamental science, health and environment, production, electronic devices, spintronics, photonics and optoelectronics, sensors, flexible electronics, energy conversion and storage, composite materials, and biomedical devices. The roadmap addresses each of these areas in turn, with timelines.

Research areas outlined in the roadmap correspond broadly with current flagship work packages, with the addition of a work package devoted to the growing area of biomedical applications, to be included in the next phase of the flagship. A recent independent assessment has confirmed that the Graphene Flagship is firmly on course, with hundreds of research papers, numerous patents and marketable products to its name.

Roadmap timelines predict that, before the end of the ten-year period of the flagship, products will be close to market in the areas of flexible electronics, composites, and energy, as well as advanced prototypes of silicon-integrated photonic devices, sensors, high-speed electronics, and biomedical devices.

“This publication concludes a four-year effort to collect and coordinate state-of-the-art science and technology of graphene and related materials,” says Andrea Ferrari, director of the Cambridge Graphene Centre, and chairman of the Executive Board of the Graphene Flagship. “We hope that this open-access roadmap will serve as the starting point for academia and industry in their efforts to take layered materials and composites from laboratory to market.” Ferrari led the roadmap effort with Italian Institute of Technology physicist Francesco Bonaccorso, who is a Royal Society Newton Fellow of the University of Cambridge, and a Fellow of Hughes Hall.

“We are very proud of the joint effort of the many authors who have produced this roadmap,” says Jari Kinaret, director of the Graphene Flagship. “The roadmap forms a solid foundation for the graphene community in Europe to plan its activities for the coming years. It is not a static document, but will evolve to reflect progress in the field, and new applications identified and pursued by industry.”

I have skimmed through the report briefly (wish I had more time) and have a couple of comments. First, there’s an excellent glossary of terms for anyone who might stumble over chemical abbreviations and/or more technical terminology. Second, they present a very interesting analysis of the intellectual property (patents) landscape (Note: Links have been removed. Incidental numbers are footnote references),

In the graphene area, there has been a particularly rapid increase in patent activity from around 2007.45 Much of this is driven by patent applications made by major corporations and universities in South Korea and USA.53 Additionally, a high level of graphene patent activity in China is also observed.54 These features have led some commentators to conclude that graphene innovations arising in Europe are being mainly exploited elsewhere.55 Nonetheless, an analysis of the Intellectual Property (IP) provides evidence that Europe already has a significant foothold in the graphene patent landscape and significant opportunities to secure future value. As the underlying graphene technology space develops, and the GRM [graphene and related materials] patent landscape matures, re-distribution of the patent landscape seems inevitable and Europe is well positioned to benefit from patent-based commercialisation of GRM research.

Overall, the graphene patent landscape is growing rapidly and already resembles that of sub-segments of the semiconductor and biotechnology industries,56 which experience high levels of patent activity. The patent strategies of the businesses active in such sub-sectors frequently include ‘portfolio maximization’56 and ‘portfolio optimization’56 strategies, and the sub-sectors experience the development of what commentators term ‘patent thickets’56, or multiple overlapping granted patent rights.56 A range of policies, regulatory and business strategies have been developed to limit such patent practices.57 In such circumstances, accurate patent landscaping may provide critical information to policy-makers, investors and individual industry participants, underpinning the development of sound policies, business strategies and research commercialisation plans.

It sounds like a patent thicket is developing (Note: Links have been removed. Incidental numbers are footnote references),

Fig. 13 provides evidence of a relative increase in graphene patent filings in South Korea from 2007 to 2009 compared to 2004–2006. This could indicate increased commercial interest in graphene technology from around 2007. The period 2010 to 2012 shows a marked relative increase in graphene patent filings in China. It should be noted that a general increase in Chinese patent filings across many S&T domains in this period is observed.76 Notwithstanding this general increase in Chinese patent activity, there does appear to be increased commercial interest in graphene in China. It is notable that the European Patent Office contribution as a percentage of all graphene patent filings globally falls from an 8% in the period 2007 to 2009 to 4% in the period 2010 to 2012.

The importance of the US, China and South Korea is emphasised by the top assignees, shown in Fig. 14. The corporation with most graphene patent applications is the Korean multinational Samsung, with over three times as many filings as its nearest rival. It has also patented an unrivalled range of graphene-technology applications, including synthesis procedures,77 transparent display devices,78 composite materials,79 transistors,80 batteries and solar cells.81 Samsung’s patent applications indicate a sustained and heavy investment in graphene R&D, as well as collaboration (co-assignment of patents) with a wide range of academic institutions.82,83


Fig. 14 Top 10 graphene patent assignees by number and cumulative over all time as of end-July 2014. Number of patents are indicated in the red histograms referred to the left Y axis, while the cumulative percentage is the blue line, referred to the right Y axis.

It is also interesting to note that patent filings by universities and research institutions make up a significant proportion (~50%) of total patent filings: the other half comprises contributions from small and medium-sized enterprises (SMEs) and multinationals.

Europe’s position is shown in Fig. 10, 12 and 14. While Europe makes a good showing in the geographical distribution of publications, it lags behind in patent applications, with only 7% of patent filings as compared to 30% in the US, 25% in China, and 13% in South Korea (Fig. 13) and only 9% of filings by academic institutions assigned in Europe (Fig. 15).


Fig. 15 Geographical breakdown of academic patent holders as of July 2014.

While Europe is trailing other regions in terms of number of patent filings, it nevertheless has a significant foothold in the patent landscape. Currently, the top European patent holder is Finland’s Nokia, primarily around incorporation of graphene into electrical devices, including resonators and electrodes.72,84,85

This may sound like Europe is trailing behind but that’s not the case according to the roadmap (Note: Links have been removed. Incidental numbers are footnote references),

European Universities also show promise in the graphene patent landscape. We also find evidence of corporate-academic collaborations in Europe, including e.g. co-assignments filed with European research institutions and Germany’s AMO GmbH,86 and chemical giant BASF.87,88 Finally, Europe sees significant patent filings from a number of international corporate and university players including Samsung,77 Vorbeck Materials,89 Princeton University,90–92 and Rice University,93–95 perhaps reflecting the quality of the European ST base around graphene, and its importance as a market for graphene technologies.

There are a number of features in the graphene patent landscape which may lead to a risk of patent thickets96 or ‘multiple overlapping granted patents’ existing around aspects of graphene technology systems. [emphasis mine] There is a relatively high volume of patent activity around graphene, which is an early stage technology space, with applications in patent intensive industry sectors. Often patents claim carbon nano structures other than graphene in graphene patent landscapes, illustrating difficulties around defining ‘graphene’ and mapping the graphene patent landscape. Additionally, the graphene patent nomenclature is not entirely settled. Different patent examiners might grant patents over the same components which the different experts and industry players call by different names.

For anyone new to this blog, I am not a big fan of current patent regimes as they seem to be stifling rather than encouraging innovation. Patents and copyright were originally developed to encourage creativity and innovation by allowing creators to profit from their ideas; sadly, over time a system designed to encourage innovation has devolved into one that does the opposite. (My Oct. 31, 2011 post, Patents as weapons and obstacles, details my take on this matter.) I’m not arguing against patents and copyright but suggesting that the system be fixed or replaced with something that delivers on the original intention.

Getting back to the matter at hand, here’s a link to and a citation for the 200 pp. 2015 European Graphene roadmap,

Science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems by Andrea C. Ferrari, Francesco Bonaccorso, Vladimir Fal’ko, Konstantin S. Novoselov, Stephan Roche, Peter Bøggild, Stefano Borini, Frank H. L. Koppens, Vincenzo Palermo, Nicola Pugno, José A. Garrido, Roman Sordan, Alberto Bianco, Laura Ballerini, Maurizio Prato, Elefterios Lidorikis, Jani Kivioja, Claudio Marinelli, Tapani Ryhänen, Alberto Morpurgo, Jonathan N. Coleman, Valeria Nicolosi, Luigi Colombo, Albert Fert, Mar Garcia-Hernandez, Adrian Bachtold, Grégory F. Schneider, Francisco Guinea, Cees Dekker, Matteo Barbone, Zhipei Sun, Costas Galiotis,  Alexander N. Grigorenko, Gerasimos Konstantatos, Andras Kis, Mikhail Katsnelson, Lieven Vandersypen, Annick Loiseau, Vittorio Morandi, Daniel Neumaier, Emanuele Treossi, Vittorio Pellegrini, Marco Polini, Alessandro Tredicucci, Gareth M. Williams, Byung Hee Hong, Jong-Hyun Ahn, Jong Min Kim, Herbert Zirath, Bart J. van Wees, Herre van der Zant, Luigi Occhipinti, Andrea Di Matteo, Ian A. Kinloch, Thomas Seyller, Etienne Quesnel, Xinliang Feng,  Ken Teo, Nalin Rupesinghe, Pertti Hakonen, Simon R. T. Neil, Quentin Tannock, Tomas Löfwander and Jari Kinaret. Nanoscale, 2015, Advance Article DOI: 10.1039/C4NR01600A First published online 22 Sep 2014

Here’s a diagram illustrating the roadmap process,

Fig. 122 The STRs [science and technology roadmaps] follow a hierarchical structure where the strategic level in a) is connected to the more detailed roadmap shown in b). These general roadmaps are the condensed form of the topical roadmaps presented in the previous sections, and give technological targets for key applications to become commercially competitive and the forecasts for when the targets are predicted to be met.
Courtesy: Researchers and the Royal Society’s journal, Nanoscale

The image here is not the best quality; the one embedded in the relevant Nanowerk news item is better.

As for the earlier roadmap, here’s my Oct. 11, 2012 post on the topic.

Projecting beams of light from contact lenses courtesy of Princeton University (US)

Princeton University’s 3D printed contact lenses with LEDs (light-emitting diodes) included are not meant for use by humans or other living beings, but they are a flashy demonstration. From a Dec. 10, 2014 news item on phys.org,

As part of a project demonstrating new 3-D printing techniques, Princeton researchers have embedded tiny light-emitting diodes into a standard contact lens, allowing the device to project beams of colored light.

Michael McAlpine, the lead researcher, cautioned that the lens is not designed for actual use—for one, it requires an external power supply. Instead, he said the team created the device to demonstrate the ability to “3-D print” electronics into complex shapes and materials.

“This shows that we can use 3-D printing to create complex electronics including semiconductors,” said McAlpine, an assistant professor of mechanical and aerospace engineering. “We were able to 3-D print an entire device, in this case an LED.”

A Dec. 9, 2014 Princeton University news release by John Sullivan, which originated the news item, describes the 3D lens, the objectives for this project, and an earlier project involving a ‘bionic ear’ in more detail (Note: Links have been removed),

The hard contact lens is made of plastic. The researchers used tiny crystals, called quantum dots, to create the LEDs that generated the colored light. Different size dots can be used to generate various colors.

“We used the quantum dots [also known as nanoparticles] as an ink,” McAlpine said. “We were able to generate two different colors, orange and green.”

The contact lens is also part of an ongoing effort to use 3-D printing to assemble diverse, and often hard-to-combine, materials into functioning devices. In the recent past, a team of Princeton professors including McAlpine created a bionic ear out of living cells with an embedded antenna that could receive radio signals.

Yong Lin Kong, a researcher on both projects, said the bionic ear presented a different type of challenge.

“The main focus of the bionic ear project was to demonstrate the merger of electronics and biological materials,” said Kong, a graduate student in mechanical and aerospace engineering.

Kong, the lead author of the Oct. 31 [2014] article describing the current work in the journal Nano Letters, said that the contact lens project, on the other hand, involved the printing of active electronics using diverse materials. The materials were often mechanically, chemically or thermally incompatible — for example, using heat to shape one material could inadvertently destroy another material in close proximity. The team had to find ways to handle these incompatibilities and also had to develop new methods to print electronics, rather than use the techniques commonly used in the electronics industry.

“For example, it is not trivial to pattern a thin and uniform coating of nanoparticles and polymers without the involvement of conventional microfabrication techniques, yet the thickness and uniformity of the printed films are two of the critical parameters that determine the performance and yield of the printed active device,” Kong said.

To solve these interdisciplinary challenges, the researchers collaborated with Ian Tamargo, who graduated this year with a bachelor’s degree in chemistry; Hyoungsoo Kim, a postdoctoral research associate and fluid dynamics expert in the mechanical and aerospace engineering department; and Barry Rand, an assistant professor of electrical engineering and the Andlinger Center for Energy and the Environment.

McAlpine said that one of 3-D printing’s greatest strengths is its ability to create electronics in complex forms. Unlike traditional electronics manufacturing, which builds circuits in flat assemblies and then stacks them into three dimensions, 3-D printers can create vertical structures as easily as horizontal ones.

“In this case, we had a cube of LEDs,” he said. “Some of the wiring was vertical and some was horizontal.”

To conduct the research, the team built a new type of 3-D printer that McAlpine described as “somewhere between off-the-shelf and really fancy.” Dan Steingart, an assistant professor of mechanical and aerospace engineering and the Andlinger Center, helped design and build the new printer, which McAlpine estimated cost in the neighborhood of $20,000.

McAlpine said that he does not envision 3-D printing replacing traditional manufacturing in electronics any time soon; instead, they are complementary technologies with very different strengths. Traditional manufacturing, which uses lithography to create electronic components, is a fast and efficient way to make multiple copies with a very high reliability. Manufacturers are using 3-D printing, which is slow but easy to change and customize, to create molds and patterns for rapid prototyping.

Prime uses for 3-D printing are situations that demand flexibility and that need to be tailored to a specific use. For example, conventional manufacturing techniques are not practical for medical devices that need to be fit to a patient’s particular shape or devices that require the blending of unusual materials in customized ways.

“Trying to print a cellphone is probably not the way to go,” McAlpine said. “It is customization that gives the power to 3-D printing.”

In this case, the researchers were able to custom 3-D print electronics on a contact lens by first scanning the lens, and feeding the geometric information back into the printer. This allowed for conformal 3-D printing of an LED on the contact lens.

Here’s what the contact lens looks like,

Michael McAlpine, an assistant professor of mechanical and aerospace engineering at Princeton, is leading a research team that uses 3-D printing to create complex electronics devices such as this light-emitting diode printed in a plastic contact lens. (Photos by Frank Wojciechowski)


Also, here’s a link to and a citation for the research paper,

3D Printed Quantum Dot Light-Emitting Diodes by Yong Lin Kong, Ian A. Tamargo, Hyoungsoo Kim, Blake N. Johnson, Maneesh K. Gupta, Tae-Wook Koh, Huai-An Chin, Daniel A. Steingart, Barry P. Rand, and Michael C. McAlpine. Nano Lett., 2014, 14 (12), pp 7017–7023 DOI: 10.1021/nl5033292 Publication Date (Web): October 31, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

I’m always a day behind on Dexter Johnson’s postings for the Nanoclast blog (located on the IEEE [Institute of Electrical and Electronics Engineers] website) so I didn’t see his Dec. 11, 2014 post about these 3D printed LED-embedded contact lenses until this morning (Dec. 12, 2014). In any event, I’m excerpting his very nice description of quantum dots,

The LED was made out of the somewhat exotic nanoparticles known as quantum dots. Quantum dots are a nanocrystal that have been fashioned out of semiconductor materials and possess distinct optoelectronic properties, most notably fluorescence, which makes them applicable in this case for the LEDs of the contact lens.

“We used the quantum dots [also known as nanoparticles] as an ink,” McAlpine said. “We were able to generate two different colors, orange and green.”

I encourage you to read Dexter’s post as he provides additional insights based on his long-standing membership within the nanotechnology community.
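Dexter’s point about size-dependent fluorescence can be made quantitative. As a back-of-the-envelope illustration (my own sketch, not from the paper or Dexter’s post), a quantum dot’s emission energy is roughly its bulk band gap plus a confinement term that grows as the dot shrinks; the material parameters below are rough literature values for CdSe and are assumptions for illustration only:

```python
import math

# Particle-in-a-box / Brus approximation for quantum-dot emission.
# Material parameters are rough CdSe literature values (assumptions).
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
E_CHARGE = 1.602e-19 # elementary charge, C (J per eV)
M_E = 9.109e-31      # electron rest mass, kg

def brus_emission_nm(radius_nm, e_gap_bulk_ev=1.74,
                     m_e_eff=0.13, m_h_eff=0.45):
    """Approximate emission wavelength (nm) of a quantum dot of the given
    radius, using the Brus confinement term (Coulomb term omitted)."""
    r = radius_nm * 1e-9
    confinement_j = (H**2 / (8 * r**2)) * (
        1 / (m_e_eff * M_E) + 1 / (m_h_eff * M_E))
    e_total_j = e_gap_bulk_ev * E_CHARGE + confinement_j
    return H * C / e_total_j * 1e9

# Smaller dots emit bluer (shorter-wavelength) light:
for radius in (1.5, 2.0, 3.0):
    print(f"radius {radius} nm -> ~{brus_emission_nm(radius):.0f} nm emission")
```

The trend is the point: shrinking the dot pushes the emission toward the blue, which is why “different size dots can be used to generate various colors.”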

Materials research and nanotechnology for clean energy at Addis Ababa University (Ethiopia)

Getting to the bottom line of a complex set of interlinked programs and initiatives, it’s safe to say that a group of US students went to study with researchers at Addis Ababa University (Ethiopia) at the first Materials Research School, held Dec. 9-21, 2012.

Rutgers University (New Jersey, US) student Aleksandra Biedron attended the Materials Research School as a member of a joint Rutgers University-Princeton University Nanotechnology for Clean Energy graduate training program (one of the US National Science Foundation’s Integrative Graduate Education Research Traineeship [IGERT] programs).

In a Summer 2013 (volume 14) issue of Rutgers University’s Chemistry and Chemical Biology News, Biedron describes the experience,

The program brought together approximately 50 graduate students and early-career materials researchers from across the United States and East Africa, as well as 15 internationally recognized instructors, for two weeks of lectures, problem solving, and cultural exchange. “I was interested in meeting young African scientists to discuss energy materials, a universal concern, which is relevant to my research in ionic liquids,” said Biedron, a graduate of Livingston High School [Berkeley Heights, New Jersey]. “I was also excited to see Addis Ababa, Ethiopia, and experience the culture and historical attractions.”

A cornerstone of the Nanotechnology for Clean Energy IGERT program is having the students apply their training in a dynamic educational exchange program with African institutions, promoting development of the students’ global awareness and understanding of the challenges involved in global scientific and economic development. In Addis Ababa, Biedron quickly noticed how different the scope of research was between the African scientists and their international counterparts.

“The African scientists’ research was really solution-based,” said Biedron. “They were looking at how they could use their natural resources to solve some of their region’s most pressing issues, not only for energy, but also health, clean water, and housing. You don’t really see that as much in the U.S. because we are already thinking about the future, 10 or 20 years from now.”

H/T centraljerseycentral.com, Aug. 1, 2013 news item.

I found a little more information about the first Materials Research School on this Columbia University JUAMI (Joint US-Africa Materials Initiative) webpage,

The Joint US-Africa Materials Initiative
Announces its first Materials Research School
To be held in Addis Ababa, Ethiopia, December 9-21, 2012

Theme of the school:

The first school will concentrate on materials research for sustainable energy. Tutorials and seminar topics will range from photocatalysis and photovoltaics to fuel cells and batteries.

Goals of the school:

The initiative aims to build materials science research and collaborations between the United States and Africa, with an initial focus on East Africa, and to develop ties between young materials researchers in both regions in a school taught by top materials researchers. The school will bring together approximately 50 PhD and early career materials researchers from across the US and East Africa, and 15 internationally recognized instructors, for two weeks of lectures, problem solving and cultural exchange in historic Addis Ababa, Ethiopia. Topics include photocatalysis, photovoltaics, thermoelectrics, fuel cells, and batteries.

I also found this on the IGERT homepage,

IGERT Trainees participate in:
  • Interdisciplinary courses in the fundamentals of energy technology, nanotechnology and energy policy.
  • Dissertation research emphasizing nanotechnology and energy.
  • Dynamic educational exchange between U.S. and select African institutions.

Unpredictable beauty at Princeton University

Princeton University recently held an ‘Art of Science’ exhibition, which has now been made available online. Here’s the piece I liked best of the ones I’ve seen so far,

People’s Second Place: Bridging the gap. Credit: Jason Wexler (graduate student) and Howard A. Stone (faculty)
Department of Mechanical and Aerospace Engineering
When drops of liquid are trapped in a thin gap between two solids, a strong negative pressure develops inside the drops. If the solids are flexible, this pressure deforms the solids to close the gap. In our experiment the solids are transparent, which allows us to image the drops from above. Alternating dark and light lines represent lines of constant gap height, much like the lines on a topological map. These lines are caused by light interference, which is the phenomenon responsible for the beautiful rainbow pattern in an oil slick. The blue areas denote the extent of the drops. Since the drops pull the gap closed, the areas of minimum gap height (i.e. maximum deformation) are inside the drops, at the center of the concentric rings.
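The “lines of constant gap height” in the caption can be read quantitatively: in a thin air gap at near-normal incidence, each successive fringe corresponds to a gap-height change of about half the illumination wavelength. A minimal sketch of that rule (my own arithmetic, assuming monochromatic light; it is not taken from the exhibit description):

```python
# Thin-film interference rule of thumb: adjacent fringes differ in gap
# height by half a wavelength (air gap, near-normal incidence assumed).

def gap_change_per_fringe_nm(wavelength_nm):
    """Gap-height change between adjacent fringes, in nm."""
    return wavelength_nm / 2.0

def fringes_between(h1_nm, h2_nm, wavelength_nm=550):
    """Approximate number of fringes crossed between two gap heights,
    defaulting to green illumination (~550 nm)."""
    return abs(h2_nm - h1_nm) / gap_change_per_fringe_nm(wavelength_nm)

# A gap that narrows by 2.75 micrometres shows about 10 fringes
# under 550 nm light:
print(fringes_between(0, 2750))  # 10.0
```

Counting rings toward a drop’s centre therefore gives a direct, topographic-map-style estimate of how much the flexible solid has deformed.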

There’s more about the real life and online exhibition in the May 16, 2013 Princeton University news release on EurekAlert,

The Princeton University Art of Science 2013 exhibit can now be viewed in a new online gallery. The exhibit consists of 43 images of artistic merit created during the course of scientific research.

The gallery features the top three awards in a juried competition as well as the top three “People’s Choice” images.

The physical Art of Science 2013 gallery opened May 10 with a reception attended by about 200 people in the Friend Center on the Princeton University campus. The works were chosen from 170 images submitted from 24 different departments across campus.

“Like art, science and engineering are deeply creative activities,” said Pablo Debenedetti, the recently appointed Dean for Research at Princeton who served as master of ceremonies at the opening reception. “Also like art, science and engineering at their very best are highly unpredictable in their outcomes. The Art of Science exhibit celebrates the beauty of unpredictability and the unpredictability of beauty.” [emphasis mine]

Adam Finkelstein, professor of computer science and one of the exhibit organizers, said that Art of Science spurs debate among artists about the nature of art while opening scientists to new ways of “seeing” their own research. “At the same time,” Finkelstein said, “this striking imagery serves as a democratic window through which non-experts can appreciate the thrill of scientific discovery.”

The top three entrants as chosen by a distinguished jury received cash prizes in amounts calculated by the golden ratio (whose proportions have since antiquity been considered to be aesthetically pleasing): first prize, $250; second prize, $154.51; and third prize, $95.49. [emphasis mine]

The physical exhibit is located in the Friend Center on the Princeton University campus in Princeton, N.J. The exhibit is free and open to the public, Monday through Friday, from 9 a.m. to 6 p.m.
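The golden-ratio prize amounts quoted in the news release are easy to verify; a few lines of arithmetic reproduce them exactly:

```python
import math

# Each prize is the previous one divided by the golden ratio phi.
PHI = (1 + math.sqrt(5)) / 2  # ~1.618

first = 250.00
second = round(first / PHI, 2)
third = round(second / PHI, 2)

print(second, third)  # matches the announced $154.51 and $95.49
```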

There are three pages of viewing delight at Princeton’s Art of Science 2013 online gallery. Have a lovely weekend picking your favourites.

More than human—a bionic ear that extends hearing beyond the usual frequencies

It’s now possible to print a bionic ear in 3D that can hear beyond the human range and all you need is off-the-shelf printing equipment—and technical expertise. A May 2, 2013 news item on Azonano provides more detail,

Scientists at Princeton University used off-the-shelf printing tools to create a functional ear that can “hear” radio frequencies far beyond the range of normal human capability.

“In general, there are mechanical and thermal challenges with interfacing electronic materials with biological materials,” said Michael McAlpine, an assistant professor of mechanical and aerospace engineering at Princeton and the lead researcher. “Previously, researchers have suggested some strategies to tailor the electronics so that this merger is less awkward. That typically happens between a 2D sheet of electronics and a surface of the tissue. However, our work suggests a new approach — to build and grow the biology up with the electronics synergistically and in a 3D interwoven format.”

McAlpine’s team has made several advances in recent years involving the use of small-scale medical sensors and antenna. Last year, a research effort led by McAlpine and Naveen Verma, an assistant professor of electrical engineering, and Fio Omenetto of Tufts University, resulted in the development of a “tattoo” made up of a biological sensor and antenna that can be affixed to the surface of a tooth.

The tooth tattoo is mentioned in my Nov. 9, 2012 posting; I focused more on Tufts University than Princeton in that piece. As for the ear (from the news item on Azonano),

The finished ear consists of a coiled antenna inside a cartilage structure. Two wires lead from the base of the ear and wind around a helical “cochlea” – the part of the ear that senses sound – which can connect to electrodes. Although McAlpine cautions that further work and extensive testing would need to be done before the technology could be used on a patient, he said the ear in principle could be used to restore or enhance human hearing. He said electrical signals produced by the ear could be connected to a patient’s nerve endings, similar to a hearing aid. The current system receives radio waves, but he said the research team plans to incorporate other materials, such as pressure-sensitive electronic sensors, to enable the ear to register acoustic sounds.

Here’s the technique the researchers used to create their bionic ear (from the news item),

Standard tissue engineering involves seeding types of cells, such as those that form ear cartilage, onto a scaffold of a polymer material called a hydrogel. However, the researchers said that this technique has problems replicating complicated three dimensional biological structures. Ear reconstruction “remains one of the most difficult problems in the field of plastic and reconstructive surgery,” they wrote.

To solve the problem, the team turned to a manufacturing approach called 3D printing. These printers use computer-assisted design to conceive of objects as arrays of thin slices. The printer then deposits layers of a variety of materials – ranging from plastic to cells – to build up a finished product. Proponents say additive manufacturing promises to revolutionize home industries by allowing small teams or individuals to create work that could previously only be done by factories.

Creating organs using 3D printers is a recent advance; several groups have reported using the technology for this purpose in the past few months. But this is the first time that researchers have demonstrated that 3D printing is a convenient strategy to interweave tissue with electronics.

The technique allowed the researchers to combine the antenna electronics with tissue within the highly complex topology of a human ear. The researchers used an ordinary 3D printer to combine a matrix of hydrogel and calf cells with silver nanoparticles that form an antenna. The calf cells later develop into cartilage.

Here’s an image of the ear,

Scientists used 3-D printing to merge tissue and an antenna capable of receiving radio signals. Credit: Photo by Frank Wojciechowski

For interested parties, here’s a link to and a citation for the published research,

A 3D Printed Bionic Ear by Manu S. Mannoor, Ziwen Jiang, Teena James, Yong Lin Kong, Karen A. Malatesta, Winston Soboyejo, Naveen Verma, David H. Gracias, and Michael C. McAlpine. Nano Lett., Just Accepted Manuscript DOI: 10.1021/nl4007744 Publication Date (Web): May 1, 2013

Copyright © 2013 American Chemical Society

This article is behind a paywall.

At this point, the ear is strictly for use in the laboratory; the researchers have not run any ‘in vivo’ experiments, which would be one of the next steps and a prerequisite before human clinical trials are considered.

I have written about human enhancement before, notably in my Aug. 30, 2011 posting where I highlighted this excerpt from an article by Paul Hochman,

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman’s Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You’ll Envy for Fast Company]

The Bailey quote stimulated this question for me: what would you choose if you could get an ear that hears beyond the human range?

Tooth tattoos at Tufts University

In spring 2012, there was a fluttering in the blogosphere about tooth tattoos with the potential for monitoring dental health. As sometimes happens, I put off posting about the work until it seemed everyone else had written about it (e.g., Dexter Johnson’s Mar. 30, 2012 posting for his Nanoclast blog on the IEEE website) and there was nothing left for me to say. Happily, the researchers at Tufts University (where part of this research [Princeton University is also involved] is being pursued) have released more information in a Nov. 1, 2012 news article by David Levin,

The sensor, dubbed a “tooth tattoo,” was developed by the Princeton nanoscientist Michael McAlpine and Tufts bioengineers Fiorenzo Omenetto, David Kaplan and Hu Tao. The team first published their research last spring in the journal Nature Communications.

The sensor is relatively simple in its construction, says McAlpine. It’s made up of just three layers: a sheet of thin gold foil electrodes, an atom-thick layer of graphite known as graphene and a layer of specially engineered peptides, chemical structures that “sense” bacteria by binding to parts of their cell membranes.

“We created a new type of peptide that can serve as an intermediary between bacteria and the sensor,” says McAlpine. “At one end is a molecule that can bond with the graphene, and at the other is a molecule that bonds with bacteria,” allowing the sensor to register the presence of bacteria, he says.

Because the layers of the device are so thin and fragile, they need to be mounted atop a tough but flexible backing in order to transfer them to a tooth. The ideal foundation, McAlpine says, turns out to be silk—a substance with which Kaplan and Omenetto have been working for years.

By manipulating the proteins that make up a single strand of silk, it’s possible to create silk structures in just about any shape, says Omenetto, a professor of biomedical engineering at Tufts. Since 2005, he’s created dozens of different structures out of silk, from optical lenses to orthopedic implants. Silk is “kind of like plastic, in that we can make [it] do almost anything,” he says. “We have a lot of control over the material. It can be rigid. It can be flexible. We can make it dissolve in water, stay solid, become a gel—whatever we need.”

Omenetto, Kaplan and Tao created a thin, water-soluble silk backing for McAlpine’s bacterial sensor—a film that’s strong enough to hold the sensor components in place, but soft and pliable enough to wrap easily around the irregular contours of a tooth.

To apply the sensor, McAlpine says, you need only to wet the surface of the entire assembly—silk, sensor and all—and then press it onto the tooth. Once there, the silk backing will dissolve within 15 or 20 minutes, leaving behind the sensor, a rectangle of interwoven gold and black electrodes about half the size of a postage stamp and about as thick as a sheet of paper. The advantage of being attached directly to a tooth means that the sensor is in direct contact with bacteria in the mouth—an ideal way to monitor oral health.

Because the sensor doesn’t carry any onboard batteries, it must be both read and powered simultaneously through a built-in antenna. Using a custom-made handheld device about the size of a TV remote, McAlpine’s team can “ping” that antenna with radio waves, causing it to resonate electronically and send back information that the device then uses to determine if bacteria are present.

The sensor (A), attached to a tooth (B) and activated by radio signals (C), binds with certain bacteria (D). Illustration: Manu Mannoor/Nature Communications (downloaded from http://now.tufts.edu/articles/tooth-tattoo)
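The batteryless reading scheme described above works much like a passive RFID tag: the handheld reader excites the sensor’s built-in antenna at its resonant frequency and listens for the response. Below is a minimal sketch of the underlying LC resonance relation; the inductance and capacitance values are illustrative assumptions, not figures from the paper:

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 1 microhenry coil with 10 picofarads of
# capacitance resonates around 50 MHz.
f = resonant_frequency_hz(1e-6, 10e-12)
print(f"{f / 1e6:.1f} MHz")
```

The reader’s radio “ping” sweeps across or sits at this frequency; when bacteria bind to the peptide layer they change the electrical properties of the sensor, which shifts the response the reader picks up.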

In addition to its potential for  monitoring dental health, the tooth tattoo could replace some of the more invasive health monitoring techniques (e.g., drawing blood), from the Tufts University article,

In addition to monitoring oral health, Kugel [Gerard Kugel, Tufts professor of prosthodontics and operative dentistry and associate dean for research at Tufts School of Dental Medicine] believes the tooth tattoo might be useful for monitoring a patient’s overall health. Biological markers for many diseases—from stomach ulcers to AIDS—appear in human saliva, he says. So if a sensor could be modified to react to those markers, it potentially could help dentists identify problems early on and refer patients to a physician before a condition becomes serious.

“The mouth is a window to the rest of the body,” Kugel says. “You can spot a lot of potential health problems through saliva, and it’s a much less invasive way to do diagnostic tests than drawing blood.”

Before monitoring of any type can take place, there is at least one major hurdle still to be overcome. Humans are quite sensitive to objects being placed in their mouths. According to one of the researchers, we can sense objects that are 50 to 60 microns wide, about the thickness of a piece of paper, and that may be too uncomfortable to bear.

H/T Nov. 9, 2012 news item on Nanowerk for pointing me towards the latest information about these tooth tattoos.