Tag Archives: Princeton University

Of musical parodies, Despacito, and evolution

What great timing: I just found out about a musical science parody featuring evolution and biology, and learned the latest news about the study of evolution on one of the islands in the Galapagos (where Charles Darwin made some of his observations). Thanks to Stacey Johnson, whose November 24, 2017 posting on the Signals blog featured Evo-Devo (Despacito Biology Parody), an A Capella Science music video from Tim Blais,

Now, for the latest regarding the Galapagos and evolution (from a November 24, 2017 news item on ScienceDaily),

The arrival 36 years ago of a strange bird to a remote island in the Galapagos archipelago has provided direct genetic evidence of a novel way in which new species arise.

In this week’s issue of the journal Science, researchers from Princeton University and Uppsala University in Sweden report that the newcomer belonging to one species mated with a member of another species resident on the island, giving rise to a new species that today consists of roughly 30 individuals.

The study comes from work conducted on Darwin’s finches, which live on the Galapagos Islands in the Pacific Ocean. The remote location has enabled researchers to study the evolution of biodiversity due to natural selection.

The direct observation of the origin of this new species occurred during field work carried out over the last four decades by B. Rosemary and Peter Grant, two scientists from Princeton, on the small island of Daphne Major.

A November 23, 2017 Princeton University news release on EurekAlert, which originated the news item, provides more detail,

“The novelty of this study is that we can follow the emergence of new species in the wild,” said B. Rosemary Grant, a senior research biologist, emeritus, and a senior biologist in the Department of Ecology and Evolutionary Biology. “Through our work on Daphne Major, we were able to observe the pairing up of two birds from different species and then follow what happened to see how speciation occurred.”

In 1981, a graduate student working with the Grants on Daphne Major noticed the newcomer, a male that sang an unusual song and was much larger in body and beak size than the three resident species of birds on the island.

“We didn’t see him fly in from over the sea, but we noticed him shortly after he arrived. He was so different from the other birds that we knew he did not hatch from an egg on Daphne Major,” said Peter Grant, the Class of 1877 Professor of Zoology, Emeritus, and a professor of ecology and evolutionary biology, emeritus.

The researchers took a blood sample and released the bird, which later bred with a resident medium ground finch of the species Geospiza fortis, initiating a new lineage. The Grants and their research team followed the new “Big Bird lineage” for six generations, taking blood samples for use in genetic analysis.

In the current study, researchers from Uppsala University analyzed DNA collected from the parent birds and their offspring over the years. The investigators discovered that the original male parent was a large cactus finch of the species Geospiza conirostris from Española island, which is more than 100 kilometers (about 62 miles) to the southeast in the archipelago.

The remarkable distance meant that the male finch was not able to return home to mate with a member of his own species and so chose a mate from among the three species already on Daphne Major. This reproductive isolation is considered a critical step in the development of a new species when two separate species interbreed.

The offspring were also reproductively isolated because their song, which is used to attract mates, was unusual and failed to attract females from the resident species. The offspring also differed from the resident species in beak size and shape, which is a major cue for mate choice. As a result, the offspring mated with members of their own lineage, strengthening the development of the new species.

Researchers previously assumed that the formation of a new species takes a very long time, but in the Big Bird lineage it happened in just two generations, according to observations made by the Grants in the field in combination with the genetic studies.

All 18 species of Darwin’s finches derived from a single ancestral species that colonized the Galápagos about one to two million years ago. The finches have since diversified into different species, and changes in beak shape and size have allowed different species to utilize different food sources on the Galápagos. A critical requirement for speciation to occur through hybridization of two distinct species is that the new lineage must be ecologically competitive — that is, good at competing for food and other resources with the other species — and this has been the case for the Big Bird lineage.

“It is very striking that when we compare the size and shape of the Big Bird beaks with the beak morphologies of the other three species inhabiting Daphne Major, the Big Birds occupy their own niche in the beak morphology space,” said Sangeet Lamichhaney, a postdoctoral fellow at Harvard University and the first author on the study. “Thus, the combination of gene variants contributed from the two interbreeding species in combination with natural selection led to the evolution of a beak morphology that was competitive and unique.”

The definition of a species has traditionally included the inability to produce fully fertile progeny from interbreeding species, as is the case for the horse and the donkey, for example. However, in recent years it has become clear that some closely related species, which normally avoid breeding with each other, do indeed produce offspring that can pass genes to subsequent generations. The authors of the study have previously reported that there has been a considerable amount of gene flow among species of Darwin’s finches over the last several thousands of years.

One of the most striking aspects of this study is that hybridization between two distinct species led to the development of a new lineage that after only two generations behaved as any other species of Darwin’s finches, explained Leif Andersson, a professor at Uppsala University who is also affiliated with the Swedish University of Agricultural Sciences and Texas A&M University. “A naturalist who came to Daphne Major without knowing that this lineage arose very recently would have recognized this lineage as one of the four species on the island. This clearly demonstrates the value of long-running field studies,” he said.

It is likely that new lineages like the Big Birds have originated many times during the evolution of Darwin’s finches, according to the authors. The majority of these lineages have gone extinct but some may have led to the evolution of contemporary species. “We have no indication about the long-term survival of the Big Bird lineage, but it has the potential to become a success, and it provides a beautiful example of one way in which speciation occurs,” said Andersson. “Charles Darwin would have been excited to read this paper.”

Here’s a link to and a citation for the paper,

Rapid hybrid speciation in Darwin’s finches by Sangeet Lamichhaney, Fan Han, Matthew T. Webster, Leif Andersson, B. Rosemary Grant, Peter R. Grant. Science 23 Nov 2017: eaao4593 DOI: 10.1126/science.aao4593

This paper is behind a paywall.

Happy weekend! And for those who love their Despacito, there’s this parody featuring three Italians in a small car (thanks again to Stacey Johnson’s blog posting),

A different type of ‘smart’ window with a new solar cell technology

I always like a ‘smart’ window story. Given my issues with summer (I don’t like the heat), anything that promises to help reduce the heat in my home at that time of year has my vote. Unfortunately, solutions don’t seem to have made a serious impact on the marketplace. Nonetheless, there’s always hope and perhaps this development at Princeton University will be the one to break through the impasse. From a June 30, 2017 news item on ScienceDaily,

Smart windows equipped with controllable glazing can augment lighting, cooling and heating systems by varying their tint, saving up to 40 percent in an average building’s energy costs.

These smart windows require power for operation, so they are relatively complicated to install in existing buildings. But by applying a new solar cell technology, researchers at Princeton University have developed a different type of smart window: a self-powered version that promises to be inexpensive and easy to apply to existing windows. This system features solar cells that selectively absorb near-ultraviolet (near-UV) light, so the new windows are completely self-powered.

A June 30, 2017 Princeton University news release, which originated the news item, expands on the theme,

“Sunlight is a mixture of electromagnetic radiation made up of near-UV rays, visible light, and infrared energy, or heat,” said Yueh-Lin (Lynn) Loo, director of the Andlinger Center for Energy and the Environment, and the Theodora D. ’78 and William H. Walton III ’74 Professor in Engineering. “We wanted the smart window to dynamically control the amount of natural light and heat that can come inside, saving on energy cost and making the space more comfortable.”

The smart window controls the transmission of visible light and infrared heat into the building, while the new type of solar cell uses near-UV light to power the system.

“This new technology is actually smart management of the entire spectrum of sunlight,” said Loo, who is a professor of chemical and biological engineering. Loo is one of the authors of a paper, published June 30, that describes this technology, which was developed in her lab.

Because near-UV light is invisible to the human eye, the researchers set out to harness it for the electrical energy needed to activate the tinting technology.

“Using near-UV light to power these windows means that the solar cells can be transparent and occupy the same footprint of the window without competing for the same spectral range or imposing aesthetic and design constraints,” Loo added. “Typical solar cells made of silicon are black because they absorb all visible light and some infrared heat – so those would be unsuitable for this application.”

In the paper published in Nature Energy, the researchers described how they used organic semiconductors – contorted hexabenzocoronene (cHBC) derivatives – for constructing the solar cells. The researchers chose the material because its chemical structure could be modified to absorb a narrow range of wavelengths – in this case, near-UV light. To construct the solar cell, the semiconductor molecules are deposited as thin films on glass with the same production methods used by organic light-emitting diode manufacturers. When the solar cell is operational, sunlight excites the cHBC semiconductors to produce electricity.

At the same time, the researchers constructed a smart window consisting of electrochromic polymers, which control the tint, and can be operated solely using power produced by the solar cell. When near-UV light from the sun generates an electrical charge in the solar cell, the charge triggers a reaction in the electrochromic window, causing it to change from clear to dark blue. When darkened, the window can block more than 80 percent of light.

Nicholas Davy, a doctoral student in the chemical and biological engineering department and the paper’s lead author, said other researchers have already developed transparent solar cells, but those target infrared energy. However, infrared energy carries heat, so using it to generate electricity can conflict with a smart window’s function of controlling the flow of heat in or out of a building. Transparent near-UV solar cells, on the other hand, don’t generate as much power as the infrared version, but don’t impede the transmission of infrared radiation, so they complement the smart window’s task.

Davy said that the Princeton team’s aim is to create a flexible version of the solar-powered smart window system that can be applied to existing windows via lamination.

“Someone in their house or apartment could take these wireless smart window laminates – which could have a sticky backing that is peeled off – and install them on the interior of their windows,” said Davy. “Then you could control the sunlight passing into your home using an app on your phone, thereby instantly improving energy efficiency, comfort, and privacy.”

Joseph Berry, senior research scientist at the National Renewable Energy Laboratory, who studies solar cells but was not involved in the research, said the research project is interesting because the device scales well and targets a specific part of the solar spectrum.

“Integrating the solar cells into the smart windows makes them more attractive for retrofits and you don’t have to deal with wiring power,” said Berry. “And the voltage performance is quite good. The voltage they have been able to produce can drive electronic devices directly, which is technologically quite interesting.”

Davy and Loo have started a new company, called Andluca Technologies, based on the technology described in the paper, and are already exploring other applications for the transparent solar cells. They explained that the near-UV solar cell technology can also power internet-of-things sensors and other low-power consumer products.

“It does not generate enough power for a car, but it can provide auxiliary power for smaller devices, for example, a fan to cool the car while it’s parked in the hot sun,” Loo said.

Here’s a link to and a citation for the paper,

Pairing of near-ultraviolet solar cells with electrochromic windows for smart management of the solar spectrum by Nicholas C. Davy, Melda Sezen-Edmonds, Jia Gao, Xin Lin, Amy Liu, Nan Yao, Antoine Kahn, & Yueh-Lin Loo. Nature Energy 2, Article number: 17104 (2017). doi:10.1038/nenergy.2017.104 Published online: 30 June 2017

This paper is behind a paywall.

Here’s what a sample of the special glass looks like,

Graduate student Nicholas Davy holds a sample of the special window glass. (Photos by David Kelly Crow)

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes.

A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”).

Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total.

Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics.

In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment using a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
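As a rough illustration of the co-occurrence statistics described above (not the actual GloVe code, which also applies distance weighting and matrix factorization), here is a minimal Python sketch that counts how often word pairs fall within a fixed window; the tiny corpus is invented for the example:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each unordered pair of words appears within
    `window` words of each other -- the raw statistic that GloVe-style
    embeddings are trained on."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for neighbor in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, neighbor)))] += 1
    return counts

# Invented toy corpus, just to show the mechanics.
corpus = "the nurse spoke to the doctor and the doctor thanked the nurse".split()
counts = cooccurrence_counts(corpus, window=4)
print(counts[("doctor", "nurse")])  # -> 2: the pair co-occurs twice within 4 words
```

Scaled up to billions of words, counts like these are what let the algorithm infer that two words are related without anyone labeling them.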

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
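The “target words versus attribute words” comparison can be sketched as a cosine-similarity association score, in the spirit of the paper’s word-embedding version of the IAT. The two-dimensional vectors below are invented stand-ins for real trained embeddings, so only the sign of the score is meaningful here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attrs_a, attrs_b, vec):
    """How much more strongly `word` associates with attribute set A
    than with attribute set B; positive values lean toward A."""
    mean_a = sum(cosine(vec[word], vec[a]) for a in attrs_a) / len(attrs_a)
    mean_b = sum(cosine(vec[word], vec[b]) for b in attrs_b) / len(attrs_b)
    return mean_a - mean_b

# Invented 2-d vectors standing in for trained word embeddings.
vec = {
    "flower": (0.9, 0.1), "insect": (0.1, 0.9),
    "pleasant": (1.0, 0.0), "unpleasant": (0.0, 1.0),
}
print(association("flower", ["pleasant"], ["unpleasant"], vec) > 0)  # -> True
print(association("insect", ["pleasant"], ["unpleasant"], vec) < 0)  # -> True
```

In the real study, the embeddings come from the 840-billion-word web corpus, and the same sign test surfaces the gender and race associations described below.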

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender, like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: foreign languages naively processed by machine learning programs can yield gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

August 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

October 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Making lead look like gold (so to speak)

Apparently you can make lead ‘look’ like gold if you can get it to reflect light in the same way. From a Feb. 28, 2017 news item on Nanowerk (Note: A link has been removed),

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Transmutation has been realized in modern times, but on a minute scale using a massive particle accelerator.

Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another. A computational theory published Feb. 24 [2017] in the journal Physical Review Letters (“How to Make Distinct Dynamical Systems Appear Spectrally Identical”) demonstrates that any two systems can be made to look alike, even if just for the smallest fraction of a second.

In this context, for two objects to “look” like each other, they need to reflect light in the same way. The Princeton researchers’ method involves using light to make non-permanent changes to a substance’s molecules so that they mimic the reflective properties of another substance’s molecules. This ability could have implications for optical computing, a type of computing in which electrons are replaced by photons that could greatly enhance processing power but has proven extremely difficult to engineer. It also could be applied to molecular detection and experiments in which expensive samples could be replaced by cheaper alternatives.

A Feb. 28, 2017 Princeton University news release (also on EurekAlert) by Tien Nguyen, which originated the news item, expands on the theme (Note: Links have been removed),

“It was a big shock for us that such a general statement as ‘any two objects can be made to look alike’ could be made,” said co-author Denys Bondar, an associate research scholar in the laboratory of co-author Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry.

The Princeton researchers posited that they could control the light that bounces off a molecule or any substance by controlling the light shone on it, which would allow them to alter how it looks. This type of manipulation requires a powerful light source such as an ultrafast laser and would last for only a femtosecond, or one quadrillionth of a second. Unlike normal light sources, this ultrafast laser pulse is strong enough to interact with molecules and distort their electron cloud while not actually changing their identity.

“The light emitted by a molecule depends on the shape of its electron cloud, which can be sculptured by modern lasers,” Bondar said. Using advanced computational theory, the research team developed a method called “spectral dynamic mimicry” that allowed them to calculate the laser pulse shape, including timing and wavelength, needed to produce any desired spectral output; in other words, to make any two systems look alike.

Conversely, this spectral control could also be used to make two systems look as different from one another as possible. This differentiation, the researchers suggested, could prove valuable for applications of molecular detections such as identifying toxic versus safe chemicals.

Shaul Mukamel, a chemistry professor at the University of California-Irvine, said that the Princeton research is a step forward in an important and active research field called coherent control, in which light can be manipulated to control behavior at the molecular level. Mukamel, who has collaborated with the Rabitz lab but was not involved in the current work, said that the Rabitz group has had a prominent role in this field for decades, advancing technology such as quantum computing and using light to drive artificial chemical reactivity.

“It’s a very general and nice application of coherent control,” Mukamel said. “It demonstrates that you can, by shaping the optical paths, bring the molecules to do things that you want beforehand — it could potentially be very significant.”

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another, even if just for the smallest fraction of a second. The researchers are, left to right, Renan Cabrera, an associate research scholar in chemistry; Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry; associate research scholar in chemistry Denys Bondar; and graduate student Andre Campos. (Photo by C. Todd Reichart, Department of Chemistry)

Here’s a link to and a citation for the paper,

How to Make Distinct Dynamical Systems Appear Spectrally Identical by Andre G. Campos, Denys I. Bondar, Renan Cabrera, and Herschel A. Rabitz. Phys. Rev. Lett. 118, 083201 (Vol. 118, Iss. 8). DOI: https://doi.org/10.1103/PhysRevLett.118.083201 Published 24 February 2017

© 2017 American Physical Society

This paper is behind a paywall.

Brushing your way to nanofibres

The scientists are using what looks like a hairbrush to create nanofibres,

Figure 2: Brush-spinning of nanofibers. (Reprinted with permission by Wiley-VCH Verlag) [downloaded from http://www.nanowerk.com/spotlight/spotid=41398.php]

A Sept. 23, 2015 Nanowerk Spotlight article by Michael Berger provides an in-depth look at this technique (developed by a joint research team of scientists from the University of Georgia, Princeton University, and Oxford University), which could make producing nanofibers for use in scaffolds (tissue engineering and other applications) easier and cheaper,

Polymer nanofibers are used in a wide range of applications such as the design of new composite materials, the fabrication of nanostructured biomimetic scaffolds for artificial bones and organs, biosensors, fuel cells or water purification systems.

“The simplest method of nanofiber fabrication is direct drawing from a polymer solution using a glass micropipette,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This method however does not scale up and thus did not find practical applications. In our new work, we introduce a scalable method of nanofiber spinning named touch-spinning.”

James Cook in a Sept. 23, 2015 article for Materials Views provides a description of the technology,

A glass rod is glued to a rotating stage, whose diameter can be chosen over a wide range of a few centimeters to more than 1 m. A polymer solution is supplied, for example, from a needle of a syringe pump that faces the glass rod. The distance between the droplet of polymer solution and the tip of the glass rod is adjusted so that the glass rod contacts the polymer droplet as it rotates.

Following the initial “touch”, the polymer droplet forms a liquid bridge. As the stage rotates the bridge stretches and fiber length increases, with the diameter decreasing due to mass conservation. It was shown that the diameter of the fiber can be precisely controlled down to 40 nm by the speed of the stage rotation.

The method can be easily scaled-up by using a round hairbrush composed of 600 filaments.

When the rotating brush touches the surface of a polymer solution, the brush filaments draw many fibers simultaneously producing hundred kilometers of fibers in minutes.

The drawn fibers are uniform since the fiber diameter depends on only two parameters: polymer concentration and speed of drawing.
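As an aside, the mass-conservation argument behind the shrinking diameter is easy to sanity-check with a little arithmetic. The sketch below assumes an illustrative liquid bridge (100 µm across, 1 mm long); those starting numbers are mine, not the paper's, and the model ignores solvent evaporation and solidification during drawing.

```python
import math

def stretched_diameter(d0_m, L0_m, L_m):
    """Diameter of a stretched liquid bridge, assuming volume conservation:
    (pi/4) * d0**2 * L0 = (pi/4) * d**2 * L  =>  d = d0 * sqrt(L0 / L)."""
    return d0_m * math.sqrt(L0_m / L_m)

# Illustrative (assumed) starting bridge: 100 micrometres across, 1 mm long.
d0 = 100e-6   # initial bridge diameter, m
L0 = 1e-3     # initial bridge length, m

for L in (0.01, 0.1, 1.0):  # drawn fiber length in metres
    d = stretched_diameter(d0, L0, L)
    print(f"drawn to {L:5.2f} m -> diameter {d * 1e9:8.1f} nm")

# Drawn length needed to hit the 40 nm diameter reported in the article:
target = 40e-9
L_needed = L0 * (d0 / target) ** 2
print(f"reaching {target * 1e9:.0f} nm requires drawing ~{L_needed / 1000:.2f} km")
```

Reassuringly, reaching 40 nm from these assumed starting dimensions takes kilometres of drawn fibre, which squares with the kilometres-per-minute throughput described for the 600-filament brush.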

Returning to Berger’s Spotlight article, there is an important benefit with this technique,

As the team points out, one important aspect of the method is the drawing of single filament fibers.

These single filament fibers can be easily wound onto spools of different shapes and dimensions so that well aligned one-directional, orthogonal or randomly oriented fiber meshes with a well-controlled average mesh size can be fabricated using this very simple method.

“Owing to simplicity of the method, our set-up could be used in any biomedical lab and facility,” notes Tokarev. “For example, a customized scaffold by size, dimensions and other morphologic characteristics can be fabricated using donor biomaterials.”

Berger’s and Cook’s articles offer more illustrations and details.

Here’s a link to and a citation for the paper,

Touch- and Brush-Spinning of Nanofibers by Alexander Tokarev, Darya Asheghal, Ian M. Griffiths, Oleksandr Trotsenko, Alexey Gruzd, Xin Lin, Howard A. Stone, and Sergiy Minko. Advanced Materials, first published: 23 September 2015. DOI: 10.1002/adma.201502768

This paper is behind a paywall.

Magnetospinning with an inexpensive magnet

The fridge magnet mentioned in the headline for a May 11, 2015 Nanowerk Spotlight article by Michael Berger isn’t followed up until the penultimate paragraph but it is worth the wait,

“Our method for spinning of continuous micro- and nanofibers uses a permanent revolving magnet,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This fabrication technique utilizes magnetic forces and hydrodynamic features of stretched threads to produce fine nanofibers.”

“The new method provides excellent control over the fiber diameter and is compatible with a range of polymeric materials and polymer composite materials including biopolymers,” notes Tokarev. “Our research showcases this new technique and demonstrates its advantages to the scientific community.”

Electrospinning is the most popular method to produce nanofibers in labs now. Owing to its simplicity and low costs, a magnetospinning set-up could be installed in any non-specialized laboratory for broader use of magnetospun nanofibers in different methods and technologies. The total cost of a laboratory electrospinning system is above $10,000. In contrast, no special equipment is needed for magnetospinning. It is possible to build a magnetospinning set-up, such as the University of Georgia team utilizes, by just using a $30 rotating motor and a $5 permanent magnet. [emphasis mine]

Berger’s article references a recent paper published by the team,

Magnetospinning of Nano- and Microfibers by Alexander Tokarev, Oleksandr Trotsenko, Ian M. Griffiths, Howard A. Stone, and Sergiy Minko. Advanced Materials, first published: 8 May 2015. DOI: 10.1002/adma.201500374

This paper is behind a paywall.

* The headline originally stated that a ‘fridge’ magnet was used. Researcher Alexander Tokarev kindly dropped by to correct this misunderstanding on my part and the headline was changed to read ‘inexpensive magnet’ on May 14, 2015 at approximately 1400 hours PDT.

A 2nd European roadmap for graphene

About 2.5 years ago, Nature magazine published online an article titled “A roadmap for graphene” (behind a paywall) in Oct. 2012. I see at least two of the 2012 authors, Konstantin (Kostya) Novoselov and Vladimir Fal’ko, are party to this second, more comprehensive roadmap featured in a Feb. 24, 2015 news item on Nanowerk,

In October 2013, academia and industry came together to form the Graphene Flagship. Now with 142 partners in 23 countries, and a growing number of associate members, the Graphene Flagship was established following a call from the European Commission to address big science and technology challenges of the day through long-term, multidisciplinary R&D efforts.

A Feb. 24, 2015 University of Cambridge news release, which originated the news item, describes the roadmap in more detail,

In an open-access paper published in the Royal Society of Chemistry journal Nanoscale, more than 60 academics and industrialists lay out a science and technology roadmap for graphene, related two-dimensional crystals, other 2D materials, and hybrid systems based on a combination of different 2D crystals and other nanomaterials. The roadmap covers the next ten years and beyond, and its objective is to guide the research community and industry toward the development of products based on graphene and related materials.

The roadmap highlights three broad areas of activity. The first task is to identify new layered materials, assess their potential, and develop reliable, reproducible and safe means of producing them on an industrial scale. Identification of new device concepts enabled by 2D materials is also called for, along with the development of component technologies. The ultimate goal is to integrate components and structures based on 2D materials into systems capable of providing new functionalities and application areas.

Eleven science and technology themes are identified in the roadmap. These are: fundamental science, health and environment, production, electronic devices, spintronics, photonics and optoelectronics, sensors, flexible electronics, energy conversion and storage, composite materials, and biomedical devices. The roadmap addresses each of these areas in turn, with timelines.

Research areas outlined in the roadmap correspond broadly with current flagship work packages, with the addition of a work package devoted to the growing area of biomedical applications, to be included in the next phase of the flagship. A recent independent assessment has confirmed that the Graphene Flagship is firmly on course, with hundreds of research papers, numerous patents and marketable products to its name.

Roadmap timelines predict that, before the end of the ten-year period of the flagship, products will be close to market in the areas of flexible electronics, composites, and energy, as well as advanced prototypes of silicon-integrated photonic devices, sensors, high-speed electronics, and biomedical devices.

“This publication concludes a four-year effort to collect and coordinate state-of-the-art science and technology of graphene and related materials,” says Andrea Ferrari, director of the Cambridge Graphene Centre, and chairman of the Executive Board of the Graphene Flagship. “We hope that this open-access roadmap will serve as the starting point for academia and industry in their efforts to take layered materials and composites from laboratory to market.” Ferrari led the roadmap effort with Italian Institute of Technology physicist Francesco Bonaccorso, who is a Royal Society Newton Fellow of the University of Cambridge, and a Fellow of Hughes Hall.

“We are very proud of the joint effort of the many authors who have produced this roadmap,” says Jari Kinaret, director of the Graphene Flagship. “The roadmap forms a solid foundation for the graphene community in Europe to plan its activities for the coming years. It is not a static document, but will evolve to reflect progress in the field, and new applications identified and pursued by industry.”

I have skimmed through the report briefly (wish I had more time) and have a couple of comments. First, there’s an excellent glossary of terms for anyone who might stumble over chemical abbreviations and/or more technical terminology. Second, they present a very interesting analysis of the intellectual property (patents) landscape (Note: Links have been removed. Incidental numbers are footnote references),

In the graphene area, there has been a particularly rapid increase in patent activity from around 2007.45 Much of this is driven by patent applications made by major corporations and universities in South Korea and USA.53 Additionally, a high level of graphene patent activity in China is also observed.54 These features have led some commentators to conclude that graphene innovations arising in Europe are being mainly exploited elsewhere.55 Nonetheless, an analysis of the Intellectual Property (IP) provides evidence that Europe already has a significant foothold in the graphene patent landscape and significant opportunities to secure future value. As the underlying graphene technology space develops, and the GRM [graphene and related materials] patent landscape matures, re-distribution of the patent landscape seems inevitable and Europe is well positioned to benefit from patent-based commercialisation of GRM research.

Overall, the graphene patent landscape is growing rapidly and already resembles that of sub-segments of the semiconductor and biotechnology industries,56 which experience high levels of patent activity. The patent strategies of the businesses active in such sub-sectors frequently include ‘portfolio maximization’56 and ‘portfolio optimization’56 strategies, and the sub-sectors experience the development of what commentators term ‘patent thickets’56, or multiple overlapping granted patent rights.56 A range of policies, regulatory and business strategies have been developed to limit such patent practices.57 In such circumstances, accurate patent landscaping may provide critical information to policy-makers, investors and individual industry participants, underpinning the development of sound policies, business strategies and research commercialisation plans.

It sounds like a patent thicket is developing (Note: Links have been removed. Incidental numbers are footnote references),

Fig. 13 provides evidence of a relative increase in graphene patent filings in South Korea from 2007 to 2009 compared to 2004–2006. This could indicate increased commercial interest in graphene technology from around 2007. The period 2010 to 2012 shows a marked relative increase in graphene patent filings in China. It should be noted that a general increase in Chinese patent filings across many ST domains in this period is observed.76 Notwithstanding this general increase in Chinese patent activity, there does appear to be increased commercial interest in graphene in China. It is notable that the European Patent Office contribution as a percentage of all graphene patent filings globally falls from 8% in the period 2007 to 2009 to 4% in the period 2010 to 2012.

The importance of the US, China and South Korea is emphasised by the top assignees, shown in Fig. 14. The corporation with most graphene patent applications is the Korean multinational Samsung, with over three times as many filings as its nearest rival. It has also patented an unrivalled range of graphene-technology applications, including synthesis procedures,77 transparent display devices,78 composite materials,79 transistors,80 batteries and solar cells.81 Samsung’s patent applications indicate a sustained and heavy investment in graphene R&D, as well as collaboration (co-assignment of patents) with a wide range of academic institutions.82,83

 

Fig. 14 Top 10 graphene patent assignees by number and cumulative over all time as of end-July 2014. The number of patents is indicated by the red histograms (left Y axis), while the cumulative percentage is shown by the blue line (right Y axis).

It is also interesting to note that patent filings by universities and research institutions make up a significant proportion (~50%) of total patent filings: the other half comprises contributions from small and medium-sized enterprises (SMEs) and multinationals.

Europe’s position is shown in Fig. 10, 12 and 14. While Europe makes a good showing in the geographical distribution of publications, it lags behind in patent applications, with only 7% of patent filings as compared to 30% in the US, 25% in China, and 13% in South Korea (Fig. 13) and only 9% of filings by academic institutions assigned in Europe (Fig. 15).

 

Fig. 15 Geographical breakdown of academic patent holders as of July 2014.

While Europe is trailing other regions in terms of number of patent filings, it nevertheless has a significant foothold in the patent landscape. Currently, the top European patent holder is Finland’s Nokia, primarily around incorporation of graphene into electrical devices, including resonators and electrodes.72,84,85

This may sound like Europe is trailing behind but that’s not the case according to the roadmap (Note: Links have been removed. Incidental numbers are footnote references),

European Universities also show promise in the graphene patent landscape. We also find evidence of corporate-academic collaborations in Europe, including e.g. co-assignments filed with European research institutions and Germany’s AMO GmbH,86 and chemical giant BASF.87,88 Finally, Europe sees significant patent filings from a number of international corporate and university players including Samsung,77 Vorbeck Materials,89 Princeton University,90–92 and Rice University,93–95 perhaps reflecting the quality of the European ST base around graphene, and its importance as a market for graphene technologies.

There are a number of features in the graphene patent landscape which may lead to a risk of patent thickets96 or ‘multiple overlapping granted patents’ existing around aspects of graphene technology systems. [emphasis mine] There is a relatively high volume of patent activity around graphene, which is an early stage technology space, with applications in patent intensive industry sectors. Often patents claim carbon nano structures other than graphene in graphene patent landscapes, illustrating difficulties around defining ‘graphene’ and mapping the graphene patent landscape. Additionally, the graphene patent nomenclature is not entirely settled. Different patent examiners might grant patents over the same components which the different experts and industry players call by different names.

For anyone new to this blog, I am not a big fan of current patent regimes as they seem to be stifling rather than encouraging innovation. Sadly, patents and copyright were originally developed to encourage creativity and innovation by allowing the creators to profit from their ideas. Over time a system designed to encourage innovation has devolved into one that does the opposite. (My Oct. 31, 2011 post titled Patents as weapons and obstacles details my take on this matter.) I’m not arguing against patents and copyright but suggesting that the system be fixed or replaced with something that delivers on the original intention.

Getting back to the matter at hand, here’s a link to and a citation for the 200 pp. 2015 European Graphene roadmap,

Science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems by Andrea C. Ferrari, Francesco Bonaccorso, Vladimir Fal’ko, Konstantin S. Novoselov, Stephan Roche, Peter Bøggild, Stefano Borini, Frank H. L. Koppens, Vincenzo Palermo, Nicola Pugno, José A. Garrido, Roman Sordan, Alberto Bianco, Laura Ballerini, Maurizio Prato, Elefterios Lidorikis, Jani Kivioja, Claudio Marinelli, Tapani Ryhänen, Alberto Morpurgo, Jonathan N. Coleman, Valeria Nicolosi, Luigi Colombo, Albert Fert, Mar Garcia-Hernandez, Adrian Bachtold, Grégory F. Schneider, Francisco Guinea, Cees Dekker, Matteo Barbone, Zhipei Sun, Costas Galiotis, Alexander N. Grigorenko, Gerasimos Konstantatos, Andras Kis, Mikhail Katsnelson, Lieven Vandersypen, Annick Loiseau, Vittorio Morandi, Daniel Neumaier, Emanuele Treossi, Vittorio Pellegrini, Marco Polini, Alessandro Tredicucci, Gareth M. Williams, Byung Hee Hong, Jong-Hyun Ahn, Jong Min Kim, Herbert Zirath, Bart J. van Wees, Herre van der Zant, Luigi Occhipinti, Andrea Di Matteo, Ian A. Kinloch, Thomas Seyller, Etienne Quesnel, Xinliang Feng, Ken Teo, Nalin Rupesinghe, Pertti Hakonen, Simon R. T. Neil, Quentin Tannock, Tomas Löfwander and Jari Kinaret. Nanoscale, 2015, Advance Article. First published online 22 Sep 2014. DOI: 10.1039/C4NR01600A

Here’s a diagram illustrating the roadmap process,

Fig. 122 The STRs [science and technology roadmaps] follow a hierarchical structure where the strategic level in a) is connected to the more detailed roadmap shown in b). These general roadmaps are the condensed form of the topical roadmaps presented in the previous sections, and give technological targets for key applications to become commercially competitive and the forecasts for when the targets are predicted to be met.
Courtesy: Researchers and the Royal Society’s journal, Nanoscale

The image here is not the best quality; the one embedded in the relevant Nanowerk news item is better.

As for the earlier roadmap, here’s my Oct. 11, 2012 post on the topic.

Projecting beams of light from contact lenses courtesy of Princeton University (US)

Princeton University’s 3D printed contact lenses with LEDs (light-emitting diodes) included are not meant for use by humans or other living beings but they are a flashy demonstration. From a Dec. 10, 2014 news item on phys.org,

As part of a project demonstrating new 3-D printing techniques, Princeton researchers have embedded tiny light-emitting diodes into a standard contact lens, allowing the device to project beams of colored light.

Michael McAlpine, the lead researcher, cautioned that the lens is not designed for actual use—for one, it requires an external power supply. Instead, he said the team created the device to demonstrate the ability to “3-D print” electronics into complex shapes and materials.

“This shows that we can use 3-D printing to create complex electronics including semiconductors,” said McAlpine, an assistant professor of mechanical and aerospace engineering. “We were able to 3-D print an entire device, in this case an LED.”

A Dec. 9, 2014 Princeton University news release by John Sullivan, which originated the news item, describes the 3D lens, the objectives for this project, and an earlier project involving a ‘bionic ear’ in more detail (Note: Links have been removed),

The hard contact lens is made of plastic. The researchers used tiny crystals, called quantum dots, to create the LEDs that generated the colored light. Different size dots can be used to generate various colors.

“We used the quantum dots [also known as nanoparticles] as an ink,” McAlpine said. “We were able to generate two different colors, orange and green.”

The contact lens is also part of an ongoing effort to use 3-D printing to assemble diverse, and often hard-to-combine, materials into functioning devices. In the recent past, a team of Princeton professors including McAlpine created a bionic ear out of living cells with an embedded antenna that could receive radio signals.

Yong Lin Kong, a researcher on both projects, said the bionic ear presented a different type of challenge.

“The main focus of the bionic ear project was to demonstrate the merger of electronics and biological materials,” said Kong, a graduate student in mechanical and aerospace engineering.

Kong, the lead author of the Oct. 31 [2014] article describing the current work in the journal Nano Letters, said that the contact lens project, on the other hand, involved the printing of active electronics using diverse materials. The materials were often mechanically, chemically or thermally incompatible — for example, using heat to shape one material could inadvertently destroy another material in close proximity. The team had to find ways to handle these incompatibilities and also had to develop new methods to print electronics, rather than use the techniques commonly used in the electronics industry.

“For example, it is not trivial to pattern a thin and uniform coating of nanoparticles and polymers without the involvement of conventional microfabrication techniques, yet the thickness and uniformity of the printed films are two of the critical parameters that determine the performance and yield of the printed active device,” Kong said.

To solve these interdisciplinary challenges, the researchers collaborated with Ian Tamargo, who graduated this year with a bachelor’s degree in chemistry; Hyoungsoo Kim, a postdoctoral research associate and fluid dynamics expert in the mechanical and aerospace engineering department; and Barry Rand, an assistant professor of electrical engineering and the Andlinger Center for Energy and the Environment.

McAlpine said that one of 3-D printing’s greatest strengths is its ability to create electronics in complex forms. Unlike traditional electronics manufacturing, which builds circuits in flat assemblies and then stacks them into three dimensions, 3-D printers can create vertical structures as easily as horizontal ones.

“In this case, we had a cube of LEDs,” he said. “Some of the wiring was vertical and some was horizontal.”

To conduct the research, the team built a new type of 3-D printer that McAlpine described as “somewhere between off-the-shelf and really fancy.” Dan Steingart, an assistant professor of mechanical and aerospace engineering and the Andlinger Center, helped design and build the new printer, which McAlpine estimated cost in the neighborhood of $20,000.

McAlpine said that he does not envision 3-D printing replacing traditional manufacturing in electronics any time soon; instead, they are complementary technologies with very different strengths. Traditional manufacturing, which uses lithography to create electronic components, is a fast and efficient way to make multiple copies with a very high reliability. Manufacturers are using 3-D printing, which is slow but easy to change and customize, to create molds and patterns for rapid prototyping.

Prime uses for 3-D printing are situations that demand flexibility and that need to be tailored to a specific use. For example, conventional manufacturing techniques are not practical for medical devices that need to be fit to a patient’s particular shape or devices that require the blending of unusual materials in customized ways.

“Trying to print a cellphone is probably not the way to go,” McAlpine said. “It is customization that gives the power to 3-D printing.”

In this case, the researchers were able to custom 3-D print electronics on a contact lens by first scanning the lens, and feeding the geometric information back into the printer. This allowed for conformal 3-D printing of an LED on the contact lens.

Here’s what the contact lens looks like,

Michael McAlpine, an assistant professor of mechanical and aerospace engineering at Princeton, is leading a research team that uses 3-D printing to create complex electronics devices such as this light-emitting diode printed in a plastic contact lens. (Photos by Frank Wojciechowski)

Michael McAlpine, an assistant professor of mechanical and aerospace engineering at Princeton, is leading a research team that uses 3-D printing to create complex electronics devices such as this light-emitting diode printed in a plastic contact lens. (Photos by Frank Wojciechowski)

Also, here’s a link to and a citation for the research paper,

3D Printed Quantum Dot Light-Emitting Diodes by Yong Lin Kong, Ian A. Tamargo, Hyoungsoo Kim, Blake N. Johnson, Maneesh K. Gupta, Tae-Wook Koh, Huai-An Chin, Daniel A. Steingart, Barry P. Rand, and Michael C. McAlpine. Nano Lett., 2014, 14 (12), pp 7017–7023 DOI: 10.1021/nl5033292 Publication Date (Web): October 31, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

I’m always a day behind on Dexter Johnson’s postings on the Nanoclast blog (located on the IEEE [Institute of Electrical and Electronics Engineers] website) so I didn’t see his Dec. 11, 2014 post about these 3D printed LED-embedded contact lenses until this morning (Dec. 12, 2014). In any event, I’m excerpting his very nice description of quantum dots,

The LED was made out of the somewhat exotic nanoparticles known as quantum dots. Quantum dots are a nanocrystal that have been fashioned out of semiconductor materials and possess distinct optoelectronic properties, most notably fluorescence, which makes them applicable in this case for the LEDs of the contact lens.

“We used the quantum dots [also known as nanoparticles] as an ink,” McAlpine said. “We were able to generate two different colors, orange and green.”

I encourage you to read Dexter’s post as he provides additional insights based on his long-standing membership within the nanotechnology community.
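For readers wondering why dot size sets the colour: in the simplest effective-mass, particle-in-a-box picture, quantum confinement adds an energy proportional to 1/R² on top of the bulk band gap, so smaller dots emit bluer light. The sketch below uses textbook CdSe parameters and ignores electron-hole Coulomb attraction; the material choice and radii are my own illustrative assumptions, not details from the Princeton paper, so treat the numbers as indicative only.

```python
import math

# Textbook CdSe parameters (assumed for illustration only).
E_GAP_EV = 1.74          # bulk band gap, eV
M_E, M_H = 0.13, 0.45    # electron/hole effective masses, in units of m0
HBAR2_OVER_2M0 = 0.0381  # hbar^2 / (2 m0), in eV * nm^2

def emission_wavelength_nm(radius_nm):
    """Crude particle-in-a-box estimate of quantum-dot emission wavelength.

    Confinement energy: (hbar^2 pi^2 / 2 R^2) * (1/m_e + 1/m_h);
    the electron-hole Coulomb term is neglected.
    """
    confinement = (HBAR2_OVER_2M0 * math.pi ** 2 / radius_nm ** 2) * (1 / M_E + 1 / M_H)
    energy_ev = E_GAP_EV + confinement
    return 1239.8 / energy_ev  # hc = 1239.8 eV*nm

# Smaller dots -> bluer emission: R ~ 2.5 nm comes out green-ish,
# R ~ 3.5 nm orange-ish, echoing the two colours quoted in the article.
for r in (2.5, 3.5, 5.0):  # dot radius in nm (assumed values)
    print(f"R = {r:.1f} nm -> ~{emission_wavelength_nm(r):.0f} nm emission")
```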

Materials research and nanotechnology for clean energy at Addis Ababa University (Ethiopia)

Getting to the bottom line of a complex set of interlinked programs and initiatives, it’s safe to say that a group of US students went to study with researchers at Addis Ababa University (Ethiopia) in the first Materials Research School, which was held Dec. 9-21, 2012.

Rutgers University (New Jersey, US) student Aleksandra Biedron attended the Materials Research School as a member of a joint Rutgers University-Princeton University Nanotechnology for Clean Energy graduate training program (one of the US National Science Foundation’s Integrative Graduate Education and Research Traineeship [IGERT] programs).

In a Summer 2013 (volume 14) issue of Rutgers University’s Chemistry and Chemical Biology News, Biedron describes the experience,

The program brought together approximately 50 graduate students and early-career materials researchers from across the United States and East Africa, as well as 15 internationally recognized instructors, for two weeks of lectures, problem solving, and cultural exchange. “I was interested in meeting young African scientists to discuss energy materials, a universal concern, which is relevant to my research in ionic liquids,” said Biedron, a graduate of Livingston High School [Berkeley Heights, New Jersey]. “I was also excited to see Addis Ababa, Ethiopia, and experience the culture and historical attractions.”

A cornerstone of the Nanotechnology for Clean Energy IGERT program is having the students apply their training in a dynamic educational exchange program with African institutions, promoting development of the students’ global awareness and understanding of the challenges involved in global scientific and economic development. In Addis Ababa, Biedron quickly noticed how different the scope of research was between the African scientists and their international counterparts.

“The African scientists’ research was really solution-based,” said Biedron. “They were looking at how they could use their natural resources to solve some of their region’s most pressing issues, not only for energy, but also health, clean water, and housing. You don’t really see that as much in the U.S. because we are already thinking about the future, 10 or 20 years from now.”

H/T centraljerseycentral.com, Aug. 1, 2013 news item.

I found a little more information about the first Materials Research School on this Columbia University JUAMI (Joint US-Africa Materials Initiative) webpage,

The Joint US-Africa Materials Initiative
Announces its first Materials Research School
To be held in Addis Ababa, Ethiopia, December 9-21, 2012

Theme of the school:

The first school will concentrate on materials research for sustainable energy. Tutorials and seminar topics will range from photocatalysis and photovoltaics to fuel cells and batteries.

Goals of the school:

The initiative aims to build materials science research and collaborations between the United States and Africa, with an initial focus on East Africa, and to develop ties between young materials researchers in both regions in a school taught by top materials researchers. The school will bring together approximately 50 PhD and early career materials researchers from across the US and East Africa, and 15 internationally recognized instructors, for two weeks of lectures, problem solving and cultural exchange in historic Addis Ababa, Ethiopia. Topics include photocatalysis, photovoltaics, thermoelectrics, fuel cells, and batteries.

I also found this on the IGERT homepage,

IGERT Trainees participate in:
  • Interdisciplinary courses in the fundamentals of energy technology, nanotechnology and energy policy.
  • Dissertation research emphasizing nanotechnology and energy.
  • Dynamic educational exchange between U.S. and select African institutions.

Unpredictable beauty at Princeton University

Princeton University recently held an ‘Art of Science’ exhibition, which has now been made available online; here’s the one I liked best of the images I’ve seen so far,

People’s Second Place: Bridging the gap. Credit: Jason Wexler (graduate student) and Howard A. Stone (faculty)
Department of Mechanical and Aerospace Engineering
When drops of liquid are trapped in a thin gap between two solids, a strong negative pressure develops inside the drops. If the solids are flexible, this pressure deforms the solids to close the gap. In our experiment the solids are transparent, which allows us to image the drops from above. Alternating dark and light lines represent lines of constant gap height, much like the lines on a topological map. These lines are caused by light interference, which is the phenomenon responsible for the beautiful rainbow pattern in an oil slick. The blue areas denote the extent of the drops. Since the drops pull the gap closed, the areas of minimum gap height (i.e. maximum deformation) are inside the drops, at the center of the concentric rings.
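The contour-map analogy in the caption can be made quantitative: adjacent fringes of the same kind in this sort of thin-film interference correspond to gap heights differing by λ/(2n), where n is the refractive index of whatever fills the gap. The wavelength and media below are generic assumptions of mine, not values from the experiment:

```python
def fringe_height_step_nm(wavelength_nm, refractive_index=1.0):
    """Gap-height difference between adjacent interference fringes.

    For reflection interference across a thin gap, successive fringes of the
    same kind occur every half-wavelength of optical path:
    delta_h = wavelength / (2 * n).
    """
    return wavelength_nm / (2.0 * refractive_index)

# Assumed illustrative values: green illumination, gap filled with air or water.
for medium, n in (("air", 1.00), ("water", 1.33)):
    step = fringe_height_step_nm(550, n)
    print(f"{medium:5s} gap, 550 nm light -> one fringe per {step:.0f} nm of height")
```

Counting N fringes from the edge of a drop to its centre would then put the total deformation at roughly N times this step, which is how such images double as height maps.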

There’s more about the real life and online exhibition in the May 16, 2013 Princeton University news release on EurekAlert,

The Princeton University Art of Science 2013 exhibit can now be viewed in a new online gallery. The exhibit consists of 43 images of artistic merit created during the course of scientific research:

http://www.princeton.edu/artofscience/gallery2013/

The gallery features the top three awards in a juried competition as well as the top three “People’s Choice” images.

The physical Art of Science 2013 gallery opened May 10 with a reception attended by about 200 people in the Friend Center on the Princeton University campus. The works were chosen from 170 images submitted from 24 different departments across campus.

“Like art, science and engineering are deeply creative activities,” said Pablo Debenedetti, the recently appointed Dean for Research at Princeton who served as master of ceremonies at the opening reception. “Also like art, science and engineering at their very best are highly unpredictable in their outcomes. The Art of Science exhibit celebrates the beauty of unpredictability and the unpredictability of beauty.” [emphasis mine]

Adam Finkelstein, professor of computer science and one of the exhibit organizers, said that Art of Science spurs debate among artists about the nature of art while opening scientists to new ways of “seeing” their own research. “At the same time,” Finkelstein said, “this striking imagery serves as a democratic window through which non-experts can appreciate the thrill of scientific discovery.”

The top three entrants as chosen by a distinguished jury received cash prizes in amounts calculated by the golden ratio (whose proportions have since antiquity been considered to be aesthetically pleasing): first prize, $250; second prize, $154.51; and third prize, $95.49. [emphasis mine]
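Those oddly specific amounts check out: each prize is the previous one divided by the golden ratio φ = (1 + √5)/2 ≈ 1.618, so the three prizes form a geometric sequence, and a tidy identity (1/φ + 1/φ² = 1) means the two runner-up prizes sum exactly to the first. A quick verification:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.6180

first = 250.0
second = first / phi
third = second / phi

print(f"phi    = {phi:.4f}")
print(f"second = ${second:.2f}")  # -> $154.51
print(f"third  = ${third:.2f}")   # -> $95.49

# Since phi**2 = phi + 1, we have 1/phi + 1/phi**2 = 1,
# so the two runner-up prizes add back up to the first prize.
print(f"sum of runners-up = ${second + third:.2f}")  # -> $250.00
```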

The physical exhibit is located in the Friend Center on the Princeton University campus in Princeton, N.J. The exhibit is free and open to the public, Monday through Friday, from 9 a.m. to 6 p.m.

There are three pages of viewing delight at Princeton’s Art of Science 2013 online gallery. Have a lovely weekend picking your favourites.