Tag Archives: Yale University

Curiosity may not kill the cat but, in science, it might be an antidote to partisanship

I haven’t stumbled across anything from the Cultural Cognition Project at Yale Law School in years, so before moving on to their latest news, here’s more about the project,

The Cultural Cognition Project is a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs. Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether global warming is a serious threat; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities. Project members are using the methods of various disciplines — including social psychology, anthropology, communications, and political science — to chart the impact of this phenomenon and to identify the mechanisms through which it operates. The Project also has an explicit normative objective: to identify processes of democratic decisionmaking by which society can resolve culturally grounded differences in belief in a manner that is both congenial to persons of diverse cultural outlooks and consistent with sound public policymaking.

It’s nice to catch up with some of the project’s latest work, from a Jan. 26, 2017 Yale University news release (also on EurekAlert),

Disputes over science-related policy issues such as climate change or fracking often seem as intractable as other politically charged debates. But in science, at least, simple curiosity might help bridge that partisan divide, according to new research.

In a study slated for publication in the journal Advances in Political Psychology, a Yale-led research team found that people who are curious about science are less polarized in their views on contentious issues than less-curious peers.

In an experiment, they found out why: Science-curious individuals are more willing to engage with surprising information that runs counter to their political predispositions.

“It’s a well-established finding that most people prefer to read or otherwise be exposed to information that fits rather than challenges their political preconceptions,” said research team leader Dan Kahan, Elizabeth K. Dollard Professor of Law and professor of psychology at Yale Law School. “This is called the echo-chamber effect.”

But science-curious individuals are more likely to venture out of that chamber, he said.

“When they are offered the choice to read news articles that support their views or challenge them on the basis of new evidence, science-curious individuals opt for the challenging information,” Kahan said. “For them, surprising pieces of evidence are bright shiny objects — they can’t help but grab at them.”

Kahan and other social scientists previously have shown that information based on scientific evidence can actually intensify — rather than moderate — political polarization on contentious topics such as gun control, climate change, fracking, or the safety of certain vaccines. The new study, which assessed science knowledge among subjects, reiterates the gaping divide separating how conservatives and liberals view science.

Republicans and Democrats with limited knowledge of science were equally likely to agree or disagree with the statement that “there is solid evidence that global warming is caused by human activity.” However, the most science-literate conservatives were much more likely to disagree with the statement than less-knowledgeable peers. The most knowledgeable liberals almost universally agreed with the statement.

“Whatever measure of critical reasoning we used, we always observed this depressing pattern: The members of the public most able to make sense of scientific evidence are in fact the most polarized,” Kahan said.

But knowledge of science, and curiosity about science, are not the same thing, the study shows.

The team became interested in curiosity because of its ongoing collaborative research project to improve public engagement with science documentaries, which involves the Cultural Cognition Project at Yale Law School, the Annenberg Public Policy Center of the University of Pennsylvania, and Tangled Bank Studios at the Howard Hughes Medical Institute.

They noticed that the curious — those who sought out science stories for personal pleasure — not only were more interested in viewing science films on a variety of topics but also did not display political polarization associated with contentious science issues.

The new study found, for instance, that a much higher percentage of curious liberals and conservatives chose to read stories that ran counter to their political beliefs than did their non-curious peers.

“As their science curiosity goes up, the polarizing effects of higher science comprehension dissipate, and people move in the same direction on contentious policies like climate change and fracking,” Kahan said.

It is unclear whether curiosity applied to other controversial issues can minimize the partisan rancor that infects other areas of society. But Kahan believes that the curious from both sides of the political and cultural divide should make good ambassadors to the more doctrinaire members of their own groups.

“Politically curious people are a resource who can promote enlightened self-government by sharing scientific information they are naturally inclined to learn and share,” he said.

Here’s my standard link to and citation for the paper,

Science Curiosity and Political Information Processing by Dan M. Kahan, Asheley R. Landrum, Katie Carpenter, Laura Helft, and Kathleen Hall Jamieson. Political Psychology, Volume 38, Issue Supplement S1, February 2017, Pages 179–199. DOI: 10.1111/pops.12396. First published: 26 January 2017.

This paper is open access; it can also be accessed here.

I last mentioned Kahan and The Cultural Cognition Project in an April 10, 2014 posting (scroll down about 45% of the way) about responsible science.

Communicating science effectively—a December 2016 book from the US National Academy of Sciences

I stumbled across this Dec. 13, 2016 essay/book announcement by Dr. Andrew Maynard and Dr. Dietram A. Scheufele on The Conversation,

Many scientists and science communicators have grappled with disregard for, or inappropriate use of, scientific evidence for years – especially around contentious issues like the causes of global warming, or the benefits of vaccinating children. A long-debunked study on links between vaccinations and autism, for instance, cost the researcher his medical license but continues to keep vaccination rates lower than they should be.

Only recently, however, have people begun to think systematically about what actually works to promote better public discourse and decision-making around what is sometimes controversial science. Of course scientists would like to rely on evidence, generated by research, to gain insights into how to most effectively convey to others what they know and do.

As it turns out, the science on how to best communicate science across different issues, social settings and audiences has not led to easy-to-follow, concrete recommendations.

About a year ago, the National Academies of Sciences, Engineering and Medicine brought together a diverse group of experts and practitioners to address this gap between research and practice. The goal was to apply scientific thinking to the process of how we go about communicating science effectively. Both of us were a part of this group (with Dietram as the vice chair).

The public draft of the group’s findings – “Communicating Science Effectively: A Research Agenda” – has just been published. In it, we take a hard look at what effective science communication means and why it’s important; what makes it so challenging – especially where the science is uncertain or contested; and how researchers and science communicators can increase our knowledge of what works, and under what conditions.

At some level, all science communication has embedded values. Information always comes wrapped in a complex skein of purpose and intent – even when presented as impartial scientific facts. Despite, or maybe because of, this complexity, there remains a need to develop a stronger empirical foundation for effective communication of and about science.

Addressing this, the National Academies draft report makes an extensive number of recommendations. A few in particular stand out:

  • Use a systems approach to guide science communication. In other words, recognize that science communication is part of a larger network of information and influences that affect what people and organizations think and do.
  • Assess the effectiveness of science communication. Yes, researchers try, but often we still engage in communication first and evaluate later. Better to design the best approach to communication based on empirical insights about both audiences and contexts. Very often, the technical risks that scientists think must be communicated have nothing to do with the hopes or concerns public audiences have.
  • Get better at meaningful engagement between scientists and others to enable that “honest, bidirectional dialogue” about the promises and pitfalls of science that our committee chair Alan Leshner and others have called for.
  • Consider social media’s impact – positive and negative.
  • Work toward better understanding when and how to communicate science around issues that are contentious, or potentially so.

The paper version of the book has a cost, but you can get a free online version. Unfortunately, I cannot copy and paste the book’s table of contents here and was not able to find a book index, although there is a handy list of reference texts.

I have taken a very quick look at the book. If you’re in the field, it’s definitely worth a look. It is, however, written for and by academics. If you look at the list of writers and reviewers, you will find over 90% are professors at one university or another. That said, I was happy to see Dan Kahan’s work at the Yale Law School’s Cultural Cognition Project cited. As it happens, they weren’t able to cite his latest work [***see my xxx, 2017 curiosity post***], released about a month after “Communicating Science Effectively: A Research Agenda.”

I was unable to find any reference to science communication via popular culture. I’m a little dismayed, as I feel that this is a seriously ignored source of information by science communication specialists and academicians but not by the folks at MIT (Massachusetts Institute of Technology), who announced a wireless app the same week it was featured in an episode of the US television comedy, The Big Bang Theory. Here’s more about MIT’s emotion-detection wireless app from a Feb. 1, 2017 news release (also on EurekAlert),

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.

Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”
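
That layering idea — raw, noisy signals entering at the bottom of the network, more abstract features injected further up — can be illustrated with a toy forward pass. The weights, feature names, and values here are all invented; this is a sketch of the organizing principle, not the team’s model.

```python
import math

def dense(x, weights, biases):
    """One fully connected layer with tanh activation (fixed toy weights)."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical low-level physiological features (e.g., accelerometer stats).
accel_features = [0.2, -0.1, 0.05]
# Hypothetical abstract feature (e.g., a text-sentiment estimate).
text_sentiment = [0.8]

# The bottom layer sees only the raw signals...
h1 = dense(accel_features,
           weights=[[0.5, -0.3, 0.1], [0.2, 0.4, -0.2]],
           biases=[0.0, 0.1])

# ...while the more abstract feature is injected at a deeper layer.
h2 = dense(h1 + text_sentiment, weights=[[0.6, -0.4, 0.9]], biases=[0.0])

score = h2[0]  # > 0 leans positive, < 0 leans negative (toy convention)
print(round(score, 3))
```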


Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

This research was made possible in part by the Samsung Strategy and Innovation Center.

Episode 14 of season 10 of The Big Bang Theory was titled “The Emotion Detection Automation” (full episode can be found on this webpage) and broadcast on Feb. 2, 2017. There’s also a Feb. 2, 2017 recap (recapitulation) by Lincee Ray for EW.com (it seems Ray is unaware that there really is such a machine),

Who knew we would see the day when Sheldon and Raj figured out solutions for their social ineptitudes? Only The Big Bang Theory writers would think to tackle our favorite physicists’ lack of social skills with an emotion detector and an ex-girlfriend focus group. It’s been a while since I enjoyed both storylines as much as I did in this episode. That’s no bazinga.

When Raj tells the guys that he is back on the market, he wonders out loud what is wrong with his game. Why do women reject him? Sheldon receives the information like a scientist and runs through many possible answers. Raj shuts him down with a simple, “I’m fine.”

Sheldon is irritated when he learns that this obligatory remark is a mask for what Raj is really feeling. It turns out, Raj is not fine. Sheldon whines, wondering why no one just says exactly what’s on their mind. It’s quite annoying for those who struggle with recognizing emotional cues.

Lo and behold, Bernadette recently read about a gizmo that was created for people who have this exact same anxiety. MIT has a prototype, and because Howard is an alum, he can probably submit Sheldon’s name as a beta tester.

Of course this is a real thing. If anyone can build an emotion detector, it’s a bunch of awkward scientists with zero social skills.

This is the first time I’ve noticed an academic institution’s news release appearing almost simultaneously with a mention of its research in a popular culture television program, which suggests things have come a long way since I featured news about a webinar by the National Academies’ Science and Entertainment Exchange for film and television productions collaborating with scientists in an Aug. 28, 2012 post.

One last science/popular culture moment: Hidden Figures, a movie about the African American women who were human computers supporting NASA (US National Aeronautics and Space Administration) efforts during the 1960s space race to get a man on the moon, was (shockingly) no. 1 at the US box office for a few weeks (there’s more about the movie here in my Sept. 2, 2016 post covering then upcoming movies featuring science). After the movie was released, Mary Elizabeth Williams wrote up a Jan. 23, 2017 interview with the ‘Hidden Figures’ scriptwriter for Salon.com,

I [Allison Schroeder] got on the phone with her [co-producer Renee Witt] and Donna [co-producer Donna Gigliotti] and I said, “You have to hire me for this; I was born to write this.” Donna sort of rolled her eyes and was like, “God, these Hollywood types would say anything.” I said, “No, no, I grew up at Cape Canaveral. My grandmother was a computer programmer at NASA, my grandfather worked on the Mercury prototype, and I interned there all through high school and then the summer after my freshman year at Stanford I interned. I worked at a missile launch company.”

She was like, “OK that’s impressive.” And I said, “No, I literally grew up climbing on the Mercury capsule — hitting all the buttons, trying to launch myself into space.”

She said, “Well do you think you can handle the math?” I said that I had to study a certain amount of math at Stanford for an economics degree. She said, “Oh, all right, that sounds pretty good.”

I pitched her a few scenes. I pitched her the end of the movie that you saw with Katherine running the numbers as John Glenn is trying to get up in space. I pitched her the idea of one of the women as a mechanic and to see her legs underneath the engine. You’re used to seeing a guy like that, but what would it be like to see heels and pantyhose and a skirt and she’s a mechanic and fixing something? Those are some of the scenes that I pitched them, and I got the job.

I love that the film begins with setting up their mechanical aptitude. You set up these are women; you set up these women of color. You set up exactly what that means in this moment in history. It’s like you just go from there.

I was on a really tight timeline because this started as an indie film. It was just Donna Gigliotti, Renee Witt, me and the author Margot Lee Shetterly for about a year working on it. I was only given four weeks for research and 12 weeks for writing the first draft. I’m not sure if I hadn’t known NASA and known the culture and just knew what the machines would look like, knew what the prototypes looked like, if I could have done it that quickly. I turned in that draft and Donna was like, “OK you’ve got the math and the science; it’s all here. Now go have fun.” Then I did a few more drafts and that was really enjoyable because I could let go of the fact I did it and make sure that the characters and the drive of the story and everything just fit what needed to happen.

For anyone interested in the science/popular culture connection, David Bruggeman of the Pasco Phronesis blog does a better job than I do of keeping up with the latest doings.

Getting back to ‘Communicating Science Effectively: A Research Agenda’, even without a mention of popular culture, it is a thoughtful book on the topic.

A Moebius strip of moving energy (vibrations)

This research extends a theorem which posits that waves will adapt to slowly changing conditions and return to their original vibration, showing that the waves can instead be manipulated into a new state. A July 25, 2016 news item on ScienceDaily makes the announcement,

Yale physicists have created something similar to a Moebius strip of moving energy between two vibrating objects, opening the door to novel forms of control over waves in acoustics, laser optics, and quantum mechanics.

The discovery also demonstrates that a century-old physics theorem offers much greater freedom than had long been believed. …

A July 25, 2016 Yale University news release (also on EurekAlert) by Jim Shelton, which originated the news item, expands on the theme,

Yale’s experiment is deceptively simple in concept. The researchers set up a pair of connected, vibrating springs and studied the acoustic waves that traveled between them as they manipulated the shape of the springs. Vibrations — as well as other types of energy waves — are able to move, or oscillate, at different frequencies. In this instance, the springs vibrate at frequencies that merge, similar to a Moebius strip that folds in on itself.

The precise spot where the vibrations merge is called an “exceptional point.”
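
As a minimal illustration of an exceptional point (my addition; the paper’s optomechanical system is more elaborate), consider two coupled modes with frequencies ω₁ and ω₂, coupling g, and loss γ in one mode:

```latex
H = \begin{pmatrix} \omega_1 - i\gamma & g \\ g & \omega_2 \end{pmatrix},
\qquad
\lambda_\pm = \frac{(\omega_1 - i\gamma) + \omega_2}{2}
  \pm \sqrt{g^{2} + \left(\frac{(\omega_1 - i\gamma) - \omega_2}{2}\right)^{2}}.
```

At an exceptional point the square root vanishes, so the two eigenvalues — and, unlike an ordinary degeneracy, the two eigenvectors — coalesce. Slowly steering the parameters around that point in a closed loop swaps λ₊ and λ₋, which is how a system can return to its starting conditions in a different state of vibration.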

“It’s like a guitar string,” said Jack Harris, a Yale associate professor of physics and applied physics, and the study’s principal investigator. “When you pluck it, it may vibrate in the horizontal plane or the vertical plane. As it vibrates, we turn the tuning peg in a way that reliably converts the horizontal motion into vertical motion, regardless of the details of how the peg is turned.”

Unlike a guitar, however, the experiment required an intricate laser system to precisely control the vibrations, and a cryogenic refrigeration chamber in which the vibrations could be isolated from any unwanted disturbance.

The Yale experiment is significant for two reasons, the researchers said. First, it suggests a very dependable way to control wave signals. Second, it demonstrates an important — and surprising — extension to a long-established theorem of physics, the adiabatic theorem.

The adiabatic theorem says that waves will readily adapt to changing conditions if those changes take place slowly. As a result, if the conditions are gradually returned to their initial configuration, any waves in the system should likewise return to their initial state of vibration. In the Yale experiment, this does not happen; in fact, the waves can be manipulated into a new state.

“This is a very robust and general way to control waves and vibrations that was predicted theoretically in the last decade, but which had never been demonstrated before,” Harris said. “We’ve only scratched the surface here.”

In the same edition of Nature, a team from the Vienna University of Technology also presented research on a system for wave control via exceptional points.

Here’s a link to and a citation for the paper,

Topological energy transfer in an optomechanical system with exceptional points by H. Xu, D. Mason, Luyao Jiang, & J. G. E. Harris. Nature (2016). DOI: 10.1038/nature18604. Published online 25 July 2016.

This paper is behind a paywall.

D-PLACE: an open access database of places, language, culture, and environment

In an attempt to be a bit more broad in my interpretation of the ‘society’ part of my commentary I’m including this July 8, 2016 news item on ScienceDaily (Note: A link has been removed),

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE — the Database of Places, Language, Culture and Environment — is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

A July 8, 2016 University of Toronto news release (also on EurekAlert), which originated the news item, expands on the theme,

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g., elevation, mean annual temperature), language family (e.g., Indo-European, Austronesian), or region (e.g., Siberia). The search results can be displayed on a map, a language tree or in a table, and can also be downloaded for further analysis.

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.
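
As a hedged sketch of what a downloaded-data workflow might look like — the rows and column names below are invented stand-ins, not D-PLACE’s actual schema:

```python
import csv
import io

# Invented rows standing in for a CSV export from d-place.org.
raw = """society,language_family,region,marriage_practice,elevation_m
SocA,Austronesian,Oceania,monogamy,120
SocB,Indo-European,Europe,monogamy,450
SocC,Austronesian,Oceania,polygyny,30
SocD,Uto-Aztecan,North America,polygyny,2100
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Cross a cultural practice with a language family -- the kind of query
# the site's search interface supports.
practices = {r["society"]: r["marriage_practice"]
             for r in rows if r["language_family"] == "Austronesian"}
print(practices)  # → {'SocA': 'monogamy', 'SocC': 'polygyny'}
```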

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from the Max Planck Institute for the Science of Human History in Jena, Germany, University of Auckland, Colorado State University, University of Toronto, University of Bristol, Yale, Human Relations Area Files, Washington University in St. Louis, University of Michigan, American Museum of Natural History, and City University of New York.

The diverse team included linguists, anthropologists, biogeographers, data scientists, ethnobiologists, and evolutionary ecologists, who employ a variety of research methods, including field-based primary data collection, compilation of cross-cultural data sources, and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”

Here’s a link to and a citation for the paper,

D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity by Kathryn R. Kirby, Russell D. Gray, Simon J. Greenhill, Fiona M. Jordan, Stephanie Gomes-Ng, Hans-Jörg Bibiko, Damián E. Blasi, Carlos A. Botero, Claire Bowern, Carol R. Ember, Dan Leehr, Bobbi S. Low, Joe McCarter, William Divale, and Michael C. Gavin. PLOS ONE 11(7): e0158391 (2016). DOI: 10.1371/journal.pone.0158391. Published July 8, 2016.

This paper is open access.

You can find D-PLACE here.

While it might not seem that there would be a close link between anthropology and physics, information gathered in the 19th and early 20th centuries can be mined for more contemporary applications. For example, someone who wants to make a case for a more diverse scientific community may want to develop a social science approach to the discussion. The situation in my June 16, 2016 post titled: Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons could be extended into a discussion and educational process using data from D-PLACE and other sources to make the point,

Science literacy may not be just for the public, it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission. While I think the arguments in the case will likely revolve around factors far removed from science and/or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas, as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisers on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher of how people’s background inform their practice of science, and that two different people may use the same scientific method, but think about the problem differently.

I’m guessing that both Scalia and Roberts, and possibly others, believe that science is the discovery and accumulation of facts. In this worldview, science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that most science is a collection of beliefs and may be influenced by personal beliefs. For example, we believe we’ve proved the existence of the Higgs boson, but no one associated with the research has ever stated unequivocally that it exists.

More generally, with D-PLACE and the recently announced Trans-Atlantic Platform (see my July 15, 2016 post about it), it seems Canada’s humanities and social sciences communities are taking strides toward greater international collaboration and a more profound investment in digital scholarship.

YBC 7289: a 3,800-year-old mathematical text and 3D printing at Yale University

1,300 years before Pythagoras came up with the theorem associated with his name, a school kid in Babylon formed a disc out of clay and scratched out the theorem while the surface was drying. According to an April 12, 2016 news item on phys.org, the Babylonians got to the theorem first (Note: A link has been removed),

Thirty-eight hundred years ago, on the hot river plains of what is now southern Iraq, a Babylonian student did a bit of schoolwork that changed our understanding of ancient mathematics. The student scooped up a palm-sized clump of wet clay, formed a disc about the size and shape of a hamburger, and let it dry down a bit in the sun. On the surface of the moist clay the student drew a diagram that showed the people of the Old Babylonian Period (1,900–1,700 B.C.E.) fully understood the principles of the “Pythagorean Theorem” 1300 years before Greek geometer Pythagoras was born, and were also capable of calculating the square root of two to six decimal places.

Today, thanks to the Internet and new digital scanning methods being employed at Yale, this ancient geometry lesson continues to be used in modern classrooms around the world.
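The square-root claim can be made concrete. The tablet’s value for √2 is conventionally read from its sexagesimal digits as 1;24,51,10, and the divide-and-average iteration (often called the Babylonian method) reaches that accuracy in a handful of steps. A quick Python sketch (the iteration count and starting guess are my own choices):

```python
# YBC 7289's value for the square root of 2, read from its base-60
# (sexagesimal) digits 1;24,51,10.
tablet_value = 1 + 24/60 + 51/60**2 + 10/60**3

# Divide-and-average iteration ("Babylonian method"): repeatedly replace
# a guess x with the mean of x and n/x; it converges very quickly.
def babylonian_sqrt(n, iterations=4, guess=1.0):
    x = guess
    for _ in range(iterations):
        x = (x + n / x) / 2
    return x

print(f"{tablet_value:.7f}")        # 1.4142130
print(f"{babylonian_sqrt(2):.7f}")  # 1.4142136
# The tablet's value differs from the true square root by under a millionth.
print(abs(tablet_value - 2 ** 0.5) < 1e-6)
```

That agreement to roughly six decimal places is part of what makes the tablet such a striking teaching object.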

Just when you think it’s all about the theorem, the story which originated in an April 11, 2016 Yale University news release by Patrick Lynch takes a turn,

“This geometry tablet is one of the most-reproduced cultural objects that Yale owns — it’s published in mathematics textbooks the world over,” says Professor Benjamin Foster, curator of the Babylonian Collection, which includes the tablet. It’s also a popular teaching tool in Yale classes. “At the Babylonian Collection we have a very active teaching and learning function, and we regard education as one of the core parts of our mission,” says Foster. “We have graduate and undergraduate groups in our collection classroom every week.”

The tablet, formally known as YBC 7289, “Old Babylonian Period Mathematical Text,” came to Yale in 1909 as part of a much larger collection of cuneiform tablets assembled by J. Pierpont Morgan and donated to Yale. In the ancient Mideast cuneiform writing was created by using a sharp stylus pressed into the surface of a soft clay tablet to produce wedge-like impressions representing pictographic words and numbers. Morgan’s donation of tablets and other artifacts formed the nucleus of the Yale Babylonian Collection, which now incorporates 45,000 items from the ancient Mesopotamian kingdoms.

Discovering the tablet’s mathematical significance

The importance of the geometry tablet was first recognized by science historians Otto Neugebauer and Abraham Sachs in their 1945 book “Mathematical Cuneiform Texts.”

“Ironically, mathematicians today are much more fascinated with the Babylonians’ ability to accurately calculate irrational numbers like the square root of two than they are with the geometry demonstrations,” notes associate Babylonian Collection curator Agnete Lassen.

“The Old Babylonian Period produced many tablets that show complex mathematics, but it also produced things you might not expect from a culture this old, such as grammars, dictionaries, and word lists,” says Lassen. “One of the two main languages spoken in early Babylonia was dying out, and people were careful to document and save what they could on cuneiform tablets. It’s ironic that almost 4,000 years ago people were thinking about cultural preservation, [emphasis mine] and actively preserving their learning for future generations.”

This business about ancient peoples trying to preserve culture and learning for future generations suggests that the efforts in Palmyra, Syria (my April 6, 2016 post about 3D printing parts of Palmyra) are born of an age-old impulse. And then the story takes another turn and becomes a 3D printing story (from the Yale University news release),

Today, however, the tablet is a fragile lump of clay that would not survive routine handling in a classroom. In looking for alternatives that might bring the highlights of the Babylonian Collection to a wider audience, the collection’s curators partnered with Yale’s Institute for the Preservation of Cultural Heritage (IPCH) to bring the objects into the digital world.

Scanning at the IPCH

The IPCH Digitization Lab’s first step was to do reflectance transformation imaging (RTI) on each of fourteen Babylonian Collection objects. RTI is a photographic technique that enables a student or researcher to look at a subject with many different lighting angles. That’s particularly important for something like a cuneiform tablet, where there are complex 3D marks incised into the surface. With RTI you can freely manipulate the lighting, and see subtle surface variations that no ordinary photograph would reveal.
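For readers unfamiliar with how RTI relighting works under the hood: one common storage format is the polynomial texture map (PTM), where each pixel keeps six coefficients of a biquadratic in the light direction, so brightness can be recomputed for any lighting angle on the fly. A toy evaluation in Python (the coefficient values are invented for illustration, not taken from the Yale scans):

```python
# Polynomial texture map (PTM) relighting, a common way RTI data is stored:
# each pixel holds six coefficients of a biquadratic in the light direction's
# surface components (lu, lv), so luminance can be recomputed for any angle.
def ptm_luminance(coeffs, lu, lv):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5

# Invented coefficients for a single pixel on an incised wedge mark.
pixel = (-0.4, -0.4, 0.1, 0.3, 0.2, 0.6)

# "Raking" the virtual light across the surface changes apparent brightness,
# which is what makes shallow cuneiform impressions pop out on screen.
for lu in (-0.8, 0.0, 0.8):
    print(f"lu={lu:+.1f}  luminance={ptm_luminance(pixel, lu, 0.0):.3f}")
```

The viewer fits those six numbers per pixel from the many photographs taken under different known light positions, then evaluates this polynomial interactively.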

Chelsea Graham of the IPCH Digitization Lab and her colleague Yang Ying Yang of the Yale Computer Graphics Group then did laser scanning of the tablet to create a three-dimensional geometric model that can be freely rotated onscreen. The resulting 3D models can be combined with many other types of digital imaging to give researchers and students a virtual tablet onscreen, and the same data can be used to create a 3D printed facsimile that can be freely used in the classroom without risk to the delicate original.

3D printing digital materials

While virtual models on the computer screen have proved to be a valuable teaching and research resource, even the most accurate 3D model on a computer screen doesn’t convey the tactile impact and physicality of the real object. Yale’s Center for Engineering Innovation and Design has collaborated with the IPCH on a number of cultural heritage projects, and the center’s assistant director, Joseph Zinter, has used its 3D printing expertise on a wide range of engineering, basic science, and cultural heritage projects.

“Whether it’s a sculpture, a rare skull, or a microscopic neuron or molecule highly magnified, you can pick up a 3D printed model and hold it, and it’s a very different and important way to understand the data. Holding something in your hand is a distinctive learning experience,” notes Zinter.

Sharing cultural heritage projects in the digital world

Once a cultural artifact has entered the digital world there are practical problems with how to share the information with students and scholars. IPCH postdoctoral fellows Goze Akoglu and Eleni Kotoula are working with Yale computer science faculty member Holly Rushmeier to create an integrated collaborative software platform to support the research and sharing of cultural heritage artifacts like the Babylonian tablet.

“Right now cultural heritage professionals must juggle many kinds of software, running several types of specialized 2D and 3D media viewers as well as conventional word processing and graphics programs. Our vision is to create a single virtual environment that accommodates many kinds of media, as well as supporting communication and annotation within the project,” says Kotoula.

The wide sharing and disseminating of cultural artifacts is one advantage of digitizing objects, notes professor Rushmeier, “but the key thing about digital is the power to study large virtual collections. It’s not about scanning and modeling the individual object. When the scanned object becomes part of a large collection of digital data, then machine learning and search analysis tools can be run over the collection, allowing scholars to ask questions and make comparisons that aren’t possible by other means,” says Rushmeier.

Reflecting on the process that brings state-of-the-art digital tools to one of humanity’s oldest forms of writing, Graham said, “It strikes me that this tablet has made a very long journey from classroom to classroom. People sometimes think the digital or 3D-printed models are just a novelty, or just for exhibitions, but you can engage and interact much more with the 3D printed object, or 3D model on the screen. I think the creators of this tablet would have appreciated the efforts to bring this fragile object back to the classroom.”

There is also a video highlighting the work,

Split some water molecules and save solar and wind (energy) for a future day

Professor Ted Sargent’s research team at the University of Toronto has developed a new technique for saving the energy harvested by sun and wind farms, according to a March 28, 2016 news item on Nanotechnology Now,

We can’t control when the wind blows and when the sun shines, so finding efficient ways to store energy from alternative sources remains an urgent research problem. Now, a group of researchers led by Professor Ted Sargent at the University of Toronto’s Faculty of Applied Science & Engineering may have a solution inspired by nature.

The team has designed the most efficient catalyst for storing energy in chemical form, by splitting water into hydrogen and oxygen, just like plants do during photosynthesis. Oxygen is released harmlessly into the atmosphere, and hydrogen, as H2, can be converted back into energy using hydrogen fuel cells.

Discovering a better way of storing energy from solar and wind farms is “one of the grand challenges in this field,” Ted Sargent says. (Photo above by Megan Rosenbloom via flickr. Courtesy: University of Toronto)

A March 24, 2016 University of Toronto news release by Marit Mitchell, which originated the news item, expands on the theme,

“Today on a solar farm or a wind farm, storage is typically provided with batteries. But batteries are expensive, and can typically only store a fixed amount of energy,” says Sargent. “That’s why discovering a more efficient and highly scalable means of storing energy generated by renewables is one of the grand challenges in this field.”

You may have seen the popular high-school science demonstration where the teacher splits water into its component elements, hydrogen and oxygen, by running electricity through it. Today this requires so much electrical input that it’s impractical to store energy this way — too great a proportion of the energy generated is lost in the process of storing it.

This new catalyst facilitates the oxygen-evolution portion of the chemical reaction, making the conversion from H2O into O2 and H2 more energy-efficient than ever before. The new catalyst material is intrinsically over three times more efficient than the best state-of-the-art catalyst.
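As background on how such gains are usually quantified: splitting water requires a thermodynamic minimum of 1.23 volts, and any extra voltage a cell needs (the overpotential) is lost as heat, so a lower-overpotential oxygen-evolution catalyst stores more of the input electricity in the hydrogen. A rough voltage-efficiency sketch in Python (the example overpotentials are illustrative, not figures from the paper):

```python
# Water electrolysis needs at least 1.23 V thermodynamically; real cells
# need extra voltage (overpotential), which is wasted as heat. A lower
# overpotential means more of the electrical energy ends up stored as H2.
THERMODYNAMIC_VOLTAGE = 1.23  # volts, standard conditions

def voltage_efficiency(overpotential):
    """Fraction of the applied voltage doing useful chemistry."""
    applied = THERMODYNAMIC_VOLTAGE + overpotential
    return THERMODYNAMIC_VOLTAGE / applied

# Illustrative overpotentials (volts), not the study's measured values.
for eta in (0.45, 0.30):
    print(f"overpotential {eta} V -> {voltage_efficiency(eta):.0%} efficient")
```

Shaving even a tenth of a volt off the oxygen-evolution step therefore translates directly into less electricity wasted per unit of hydrogen stored.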

Details are offered in the news release,

The new catalyst is made of abundant and low-cost metals tungsten, iron and cobalt, which are much less expensive than state-of-the-art catalysts based on precious metals. It showed no signs of degradation over more than 500 hours of continuous activity, unlike other efficient but short-lived catalysts. …

“With the aid of theoretical predictions, we became convinced that including tungsten could lead to a better oxygen-evolving catalyst. Unfortunately, prior work did not show how to mix tungsten homogeneously with the active metals such as iron and cobalt,” says one of the study’s lead authors, Dr. Bo Zhang … .

“We invented a new way to distribute the catalyst homogenously in a gel, and as a result built a device that works incredibly efficiently and robustly.”

This research united engineers, chemists, materials scientists, mathematicians, physicists, and computer scientists across three countries. A chief partner in this joint theoretical-experimental study was a leading team of theorists at Stanford University and SLAC National Accelerator Laboratory under the leadership of Dr. Aleksandra Vojvodic. The international collaboration included researchers at East China University of Science & Technology, Tianjin University, Brookhaven National Laboratory, Canadian Light Source and the Beijing Synchrotron Radiation Facility.

“The team developed a new materials synthesis strategy to mix multiple metals homogeneously — thereby overcoming the propensity of multi-metal mixtures to separate into distinct phases,” said Jeffrey C. Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at Massachusetts Institute of Technology. “This work impressively highlights the power of tightly coupled computational materials science with advanced experimental techniques, and sets a high bar for such a combined approach. It opens new avenues to speed progress in efficient materials for energy conversion and storage.”

“This work demonstrates the utility of using theory to guide the development of improved water-oxidation catalysts for further advances in the field of solar fuels,” said Gary Brudvig, a professor in the Department of Chemistry at Yale University and director of the Yale Energy Sciences Institute.

“The intensive research by the Sargent group in the University of Toronto led to the discovery of oxy-hydroxide materials that exhibit electrochemically induced oxygen evolution at the lowest overpotential and show no degradation,” said University Professor Gabor A. Somorjai of the University of California, Berkeley, a leader in this field. “The authors should be complimented on the combined experimental and theoretical studies that led to this very important finding.”

Here’s a link to and a citation for the paper,

Homogeneously dispersed, multimetal oxygen-evolving catalysts by Bo Zhang, Xueli Zheng, Oleksandr Voznyy, Riccardo Comin, Michal Bajdich, Max García-Melchor, Lili Han, Jixian Xu, Min Liu, Lirong Zheng, F. Pelayo García de Arquer, Cao Thang Dinh, Fengjia Fan, Mingjian Yuan, Emre Yassitepe, Ning Chen, Tom Regier, Pengfei Liu, Yuhang Li, Phil De Luna, Alyf Janmohamed, Huolin L. Xin, Huagui Yang, Aleksandra Vojvodic, Edward H. Sargent. Science  24 Mar 2016: DOI: 10.1126/science.aaf1525

This paper is behind a paywall.

Finding a way to prevent sunscreens from penetrating the skin

While nanosunscreens have been singled out for their possible impact on our health, the fact is that many sunscreens contain potentially dangerous ingredients that penetrate the skin. A Dec. 14, 2015 news item on ScienceDaily describes some research into getting sunscreens to stay on the skin’s surface rather than penetrating it,

A new sunscreen has been developed that encapsulates the UV-blocking compounds inside bio-adhesive nanoparticles, which adhere to the skin well, but do not penetrate beyond the skin’s surface. These properties resulted in highly effective UV protection in a mouse model, without the adverse effects observed with commercial sunscreens, including penetration into the bloodstream and generation of reactive oxygen species, which can damage DNA and lead to cancer.

A US National Institute of Biomedical Imaging and Bioengineering (NIBIB) Dec. 14, 2015 news release, which originated the news item, expands on the theme (Note: Links have been removed),

Commercial sunscreens use compounds that effectively filter out damaging UV light. However, there is concern that these agents have a variety of harmful effects due to penetration past the surface skin. For example, these products have been found in human breast tissue and urine and are known to disrupt the normal function of some hormones. Also, the exposure of the UV filters to light can produce toxic reactive oxygen species that are destructive to cells and tissues and can cause tumors through DNA damage.

“This work applies a novel bioengineering idea to a little known but significant health problem,” adds Jessica Tucker, Ph.D., Director of the NIBIB Program in Delivery Systems and Devices for Drugs and Biologics. “While we are all familiar with the benefits of sunscreen, the potential toxicities from sunscreen due to penetration into the body and creation of DNA-damaging agents are not well known. Bioengineering sunscreen to inhibit penetration and keep any DNA-damaging compounds isolated in the nanoparticle and away from the skin is a great example of how a sophisticated technology can be used to solve a problem affecting the health of millions of people.”

Bioengineers and dermatologists at Yale University in New Haven, Connecticut combined their expertise in nanoparticle-based drug delivery and the molecular and cellular characteristics of the skin to address these potential health hazards of current commercial sunscreens.

The news release then goes on to provide some technical details,

The group encapsulated a commonly used sunscreen, padimate O (PO), inside a nanoparticle (a very small molecule often used to transport drugs and other agents into the body). PO is related to the better-known sunscreen PABA.

The bioadhesive nanoparticle containing the sunscreen PO was tested on pigs for penetration into the skin. A control group of pigs received the PO alone, not encapsulated in a nanoparticle. The PO penetrated beyond the surface layers of skin where it could potentially enter the bloodstream through blood vessels that are in the deeper skin layers. However, the PO inside the nanoparticle remained on the surface of the skin and did not penetrate into deeper layers.

Because the bioadhesive nanoparticles, or BNPs, are larger than skin pores, it was somewhat expected that they could not enter the body by that route. However, skin is full of hair follicles that are larger than BNPs and so could be a way for migration into the body. Surprisingly, BNPs did not pass through the hair follicle openings either. Tests indicated that the adhesive properties of the BNPs caused them to stick to the skin surface, unable to move through the hair follicles.

Further testing showed that the BNPs were water resistant and remained on the skin for a day or more, yet were easily removed by towel wiping. They also disappeared in several days through natural exfoliation of the surface skin.

BNPs enhance the effect of sunscreen

An important test was whether the BNP-encapsulated sunscreen retained its UV filtering properties. The researchers used a mouse model to test whether PO blocked sunburn when encapsulated in the BNPs. The BNP formulation successfully provided the same amount of UV protection as the commercial products applied directly to the skin of the hairless mouse model. Surprisingly, this was achieved even though the BNPs carried only a fraction (5%) of the amount of commercial sunblock applied to the mice.

Finally, the encapsulated sunscreen was tested for the formation of damaging oxygen-carrying molecules known as reactive oxygen species (ROS) when exposed to UV light. The researchers hypothesized that any ROS created by the sunscreen’s interaction with UV would stay contained inside the BNP, unable to damage surrounding tissue. Following exposure to UV light, no damaging ROS were detected outside of the nanoparticle, indicating that any harmful agents that were formed remained inside of the nanoparticle, unable to make contact with the skin.

“We are extremely pleased with the properties and performance of our BNP formulation,” says senior author Mark Saltzman, Ph.D., Yale School of Engineering and Applied Science. “The sunscreen loaded BNPs combine the best properties of an effective sunscreen with a safety profile that alleviates the potential toxicities of the actual sunscreen product because it is encapsulated and literally never touches the skin.” Adds co-senior author Michael Girardi, M.D., “Our nanoparticles performed as expected, however, these are preclinical findings. We are now in a position to assess the effects on human skin.”

So, all of this work has been done on animal models, which means that human clinical trials are the likely next step. As we wait, here’s a link to and a citation for this group’s paper,

A sunblock based on bioadhesive nanoparticles by Yang Deng, Asiri Ediriwickrema, Fan Yang, Julia Lewis, Michael Girardi, & W. Mark Saltzman. Nature Materials 14, 1278–1285 (2015) doi:10.1038/nmat4422 Published online 28 September 2015

This paper is behind a paywall.

Safer sunblock and bioadhesive nanoparticles from Yale University

The skin has a lot of protective barriers, but it’s always possible to make something better, so a sunblock that doesn’t penetrate the skin at all seems like it might be a good thing. Interestingly, this new sunblock or sunscreen is enabled by nanoparticles, but not the metallic nanoparticles found in what are sometimes called nanosunscreens. From a Sept. 29, 2015 news item on Nanowerk,

Researchers at Yale have developed a sunscreen that doesn’t penetrate the skin, eliminating serious health concerns associated with commercial sunscreens.

Most commercial sunblocks are good at preventing sunburn, but they can go below the skin’s surface and enter the bloodstream. As a result, they pose possible hormonal side effects and could even be promoting the kind of skin cancers they’re designed to prevent.

But researchers at Yale have developed a new sunblock, made with bioadhesive nanoparticles, that stays on the surface of the skin.

A Sept. 28, 2015 Yale University news release by William Weir, which originated the news item, describes the research in more detail,

“We found that when we apply the sunblock to the skin, it doesn’t come off, and more importantly, it doesn’t penetrate any further into the skin,” said the paper’s senior author, Mark Saltzman, the Goizueta Foundation Professor of Biomedical Engineering. “Nanoparticles are large enough to keep from going through the skin’s surface, and our nanoparticles are so adhesive that they don’t even go into hair follicles, which are relatively open.”

Using mouse models, the researchers tested their sunblock against direct ultraviolet rays and their ability to cause sunburn. In this regard, even though it used a significantly smaller amount of the active ingredient than commercial sunscreens, the researchers’ formulation protected equally well against sunburn.

They also looked at an indirect — and much less studied — effect of UV light. When the active ingredients of sunscreen absorb UV light, a chemical change triggers the generation of oxygen-carrying molecules known as reactive oxygen species (ROS). If a sunscreen’s agents penetrate the skin, this chemical change could cause cellular damage, and potentially facilitate skin cancer.

“Commercial chemical sunblock is protective against the direct hazards of ultraviolet damage of DNA, but might not be against the indirect ones,” said co-author Michael Girardi, a professor of dermatology at Yale Medical School. “In fact, the indirect damage was worse when we used the commercial sunblock.”

Girardi, who specializes in skin cancer development and progression, said little research has been done on the ultimate effects of sunblock usage and the generation of ROS, “but obviously, there’s concern there.”

Previous studies have found traces of commercial sunscreen chemicals in users’ bloodstreams, urine, and breast milk. There is evidence that these chemicals cause disruptions with the endocrine system, such as blocking sex hormone receptors.

To test penetration levels, the researchers applied strips of adhesive tape to skin previously treated with sunscreen. The tape was then removed rapidly, along with a thin layer of skin. Repeating this procedure allowed the researchers to remove the majority of the outer skin layer, and measure how deep the chemicals had penetrated into the skin. Traces of the sunscreen chemical administered in a conventional way were found to have soaked deep within the skin. The Yale team’s sunblock came off entirely with the initial tape strips.

Tests also showed that a substantial amount of the Yale team’s sunscreen remained on the skin’s surface for days, even after exposure to water. When wiped repeatedly with a towel, the new sunblock was entirely removed. [emphasis mine]

To make the sunblock, the researchers developed a nanoparticle with a surface coating rich in aldehyde groups, which stick tenaciously to the outer skin layer. The nanoparticle’s hydrophilic layer essentially locks in the active ingredient, a hydrophobic chemical called padimate O.

Some sunscreen solutions that use larger particles of inorganic compounds, such as titanium dioxide or zinc oxide, also don’t penetrate the skin. For aesthetic reasons, though, these opaque sunscreen products aren’t very popular. By using a nanoparticle to encase padimate O, an organic chemical used in many commercial sunscreens, the Yale team’s sunblock is both transparent and stays out of the skin cells and bloodstream.

This seems a little confusing to me and I think clarification may be helpful. My understanding is that the metallic nanoparticles (nano titanium dioxide and nano zinc oxide) engineered for use in commercial sunscreens are also (in addition to the macroscale titanium dioxide and zinc oxide referred to in the Yale news release) too large to pass through the skin. At least that was the understanding in 2010 and I haven’t stumbled across any information that is contradictory. Here’s an excerpt from a July 20, 2010 posting where I featured portions of a debate between Georgia Miller (at that time representing Friends of the Earth) and Dr. Andrew Maynard (at that time director of the University of Michigan Risk Science Center and a longtime participant in the nanotechnology risk discussions),

Three of the scientists whose work was cited by FoE as proof that nanosunscreens are dangerous either posted directly or asked Andrew to post comments which clarified the situation with exquisite care,

Despite FoE’s implications that nanoparticles in sunscreens might cause cancer because they are photoactive, Peter Dobson points out that there are nanomaterials used in sunscreens that are designed not to be photoactive. Brian Gulson, whose work on zinc skin penetration was cited by FoE, points out that his studies only show conclusively that zinc atoms or ions can pass through the skin, not that nanoparticles can pass through. He also notes that the amount of zinc penetration from zinc-based sunscreens is very much lower than the level of zinc people have in their body in the first place. Tilman Butz, who led one of the largest projects on nanoparticle penetration through skin to date, points out that – based on current understanding – the nanoparticles used in sunscreens are too large to penetrate through the skin.

However, there may be other ingredients which do pass through into the bloodstream and are concerning.

One other thing I’d like to note: the sunscreen’s staying power (“a substantial amount of the Yale team’s sunscreen remained on the skin’s surface for days, even after exposure to water”) may prove to be a problem, as we need vitamin D, which is for the most part obtainable by sun exposure.

In any event, here’s a link to and a citation for the paper,

A sunblock based on bioadhesive nanoparticles by Yang Deng, Asiri Ediriwickrema, Fan Yang, Julia Lewis, Michael Girardi, & W. Mark Saltzman. Nature Materials (2015) doi:10.1038/nmat4422 Published online 28 September 2015

This paper is behind a paywall.


Replace silicon with black phosphorus instead of graphene?

I have two black phosphorus pieces. This first piece of research comes out of ‘La belle province’ or, as it’s more usually called, Québec (Canada).

Foundational research on phosphorene

There’s a lot of interest in replacing silicon for a number of reasons and, increasingly, there’s interest in finding an alternative to graphene.

A July 7, 2015 news item on Nanotechnology Now describes a new material for use as transistors,

As scientists continue to hunt for a material that will make it possible to pack more transistors on a chip, new research from McGill University and Université de Montréal adds to evidence that black phosphorus could emerge as a strong candidate.

In a study published today in Nature Communications, the researchers report that when electrons move in a phosphorus transistor, they do so only in two dimensions. The finding suggests that black phosphorus could help engineers surmount one of the big challenges for future electronics: designing energy-efficient transistors.

A July 7, 2015 McGill University news release on EurekAlert, which originated the news item, describes the field of 2D materials and the research into black phosphorus and its 2D version, phosphorene (analogous to graphite and graphene),

“Transistors work more efficiently when they are thin, with electrons moving in only two dimensions,” says Thomas Szkopek, an associate professor in McGill’s Department of Electrical and Computer Engineering and senior author of the new study. “Nothing gets thinner than a single layer of atoms.”

In 2004, physicists at the University of Manchester in the U.K. first isolated and explored the remarkable properties of graphene — a one-atom-thick layer of carbon. Since then scientists have rushed to investigate a range of other two-dimensional materials. One of those is black phosphorus, a form of phosphorus that is similar to graphite and can be separated easily into single atomic layers, known as phosphorene.

Phosphorene has sparked growing interest because it overcomes many of the challenges of using graphene in electronics. Unlike graphene, which acts like a metal, black phosphorus is a natural semiconductor: it can be readily switched on and off.

“To lower the operating voltage of transistors, and thereby reduce the heat they generate, we have to get closer and closer to designing the transistor at the atomic level,” Szkopek says. “The toolbox of the future for transistor designers will require a variety of atomic-layered materials: an ideal semiconductor, an ideal metal, and an ideal dielectric. All three components must be optimized for a well designed transistor. Black phosphorus fills the semiconducting-material role.”

The work resulted from a multidisciplinary collaboration among Szkopek’s nanoelectronics research group, the nanoscience lab of McGill Physics Prof. Guillaume Gervais, and the nanostructures research group of Prof. Richard Martel in Université de Montréal’s Department of Chemistry.

To examine how the electrons move in a phosphorus transistor, the researchers observed them under the influence of a magnetic field in experiments performed at the National High Magnetic Field Laboratory in Tallahassee, FL, the largest and highest-powered magnet laboratory in the world. This research “provides important insights into the fundamental physics that dictate the behavior of black phosphorus,” says Tim Murphy, DC Field Facility Director at the Florida facility.

“What’s surprising in these results is that the electrons are able to be pulled into a sheet of charge which is two-dimensional, even though they occupy a volume that is several atomic layers in thickness,” Szkopek says. That finding is significant because it could potentially facilitate manufacturing the material — though at this point “no one knows how to manufacture this material on a large scale.”

“There is a great emerging interest around the world in black phosphorus,” Szkopek says. “We are still a long way from seeing atomic layer transistors in a commercial product, but we have now moved one step closer.”

Here’s a link to and a citation for the paper,

Two-dimensional magnetotransport in a black phosphorus naked quantum well by V. Tayari, N. Hemsworth, I. Fakih, A. Favron, E. Gaufrès, G. Gervais, R. Martel & T. Szkopek. Nature Communications 6, Article number: 7702 doi:10.1038/ncomms8702 Published 07 July 2015

This is an open access paper.

The second piece of research into black phosphorus is courtesy of an international collaboration.

A phosphorene transistor

A July 9, 2015 Technical University of Munich (TUM) press release (also on EurekAlert) describes the formation of a phosphorene transistor made possible by the introduction of arsenic,

Chemists at the Technische Universität München (TUM) have now developed a semiconducting material in which individual phosphorus atoms are replaced by arsenic. In a collaborative international effort, American colleagues have built the first field-effect transistors from the new material.

For many decades silicon has formed the basis of modern electronics. To date, silicon technology has provided ever tinier transistors for smaller and smaller devices. But the size of silicon transistors is reaching its physical limit. Consumers would also like to have flexible devices, devices that can be incorporated into clothing and the like. However, silicon is hard and brittle. All this has triggered a race for new materials that might one day replace silicon.

Black arsenic phosphorus might be such a material. Like graphene, which consists of a single layer of carbon atoms, it forms extremely thin layers. The array of possible applications ranges from transistors and sensors to mechanically flexible semiconductor devices. Unlike graphene, whose electronic properties are similar to those of metals, black arsenic phosphorus behaves like a semiconductor.

The press release goes on to provide more detail about the collaboration and the research,

A cooperation between the Technical University of Munich and the University of Regensburg on the German side and the University of Southern California (USC) and Yale University in the United States has now, for the first time, produced a field effect transistor made of black arsenic phosphorus. The compounds were synthesized by Marianne Koepf at the laboratory of the research group for Synthesis and Characterization of Innovative Materials at the TUM. The field effect transistors were built and characterized by a group headed by Professor Zhou and Dr. Liu at the Department of Electrical Engineering at USC.

The new technology developed at TUM allows the synthesis of black arsenic phosphorus without high pressure. This requires less energy and is cheaper. The gap between valence and conduction bands can be precisely controlled by adjusting the arsenic concentration. “This allows us to produce materials with previously unattainable electronic and optical properties in an energy window that was hitherto inaccessible,” says Professor Tom Nilges, head of the research group for Synthesis and Characterization of Innovative Materials.

Detectors for infrared

With an arsenic concentration of 83 percent the material exhibits an extremely small band gap of only 0.15 electron volts, making it well suited to sensors that can detect long wavelength infrared radiation. LiDAR (Light Detection and Ranging) sensors operate in this wavelength range, for example. They are used, among other things, as distance sensors in automobiles. Another application is the measurement of dust particles and trace gases in environmental monitoring.
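The connection between that 0.15 eV band gap and long-wavelength infrared detection can be sanity-checked with the standard photon-energy relation: a gap E (in electron volts) corresponds to a cutoff wavelength λ = hc/E ≈ 1240 nm·eV / E. Here is a minimal sketch of that conversion; the 0.15 eV figure comes from the press release, while the conversion itself is textbook semiconductor physics:

```python
# Convert a semiconductor band gap (eV) to its photodetection cutoff wavelength.
# A photon with wavelength longer than hc/E lacks the energy to bridge the gap.

HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm (CODATA value)

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """Longest wavelength (in nm) a semiconductor with this gap can absorb."""
    return HC_EV_NM / band_gap_ev

# The 0.15 eV gap quoted for 83%-arsenic black arsenic phosphorus:
print(round(cutoff_wavelength_nm(0.15) / 1000, 1))  # cutoff in micrometres -> 8.3
```

A cutoff near 8 micrometres puts the material at the edge of the long-wavelength infrared band, consistent with the detection applications the release describes.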

A further interesting aspect of these new, two-dimensional semiconductors is their anisotropic electronic and optical behavior. The material exhibits different characteristics along the x- and y-axes in the same plane. To produce graphene-like films, the material can be peeled off in ultra-thin layers. The thinnest films obtained so far are only two atomic layers thick.

Here’s a link to and a citation for the paper,

Black Arsenic–Phosphorus: Layered Anisotropic Infrared Semiconductors with Highly Tunable Compositions and Properties by Bilu Liu, Marianne Köpf, Ahmad N. Abbas, Xiaomu Wang, Qiushi Guo, Yichen Jia, Fengnian Xia, Richard Weihrich, Frederik Bachhuber, Florian Pielnhofer, Han Wang, Rohan Dhall, Stephen B. Cronin, Mingyuan Ge, Xin Fang, Tom Nilges, and Chongwu Zhou. Advanced Materials, DOI: 10.1002/adma.201501758 Article first published online: 25 JUN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Dexter Johnson, on his Nanoclast blog (on the Institute for Electrical and Electronics Engineers website), adds more information about black phosphorus and its electrical properties in his July 9, 2015 posting about the Germany/US collaboration (Note: Links have been removed),

Black phosphorus has been around for about 100 years, but recently it has been synthesized as a two-dimensional material—dubbed phosphorene in reference to its two-dimensional cousin, graphene. Black phosphorus is quite attractive for electronic applications like field-effect transistors because of its inherent band gap and it is one of the few 2-D materials to be a natively p-type semiconductor.

One final comment, I notice the Germany-US work was published weeks prior to the Canadian research suggesting that the TUM July 9, 2015 press release is an attempt to capitalize on the interest generated by the Canadian research. That’s a smart move.

Nanomaterials and safety: Europe’s non-governmental agencies make recommendations; (US) Arizona State University initiative; and Japan’s voluntary carbon nanotube management

I have three news items which have one thing in common, they concern nanomaterials and safety. Two of these of items are fairly recent; the one about Japan has been sitting in my drafts folder for months and I’m including it here because if I don’t do it now, I never will.

First, there’s an April 7, 2014 news item on Nanowerk (h/t) about European non-governmental agencies (CIEL; the Center for International Environmental Law and its partners) and their recommendations regarding nanomaterials and safety. From the CIEL April 2014 news release,

CIEL and European partners* publish position paper on the regulation of nanomaterials at a meeting of EU competent authorities

*ClientEarth, The European Environmental Bureau, European citizen’s Organization for Standardisation, The European consumer voice in Standardisation –ANEC, and Health Care Without Harm, Bureau of European Consumers

… Current EU legislation does not guarantee that all nanomaterials on the market are safe by being assessed separately from the bulk form of the substance. Therefore, we ask the European Commission to come forward with concrete proposals for a comprehensive revision of the existing legal framework addressing the potential risks of nanomaterials.

1. Nanomaterials are different from other substances.

We are concerned that EU law does not take account of the fact that nano forms of a substance are different and have different intrinsic properties from their bulk counterpart. Therefore, we call for this principle to be explicitly established in the REACH and Classification, Labelling and Packaging (CLP) regulations, as well as in all other relevant legislation. To ensure adequate consideration, the submission of comprehensive substance identity and characterization data for all nanomaterials on the market, as defined by the Commission’s proposal for a nanomaterial definition, should be required.

Similarly, we call on the European Commission and EU Member States to ensure that nanomaterials do not benefit from the delays granted under REACH to phase-in substances, on the basis of information collected on their bulk form.

Further, nanomaterials, due to their properties, are generally much more reactive than their bulk counterpart, thereby increasing the risk of harmful impact of nanomaterials compared to an equivalent mass of bulk material. Therefore, the present REACH thresholds for the registration of nanomaterials should be lowered.

Before 2018, all nanomaterials on the market produced in amounts of over 10kg/year must be registered with ECHA on the basis of a full registration dossier specific to the nanoform.

2. Risk from nanomaterials must be assessed

Six years after the entry into force of the REACH registration requirements, only nine substances have been registered as nanomaterials despite the much wider number of substances already on the EU market, as demonstrated by existing inventories. Furthermore, the poor quality of those few nano registration dossiers does not enable their risks to be properly assessed. To confirm the conclusions of the Commission’s nano regulatory review assuming that not all nanomaterials are toxic, relevant EU legislation should be amended to ensure that all nanomaterials are adequately assessed for their hazardous properties.

Given the concerns about novel properties of nanomaterials, under REACH, all registration dossiers of nanomaterials must include a chemical safety assessment and must comply with the same information submission requirements currently required for substances classified as Carcinogenic, Mutagenic or Reprotoxic (CMRs).

3. Nanomaterials should be thoroughly evaluated

Pending the thorough risk assessment of nanomaterials demonstrated by comprehensive and up-to-date registration dossiers for all nanoforms on the market, we call on ECHA to systematically check compliance for all nanoforms, as well as check the compliance of all dossiers which, due to uncertainties in the description of their identity and characterization, are suspected of including substances in the nanoform. Further, the Community Rolling Action Plan (CoRAP) list should include all identified substances in the nanoform and evaluation should be carried out without delay.

4. Information on nanomaterials must be collected and disseminated

All EU citizens have the right to know which products contain nanomaterials as well as the right to know about their risks to health and environment and overall level of exposure. Given the uncertainties surrounding nanomaterials, the Commission must guarantee that members of the public are in a position to exercise their right to know and to make informed choices pending thorough risk assessments of nanomaterials on the market.

Therefore, a publicly accessible inventory of nanomaterials and consumer products containing nanomaterials must be established at European level. Moreover, specific nano-labelling or declaration requirements must be established for all nano-containing products (detergents, aerosols, sprays, paints, medical devices, etc.) in addition to those applicable to food, cosmetics and biocides which are required under existing obligations.

5. REACH enforcement activities should tackle nanomaterials

REACH’s fundamental principle of “no data, no market” should be thoroughly implemented. Therefore, nanomaterials that are on the market without a meaningful minimum set of data to allow the assessment of their hazards and risks should be denied market access through enforcement activities. In the meantime, we ask the EU Member States and manufacturers to use a precautionary approach in the assessment, production, use and disposal of nanomaterials.

This comes on the heels of CIEL’s March 2014 news release announcing a new three-year joint project concerning nanomaterials and safety and responsible development,

Supported by the VELUX foundations, CIEL and ECOS (the European Citizen’s Organization for Standardization) are launching a three-year project aiming to ensure that risk assessment methodologies and risk management tools help guide regulators towards the adoption of a precaution-based regulatory framework for the responsible development of nanomaterials in the EU and beyond.

Together with our project partner the German Öko-Institut, CIEL and ECOS will participate in the work of the standardization organizations Comité Européen de Normalisation and International Standards Organization, and the work of the OECD [Organization for Economic Cooperation and Development], especially related to health, environmental and safety aspects of nanomaterials and exposure and risk assessment. We will translate progress into understandable information and issue policy recommendations to guide regulators and support environmental NGOs in their campaigns for the safe and sustainable production and use of nanomaterials.

The VILLUM FOUNDATION and the VELUX FOUNDATION are non-profit foundations created by Villum Kann Rasmussen, the founder of the VELUX Group and other entities in the VKR Group, whose mission it is to bring daylight, fresh air and a better environment into people’s everyday lives.

Meanwhile in the US, an April 6, 2014 news item on Nanowerk announces a new research network, based at Arizona State University (ASU), devoted to studying health and environmental risks of nanomaterials,

Arizona State University researchers will lead a multi-university project to aid industry in understanding and predicting the potential health and environmental risks from nanomaterials.

Nanoparticles, which are approximately 1 to 100 nanometers in size, are used in an increasing number of consumer products to provide texture, resiliency and, in some cases, antibacterial protection.

The U.S. Environmental Protection Agency (EPA) has awarded a grant of $5 million over the next four years to support the LCnano Network as part of the Life Cycle of Nanomaterials project, which will focus on helping to ensure the safety of nanomaterials throughout their life cycles – from the manufacture to the use and disposal of the products that contain these engineered materials.

An April 1, 2014 ASU news release, which originated the news item, provides more details and includes information about project partners which I’m happy to note include nanoHUB and the Nanoscale Informal Science Education Network (NISENet) in addition to the other universities,

Paul Westerhoff is the LCnano Network director, as well as the associate dean of research for ASU’s Ira A. Fulton Schools of Engineering and a professor in the School of Sustainable Engineering and the Built Environment.

The project will team engineers, chemists, toxicologists and social scientists from ASU, Johns Hopkins, Duke, Carnegie Mellon, Purdue, Yale, Oregon’s state universities, the Colorado School of Mines and the University of Illinois-Chicago.

Engineered nanomaterials of silver, titanium, silica and carbon are among the most commonly used. They are dispersed in common liquids and food products, embedded in the polymers from which many products are made and attached to textiles, including clothing.

Nanomaterials provide clear benefits for many products, Westerhoff says, but there remains “a big knowledge gap” about how, or if, nanomaterials are released from consumer products into the environment as they move through their life cycles, eventually ending up in soils and water systems.

“We hope to help industry make sure that the kinds of products that engineered nanomaterials enable them to create are safe for the environment,” Westerhoff says.

“We will develop molecular-level fundamental theories to ensure the manufacturing processes for these products are safer,” he explains, “and provide databases of measurements of the properties and behavior of nanomaterials before, during and after their use in consumer products.”

Among the bigger questions the LCnano Network will investigate are whether nanomaterials can become toxic through exposure to other materials or the biological environs they come in contact with over the course of their life cycles, Westerhoff says.

The researchers will collaborate with industry – both large and small companies – and government laboratories to find ways of reducing such uncertainties.

Among the objectives is to provide a framework for product design and manufacturing that preserves the commercial value of the products using nanomaterials, but minimizes potentially adverse environmental and health hazards.

In pursuing that goal, the network team will also be developing technologies to better detect and predict potential nanomaterial impacts.

Beyond that, the LCnano Network also plans to increase awareness about efforts to protect public safety as engineered nanomaterials in products become more prevalent.

The grant will enable the project team to develop educational programs, including a museum exhibit about nanomaterials based on the LCnano Network project. The exhibit will be deployed through a partnership with the Arizona Science Center and researchers who have worked with the Nanoscale Informal Science Education Network.

The team also plans to make information about its research progress available on the nanotechnology industry website Nanohub.org.

“We hope to use Nanohub both as an internal virtual networking tool for the research team, and as a portal to post the outcomes and products of our research for public access,” Westerhoff says.

The grant will also support the participation of graduate students in the Science Outside the Lab program, which educates students on how science and engineering research can help shape public policy.

Other ASU faculty members involved in the LCnano Network project are:

• Pierre Herckes, associate professor, Department of Chemistry and Biochemistry, College of Liberal Arts and Sciences
• Kiril Hristovski, assistant professor, Department of Engineering, College of Technology and Innovation
• Thomas Seager, associate professor, School of Sustainable Engineering and the Built Environment
• David Guston, professor and director, Consortium for Science, Policy and Outcomes
• Ira Bennett, assistant research professor, Consortium for Science, Policy and Outcomes
• Jameson Wetmore, associate professor, Consortium for Science, Policy and Outcomes, and School of Human Evolution and Social Change

I hope to hear more about the LCnano Network as it progresses.

Finally, there was this Nov. 12, 2013 news item on Nanowerk about instituting voluntary safety protocols for carbon nanotubes in Japan,

Technology Research Association for Single Wall Carbon Nanotubes (TASC)—a consortium of nine companies and the National Institute of Advanced Industrial Science and Technology (AIST) — is developing voluntary safety management techniques for carbon nanotubes (CNTs) under the project (no. P10024) “Innovative carbon nanotubes composite materials project toward achieving a low-carbon society,” which is sponsored by the New Energy and Industrial Technology Development Organization (NEDO).

Lynn Bergeson’s Nov. 15, 2013 posting on nanotech.lawbc.com provides a few more details about the TASC/AIST carbon nanotube project (Note: A link has been removed),

Japan’s National Institute of Advanced Industrial Science and Technology (AIST) announced in October 2013 a voluntary guidance document on measuring airborne carbon nanotubes (CNT) in workplaces. … The guidance summarizes the available practical methods for measuring airborne CNTs:  (1) on-line aerosol measurement; (2) off-line quantitative analysis (e.g., thermal carbon analysis); and (3) sample collection for electron microscope observation. …

You can download the two protocol documents (Guide to measuring airborne carbon nanotubes in workplaces and The protocols of preparation, characterization and in vitro cell based assays for safety testing of carbon nanotubes; a third has been published since Nov. 2013) from the AIST’s Developing voluntary safety management techniques for carbon nanotubes (CNTs): Protocol and Guide webpage. The documents are also available in Japanese, and you can link to the Japanese-language version of the site from that webpage.