
Communicating science effectively—a December 2016 book from the US National Academy of Sciences

I stumbled across this Dec. 13, 2016 essay/book announcement by Dr. Andrew Maynard and Dr. Dietram A. Scheufele on The Conversation,

Many scientists and science communicators have grappled with disregard for, or inappropriate use of, scientific evidence for years – especially around contentious issues like the causes of global warming, or the benefits of vaccinating children. A long-debunked study on links between vaccinations and autism, for instance, cost the researcher his medical license but continues to keep vaccination rates lower than they should be.

Only recently, however, have people begun to think systematically about what actually works to promote better public discourse and decision-making around what is sometimes controversial science. Of course scientists would like to rely on evidence, generated by research, to gain insights into how to most effectively convey to others what they know and do.

As it turns out, the science on how to best communicate science across different issues, social settings and audiences has not led to easy-to-follow, concrete recommendations.

About a year ago, the National Academies of Sciences, Engineering and Medicine brought together a diverse group of experts and practitioners to address this gap between research and practice. The goal was to apply scientific thinking to the process of how we go about communicating science effectively. Both of us were a part of this group (with Dietram as the vice chair).

The public draft of the group’s findings – “Communicating Science Effectively: A Research Agenda” – has just been published. In it, we take a hard look at what effective science communication means and why it’s important; what makes it so challenging – especially where the science is uncertain or contested; and how researchers and science communicators can increase our knowledge of what works, and under what conditions.

At some level, all science communication has embedded values. Information always comes wrapped in a complex skein of purpose and intent – even when presented as impartial scientific facts. Despite, or maybe because of, this complexity, there remains a need to develop a stronger empirical foundation for effective communication of and about science.

Addressing this, the National Academies draft report makes an extensive number of recommendations. A few in particular stand out:

  • Use a systems approach to guide science communication. In other words, recognize that science communication is part of a larger network of information and influences that affect what people and organizations think and do.
  • Assess the effectiveness of science communication. Yes, researchers try, but often we still engage in communication first and evaluate later. Better to design the best approach to communication based on empirical insights about both audiences and contexts. Very often, the technical risks that scientists think must be communicated have nothing to do with the hopes or concerns public audiences have.
  • Get better at meaningful engagement between scientists and others to enable that “honest, bidirectional dialogue” about the promises and pitfalls of science that our committee chair Alan Leshner and others have called for.
  • Consider social media’s impact – positive and negative.
  • Work toward better understanding when and how to communicate science around issues that are contentious, or potentially so.

The paper version of the book has a cost but you can get a free online version. Unfortunately, I cannot copy and paste the book’s table of contents here, and I was not able to find a book index, although there is a handy list of reference texts.

I have taken a very quick look at the book. If you’re in the field, it’s definitely worth a look. It is, however, written for and by academics. If you look at the list of writers and reviewers, you will find over 90% are professors at one university or another. That said, I was happy to see Dan Kahan’s work at the Yale Law School’s Cultural Cognition Project cited. As it happens, they weren’t able to cite his latest work [***see my xxx, 2017 curiosity post***], released about a month after “Communicating Science Effectively: A Research Agenda.”

I was unable to find any reference to science communication via popular culture. I’m a little dismayed, as I feel this is a source of information seriously ignored by science communication specialists and academicians but not by the folks at MIT (Massachusetts Institute of Technology), who announced a wearable emotion-detection system in the same week it was featured in an episode of the US television comedy The Big Bang Theory. Here’s more about MIT’s emotion-detection system from a Feb. 1, 2017 news release (also on EurekAlert),

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.

Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”

Results

Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

This research was made possible in part by the Samsung Strategy and Innovation Center.
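
For readers who like to see ideas in code, the detail about organizing different features at different layers of the network can be sketched quickly. What follows is a minimal, hypothetical PyTorch illustration of that idea for classifying five-second windows as positive, negative or neutral; the feature dimensions, layer sizes and fusion point are my own assumptions, not specifics from the MIT work.

# Hypothetical sketch of a "late-injection" multimodal classifier: the raw-ish
# audio/physiological features enter at the bottom of the network, while the
# more abstract text-sentiment features are fused in at a deeper layer.
# Dimensions and architecture are illustrative assumptions, not the MIT design.
import torch
import torch.nn as nn

class WindowSentimentNet(nn.Module):
    def __init__(self, n_signal_feats=32, n_text_feats=8, n_classes=3):
        super().__init__()
        # lower layers process the per-window audio/physiological features
        self.signal_net = nn.Sequential(
            nn.Linear(n_signal_feats, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # the text features skip the lower layers and join at a deeper stage
        self.head = nn.Sequential(
            nn.Linear(32 + n_text_feats, 32), nn.ReLU(),
            nn.Linear(32, n_classes),  # positive / negative / neutral
        )

    def forward(self, signal_feats, text_feats):
        h = self.signal_net(signal_feats)
        return self.head(torch.cat([h, text_feats], dim=-1))

# one batch of hypothetical five-second windows -> class logits
model = WindowSentimentNet()
logits = model(torch.randn(4, 32), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 3])

Nothing hangs on the particular numbers; the point is simply that the lower-level signal features pass through the early layers while the more abstract text features join the computation further up, which is the kind of organization the researchers say improved performance.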

Episode 14 of season 10 of The Big Bang Theory was titled “The Emotion Detection Automation” (full episode can be found on this webpage) and broadcast on Feb. 2, 2017. There’s also a Feb. 2, 2017 recap (recapitulation) by Lincee Ray for EW.com (it seems Ray is unaware that there really is such a machine),

Who knew we would see the day when Sheldon and Raj figured out solutions for their social ineptitudes? Only The Big Bang Theory writers would think to tackle our favorite physicists’ lack of social skills with an emotion detector and an ex-girlfriend focus group. It’s been a while since I enjoyed both storylines as much as I did in this episode. That’s no bazinga.

When Raj tells the guys that he is back on the market, he wonders out loud what is wrong with his game. Why do women reject him? Sheldon receives the information like a scientist and runs through many possible answers. Raj shuts him down with a simple, “I’m fine.”

Sheldon is irritated when he learns that this obligatory remark is a mask for what Raj is really feeling. It turns out, Raj is not fine. Sheldon whines, wondering why no one just says exactly what’s on their mind. It’s quite annoying for those who struggle with recognizing emotional cues.

Lo and behold, Bernadette recently read about a gizmo that was created for people who have this exact same anxiety. MIT has a prototype, and because Howard is an alum, he can probably submit Sheldon’s name as a beta tester.

Of course this is a real thing. If anyone can build an emotion detector, it’s a bunch of awkward scientists with zero social skills.

This is the first time I’ve noticed an academic institution’s news release appearing almost simultaneously with a mention of its research in a popular culture television program, which suggests things have come a long way since I featured news about a webinar by the National Academies’ Science and Entertainment Exchange for film and television productions collaborating with scientists in an Aug. 28, 2012 post.

One last science/popular culture moment: Hidden Figures, a movie about the African American women who were human computers supporting NASA (US National Aeronautics and Space Administration) efforts during the 1960s space race to get a man on the moon, was (shockingly) no. 1 at the US box office for a few weeks (there’s more about the movie here in my Sept. 2, 2016 post covering then-upcoming movies featuring science). After the movie was released, Mary Elizabeth Williams wrote up a Jan. 23, 2017 interview with the ‘Hidden Figures’ scriptwriter for Salon.com,

I [Allison Schroeder] got on the phone with her [co-producer Renee Witt] and Donna  [co-producer Donna Gigliotti] and I said, “You have to hire me for this; I was born to write this.” Donna sort of rolled her eyes and was like, “God, these Hollywood types would say anything.” I said, “No, no, I grew up at Cape Canaveral. My grandmother was a computer programmer at NASA, my grandfather worked on the Mercury prototype, and I interned there all through high school and then the summer after my freshman year at Stanford I interned. I worked at a missile launch company.”

She was like, “OK that’s impressive.” And I said, “No, I literally grew up climbing on the Mercury capsule — hitting all the buttons, trying to launch myself into space.”

She said, “Well do you think you can handle the math?” I said that I had to study a certain amount of math at Stanford for my economics degree. She said, “Oh, all right, that sounds pretty good.”

I pitched her a few scenes. I pitched her the end of the movie that you saw with Katherine running the numbers as John Glenn is trying to get up in space. I pitched her the idea of one of the women as a mechanic and to see her legs underneath the engine. You’re used to seeing a guy like that, but what would it be like to see heels and pantyhose and a skirt and she’s a mechanic and fixing something? Those are some of the scenes that I pitched them, and I got the job.

I love that the film begins with setting up their mechanical aptitude. You set up these are women; you set up these women of color. You set up exactly what that means in this moment in history. It’s like you just go from there.

I was on a really tight timeline because this started as an indie film. It was just Donna Gigliotti, Renee Witt, me and the author Margot Lee Shetterly for about a year working on it. I was only given four weeks for research and 12 weeks for writing the first draft. I’m not sure if I hadn’t known NASA and known the culture and just knew what the machines would look like, knew what the prototypes looked like, if I could have done it that quickly. I turned in that draft and Donna was like, “OK you’ve got the math and the science; it’s all here. Now go have fun.” Then I did a few more drafts and that was really enjoyable because I could let go of the fact I did it and make sure that the characters and the drive of the story and everything just fit what needed to happen.

For anyone interested in the science/popular culture connection, David Bruggeman of the Pasco Phronesis blog does a better job than I do of keeping up with the latest doings.

Getting back to ‘Communicating Science Effectively: A Research Agenda’, even without a mention of popular culture, it is a thoughtful book on the topic.

Developing cortical implants for future speech neural prostheses

I’m guessing that graphene will feature in these proposed cortical implants since the project leader is a member of the Graphene Flagship’s Biomedical Technologies Work Package. (For those who don’t know, the Graphene Flagship is one of two major funding initiatives, each receiving 1B Euros over 10 years from the European Commission as part of its FET [Future and Emerging Technologies] Initiative.) A Jan. 12, 2017 news item on Nanowerk announces the new project (Note: A link has been removed),

BrainCom is a FET Proactive project, funded by the European Commission with 8.35M€ [8.35 million Euros] for the next 5 years, holding its Kick-off meeting on January 12-13 at ICN2 (Catalan Institute of Nanoscience and Nanotechnology) and the UAB [Universitat Autònoma de Barcelona]. This project, coordinated by ICREA [Catalan Institution for Research and Advanced Studies] Research Prof. Jose A. Garrido from ICN2, will permit significant advances in understanding of cortical speech networks and the development of speech rehabilitation solutions using innovative brain-computer interfaces.

A Jan. 12, 2017 ICN2 press release, which originated the news item expands on the theme (it is a bit repetitive),

More than 5 million people worldwide suffer annually from aphasia, an extremely debilitating condition in which patients lose the ability to comprehend and formulate language after brain damage or in the course of neurodegenerative disorders. Brain-computer interfaces (BCIs), enabled by forefront technologies and materials, are a promising approach to treat patients with aphasia. The principle of BCIs is to collect neural activity at its source and decode it by means of electrodes implanted directly in the brain. However, neurorehabilitation of higher cognitive functions such as language raises serious issues. The current challenge is to design neural implants that cover sufficiently large areas of the brain to allow for reliable decoding of detailed neuronal activity distributed in various brain regions that are key for language processing.

BrainCom is a FET Proactive project funded by the European Commission with 8.35M€ for the next 5 years. This interdisciplinary initiative involves 10 partners including technologists, engineers, biologists, clinicians, and ethics experts. They aim to develop a new generation of neuroprosthetic cortical devices enabling large-scale recordings and stimulation of cortical activity to study high level cognitive functions. Ultimately, the BrainCom project will seed a novel line of knowledge and technologies aimed at developing the future generation of speech neural prostheses. It will cover different levels of the value chain: from technology and engineering to basic and language neuroscience, and from preclinical research in animals to clinical studies in humans.

This recently funded project is coordinated by ICREA Prof. Jose A. Garrido, Group Leader of the Advanced Electronic Materials and Devices Group at the Institut Català de Nanociència i Nanotecnologia (Catalan Institute of Nanoscience and Nanotechnology – ICN2) and deputy leader of the Biomedical Technologies Work Package presented last year in Barcelona by the Graphene Flagship. The BrainCom Kick-Off meeting is held on January 12-13 at ICN2 and the Universitat Autònoma de Barcelona (UAB).

Recent developments show that it is possible to record cortical signals from a small region of the motor cortex and decode them to allow tetraplegic [also known as quadriplegic] people to activate a robotic arm to perform everyday life actions. Brain-computer interfaces have also been successfully used to help tetraplegic patients unable to speak to communicate their thoughts by selecting letters on a computer screen using non-invasive electroencephalographic (EEG) recordings. The performance of such technologies can be dramatically increased using more detailed cortical neural information.

The BrainCom project proposes a radically new electrocorticography technology taking advantage of the unique mechanical and electrical properties of novel nanomaterials such as graphene, 2D materials and organic semiconductors. The consortium members will fabricate ultra-flexible cortical and intracortical implants, which will be placed right on the surface of the brain, enabling high density recording and stimulation sites over a large area. This approach will allow the parallel stimulation and decoding of cortical activity with unprecedented spatial and temporal resolution.

These technologies will help to advance the basic understanding of cortical speech networks and to develop rehabilitation solutions to restore speech using innovative brain-computer paradigms. The technology innovations developed in the project will also find applications in the study of other high cognitive functions of the brain such as learning and memory, as well as other clinical applications such as epilepsy monitoring.

The BrainCom project Consortium members are:

  • Catalan Institute of Nanoscience and Nanotechnology (ICN2) – Spain (Coordinator)
  • Institute of Microelectronics of Barcelona (CNM-IMB-CSIC) – Spain
  • University Grenoble Alpes – France
  • ARMINES/ Ecole des Mines de St. Etienne – France
  • Centre Hospitalier Universitaire de Grenoble – France
  • Multichannel Systems – Germany
  • University of Geneva – Switzerland
  • University of Oxford – United Kingdom
  • Ludwig-Maximilians-Universität München – Germany
  • Wavestone – Luxembourg

There doesn’t seem to be a website for the project but there is a BrainCom webpage on the European Commission’s CORDIS (Community Research and Development Information Service) website.

Nanoview report published by Germany’s Federal Institute for Risk Assessment

According to a Dec. 13, 2016 posting by Lynn L. Bergeson and Carla N. Hutton for the National Law Review blog, the German government has released a report on nanotechnology, perceptions of risk, and communication strategies,

On November 15, 2016, Germany’s Federal Institute for Risk Assessment (BfR) published a report, in English, entitled Nanoview — Influencing factors on the perception of nanotechnology and target group-specific risk communication strategies. In 2007, BfR conducted a survey concerning the public perception of nanotechnology. Given the newness of nanotechnology and that large sections of the population did not have any definite opinions or knowledge of it, BfR conducted a follow-up survey, Nanoview, in 2012. Nanoview also included the additional question of which communication measures for conveying risk information regarding nanotechnology are best suited to reach the majority of the population. …  The report states that, given the findings from the 2007 representative survey, which confirmed gender-specific differences in the perception of nanotechnology, ideal-typical male and ideal-typical female concepts were developed. Focus groups then reviewed and optimized the conceptual considerations.  According to the report, the ideal-typical male concept met the expectations of the male target groups (nano-types “supporters” and “cautious observers”).

…  According to the report, the conceptual approach of the ideal-typical female concept met the expectations of the female target groups (nano-types “sceptics” and “cautious observers”), as well as catering to the information needs of some men (“cautious observers”).  …

The report concludes that, with regard to the central communication measure, creating an information portal on the Internet appears to be the most meaningful strategy. … The report states: “The ideal-typical male concept is geared towards the provision of information on scientific, technical and application-related aspects of nanotechnology, for example. The ideal-typical female concept focuses on the provision of information on application-related aspects of nanotechnology and support for everyday (purchase) decisions.”

I have quickly gone through the report and it’s interesting to note that the age range surveyed in 2012 was 16 to 60. Presumably Germany is in a similar position to other European countries, Canada, the US, and others in that the main portion of the population is ageing and that population is living longer; consequently, it seems odd to have excluded people over the age of 60.

I found more details about the gender differences expressed regarding nanotechnology, from Nanoview — Influencing factors on the perception of nanotechnology and target group-specific risk communication strategies,

For the following findings, there were numerous significant differences for the variables gender and age:

  • Women are on the whole more sceptical towards nanotechnology than men; i.e.
    – men tend to be more in favour of nano applications than women
    – men take a more positive view than women of the risk-benefit ratio in general and in connection with specific applications
    – men have a far better feeling about nanotechnology than women
    – when it comes to information about nanotechnology, men have more faith in the government than women; women have more faith than men in environmental organisations as well as health and work safety authorities
    – in some areas, men have a far more positive attitude towards nanotechnology than women
  • Younger people are on the whole more open-minded about nanotechnology than older people; i.e.
    – younger people tend to be more in favour of nano applications than older people. The cohort of 16 to 30-year-olds is in some cases far more open-minded than the population overall
    – younger people take a (slightly) more positive view than older people of the risk-benefit ratio in general and in connection with specific applications
    – in some areas, younger people have a far more positive attitude towards nanotechnology than older people

In contrast, there are few to hardly any significant differences for the variables “education”, “size of household”, “income” and “migration background”. [p. 77]

I also found this to be of interest,

In recent years, there has been little or no change in awareness levels among the general population with regard to nanotechnology. This is shown by a comparison of the representative Germany-wide surveys on the risk perception of nanotechnology among the population conducted in 2007 and 2012 (cf. Chapter 0). In response to the open question regarding nanotechnology, around 40% of respondents in the 2012 survey say they had not previously heard of nanotechnology or nanomaterials (cf. Chapter 4.2.2). At the same time, however, those respondents who did know about the topic were able to make fairly differentiated statements on individual issues and applications. The risk-benefit ratio of nanotechnology is seen slightly more critically than five years previously, and the general attitude towards nanotechnology has become less favourable. The subjective feeling of being informed about the issue is also still less pronounced than is the case with other innovative technologies. From the point of view of consumers, therefore, this means that an information deficit still exists when it comes to nanotechnology. (p. 83)

It seems to be true everywhere. Awareness of nanotechnology does not seem to change much.

This is a 162-page report, which recommends risk communication strategies for nanotechnology,

The findings of the representative survey underline the need to inform the public at the earliest possible date about scientific knowledge as well as the potential and possible risks of nanotechnology. For this reason, the challenge was to develop two alternative target group-specific risk communication concepts. The drafting of these concepts was a two-phase process and took account not only of the prior work done in the research project but also of the insights gained from two group discussions with consumers (focus groups). Against the backdrop of the findings from the representative survey, which confirmed the gender-specific differences in the perception of nanotechnology, it was decided in consultation with the client to develop an ideal-typical male and an ideal-typical female concept. … (p. 100)

This returns us to the beginning with the Bergeson/Hutton post. For more details you do need to read the report. By the way, the literature survey is quite broad and interesting, bringing together more than 20 surveys to provide an international (largely Eurocentric) perspective.

How does ice melt? Layer by layer!

A Dec. 12, 2016 news item on ScienceDaily announces the answer to a problem scientists have been investigating for over a century, but first, here are the questions,

We all know that ice melts at 0°C. However, 150 years ago the famous physicist Michael Faraday discovered that at the surface of frozen ice, well below 0°C, a thin film of liquid-like water is present. This thin film makes ice slippery and is crucial for the motion of glaciers.

Since Faraday’s discovery, the properties of this water-like layer have been the research topic of scientists all over the world, which has entailed considerable controversy: at what temperature does the surface become liquid-like? How does the thickness of the layer depend on temperature? How does the thickness of the layer increase with temperature? Continuously? Stepwise? Experiments to date have generally shown a very thin layer, which continuously grows in thickness up to 45 nm right below the bulk melting point at 0°C. This also illustrates why it has been so challenging to study this layer of liquid-like water on ice: 45 nm is about 1/1000th part of a human hair and is not discernible by eye.

Scientists of the Max Planck Institute for Polymer Research (MPI-P), in a collaboration with researchers from the Netherlands, the USA and Japan, have succeeded in studying the properties of this quasi-liquid layer on ice at the molecular level using advanced surface-specific spectroscopy and computer simulations. The results are published in the latest edition of the scientific journal Proceedings of the National Academy of Sciences (PNAS).

Caption: Ice melts layer by layer, as described in the text. Credit: © MPIP

A Dec. 12, 2016 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, goes on to answer the questions,

The team of scientists around Ellen Backus, group leader at MPI-P, investigated how the thin liquid layer is formed on ice, how it grows with increasing temperature, and if it is distinguishable from normal liquid water. These studies required well-defined ice crystal surfaces. Therefore much effort was put into creating ~10 cm large single crystals of ice, which could be cut in such a way that the surface structure was precisely known. To investigate whether the surface was solid or liquid, the team made use of the fact that water molecules in the liquid have a weaker interaction with each other compared to water molecules in ice. Using their interfacial spectroscopy, combined with the controlled heating of the ice crystal, the researchers were able to quantify the change in the interaction between water molecules directly at the interface between ice and air.

The experimental results, combined with the simulations, showed that the first molecular layer at the ice surface has already melted at temperatures as low as -38° C (235 K), the lowest temperature the researchers could experimentally investigate. Upon increasing the temperature to -16° C (257 K), the second layer becomes liquid. Contrary to popular belief, the surface melting of ice is not a continuous process, but occurs in a discontinuous, layer-by-layer fashion.

“A further important question for us was, whether one could distinguish between the properties of the quasi-liquid layer and those of normal water” says Mischa Bonn, co-author of the paper and director at the MPI-P. And indeed, the quasi-liquid layer at -4° C (269 K) shows a different spectroscopic response than supercooled water at the same temperature; in the quasi-liquid layer, the water molecules seem to interact more strongly than in liquid water.

The results are not only important for a fundamental understanding of ice, but also for climate science, where much research takes place on catalytic reactions on ice surfaces, for which the understanding of the ice surface structure is crucial.
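
To make the ‘continuously or stepwise’ question concrete, here is a toy calculation of my own (it is not the authors’ analysis): it simply counts molten surface bilayers from the two onset temperatures quoted above, roughly -38°C for the first bilayer and -16°C for the second. A continuous-growth picture would instead give a smoothly thickening film.

# Toy step model (mine, not the paper's analysis): count molten bilayers
# from the onset temperatures quoted in the press release.
ONSETS_C = [-38.0, -16.0]   # first and second bilayer onsets, in degrees Celsius

def molten_bilayers(temp_c):
    """Number of quasi-liquid bilayers at a given temperature in this step model."""
    return sum(temp_c >= onset for onset in ONSETS_C)

for t in (-45, -38, -30, -16, -5):
    print(f"{t:6.1f} C -> {molten_bilayers(t)} molten bilayer(s)")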

Here’s a link to and a citation for the paper,

Experimental and theoretical evidence for bilayer-by-bilayer surface melting of crystalline ice by M. Alejandra Sánchez, Tanja Kling, Tatsuya Ishiyama, Marc-Jan van Zadel, Patrick J. Bisson, Markus Mezger, Mara N. Jochum, Jenée D. Cyran, Wilbert J. Smit, Huib J. Bakker, Mary Jane Shultz, Akihiro Morita, Davide Donadio, Yuki Nagata, Mischa Bonn, and Ellen H. G. Backus. Proceedings of the National Academy of Sciences, 2016 DOI: 10.1073/pnas.1612893114 Published online before print December 12, 2016

This paper appears to be open access.

More on the blue tarantula noniridescent photonics

This research was covered here in an Oct. 19, 2016 posting; now some new details have been released about noniridescent photonics and blue tarantulas, this time from the Karlsruhe Institute of Technology (KIT) in a Nov. 17, 2016 (?) press release (also on EurekAlert; h/t Nanowerk Nov. 17, 2016 news item),

Colors are produced in a variety of ways. The best-known sources of color are pigments. However, the very bright colors of the blue tarantula or peacock feathers do not result from pigments, but from nanostructures that cause the reflected light waves to overlap. This produces extraordinarily dynamic color effects. Scientists from Karlsruhe Institute of Technology (KIT), in cooperation with international colleagues, have now succeeded in replicating nanostructures that generate the same color irrespective of the viewing angle. DOI: 10.1002/adom.201600599

In contrast to pigments, structural colors are non-toxic, more vibrant and durable. In industrial production, however, they have the drawback of being strongly iridescent, which means that the color perceived depends on the viewing angle. An example is the rear side of a CD. Hence, such colors cannot be used for all applications. Bright colors of animals, by contrast, are often independent of the angle of view. Feathers of the kingfisher always appear blue, no matter from which angle we look. The reason lies in the nanostructures: While regular structures are iridescent, amorphous or irregular structures always produce the same color. Yet, industry can only produce regular nanostructures in an economically efficient way.

Radwanul Hasan Siddique, a researcher at KIT, in collaboration with scientists from the USA and Belgium, has now discovered that the blue tarantula does not exhibit iridescence in spite of periodic structures on its hairs. First, their study revealed that the hairs have a multi-layered, flower-like structure. Then, the researchers analyzed its reflection behavior with the help of computer simulations. In parallel, they built models of these structures using nano-3D printers and optimized the models with the help of the simulations. In the end, they produced a flower-like structure that generates the same color over a viewing angle of 160 degrees. This is the largest viewing angle of any synthetic structural color reached so far.


Flower-shaped nanostructures generate the color of the blue tarantula. (Graphics: Bill Hsiung, University of Akron)

The 3D print of the optimized flower structure is only 15 µm in dimension. A human hair is about three times as thick. (Photo: Bill Hsiung, University of Akron)

Apart from the multi-layered structure and rotational symmetry, it is the hierarchical structure from micro to nano that ensures homogeneous reflection intensity and prevents color changes.

Via the size of the “flower,” the resulting color can be adjusted, which makes this coloring method interesting for industry. “This could be a key first step towards a future where structural colorants replace the toxic pigments currently used in textile, packaging, and cosmetic industries,” says Radwanul Hasan Siddique of KIT’s Institute of Microstructure Technology, who now works at the California Institute of Technology. He considers short-term application in textile industry feasible.


The synthetically generated flower structure inspired by the blue tarantula reflects light in the same color over a viewing angle of 160 degrees. (Graphics: Derek Miller)  

Dr. Hendrik Hölscher thinks that the scalability of nano-3D printing is the biggest challenge on the way towards industrial use. Only a few companies in the world are able to produce such prints. In his opinion, however, rapid development in this field will certainly solve this problem in the near future.
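
One way to appreciate the 160-degree figure is to measure iridescence as the drift of the reflectance peak while the viewing angle sweeps across the full range. The comparison below is my own toy construction, not the KIT simulation; it contrasts an angle-dependent, thin-film-like response with an idealized angle-independent one.

# Toy comparison (my own, not the KIT simulation): how far does the reflectance
# peak wavelength drift as the viewing angle sweeps over +/-80 degrees (160 total)?
import math

def peak_shift_nm(peak_wavelength_fn, half_angle_deg=80, step_deg=5):
    """Spread of the reflectance peak wavelength over a +/- half_angle sweep."""
    angles = range(-half_angle_deg, half_angle_deg + 1, step_deg)
    peaks = [peak_wavelength_fn(math.radians(a)) for a in angles]
    return max(peaks) - min(peaks)

def iridescent(theta):
    # illustrative thin-film-like response: the peak blue-shifts with angle
    return 450.0 * math.cos(theta / 2)

def noniridescent(theta):
    # idealized angle-independent response
    return 450.0

print(f"iridescent peak shift over 160 degrees:    {peak_shift_nm(iridescent):6.1f} nm")
print(f"noniridescent peak shift over 160 degrees: {peak_shift_nm(noniridescent):6.1f} nm")

A structure like the printed ‘flower’ aims to keep that peak shift near zero across the whole sweep, which is what ‘the same color over a viewing angle of 160 degrees’ means in practice.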

Once again, here’s a link to and a citation for the paper,

Tarantula-Inspired Noniridescent Photonics with Long-Range Order by Bor-Kai Hsiung, Radwanul Hasan Siddique, Lijia Jiang, Ying Liu, Yongfeng Lu, Matthew D. Shawkey, and Todd A. Blackledge. Advanced Optical Materials DOI: 10.1002/adom.201600599 Version of Record online: 11 OCT 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

The paper is behind a paywall. You can see the original Oct. 19, 2016 posting for my comments and some excerpts from the paper.

International news bits: Israel and Germany and Cuba and Iran

I have three news bits today.

Germany

From a Nov. 14, 2016 posting by Lynn L. Bergeson and Carla N. Hutton for The National Law Review (Note: A link has been removed),

The German Federal Ministry of Education and Research (BMBF) recently published an English version of its Action Plan Nanotechnology 2020. Based on the success of the Action Plan Nanotechnology over the previous ten years, the federal government will continue the Action Plan Nanotechnology for the next five years.  Action Plan Nanotechnology 2020 is geared towards the priorities of the federal government’s new “High-Tech Strategy” (HTS), which has as its objective the solution of societal challenges by promoting research.  According to Action Plan Nanotechnology 2020, the results of a number of research projects “have shown that nanomaterials are not per se linked with a risk for people and the environment due to their nanoscale properties.”  Instead, this is influenced more by structure, chemical composition, and other factors, and is thus dependent on the respective material and its application.

A Nov. 16, 2016 posting on Out-Law.com provides more detail about the plan (Note: A link has been removed),

Eight ministries have been responsible for producing a joint plan on nanotechnology every five years since 2006, the Ministry said. The ministries develop a common approach that pools strategies for action and fields of application for nanotechnology, it [Germany’s Federal Ministry of Education and Research] said.

The German public sector currently spends more than €600 million a year on nanotechnology related developments, and 2,200 organisations from industry, services, research and associations are registered in the Ministry’s nanotechnology competence map, the report said.

“There are currently also some 1,100 companies in Germany engaged [in] the use of nanotechnology in the fields of research and development as well as the marketing of commercial products and services. The proportion of SMEs [small and medium-sized enterprises] is around 75%,” it said.

Nanotechnology-based product innovations play “an increasingly important role in many areas of life, such as health and nutrition, the workplace, mobility and energy production”, and the plan “thus pursues the objective of continuing to exploit the opportunities and potential of nanotechnology in Germany, without disregarding any potential risks to humans and the environment,” the Ministry said.

Technology law expert Florian von Baum of Pinsent Masons, the law firm behind Out-Law.com, said: “The action plan aims to achieve and secure Germany’s critical lead in the still new nanotechnology field and to recognise and use the full potential of nanotechnology while taking into account possible risks and dangers of this new technology.”

…

“With the rapid pace of development and the new applications that emerge every day, the government needs to ensure that the dangers and risks are sufficiently recognised and considered. Nanotechnology will provide great and long-awaited breakthroughs in health and ecological areas, but ethical, legal and socio-economic issues must be assessed and evaluated at all stages of the innovation chain,” von Baum said.

You can find Germany’s Action Plan Nanotechnology 2020 here, all 64 pp. of it.

Israel and Germany

A Nov. 16, 2016 article by Shoshanna Solomon for The Times of Israel announces a new joint (Israel-Germany) nanotechnology fund,

Israel and Germany have set up a new three-year, €30 million plan to promote joint nanotechnology initiatives and are calling on companies and entities in both countries to submit proposals for funding for projects in this field.

“Nanotech is the industry of the future in global hi-tech and Israel has set a goal of becoming a leader of this field, while cooperating with leading European countries,” Ilan Peled, manager of Technological Infrastructure Arena at the Israel Innovation Authority, said in a statement announcing the plan.

In the past decade nanotechnology, seen by many as the tech field of the future, has focused mainly on research. Now, however, Israel’s Innovation Authority, which has set up the joint program with Germany, believes the next decade will focus on the application of this research into products — and countries are keen to set up the right ecosystem that will draw companies operating in this field to them.

Over the last decade, the country has focused on creating a “robust research foundation that can support a large industry,” the authority said, with six academic research institutes that are among the world’s most advanced.

In addition, the authority said, there are about 200 new startups that were established over the last decade in the field, many in the development stage.

I know it’s been over 70 years since the events of World War II, but this does seem like an unexpected coupling. It is heartening to see that people can resolve the unimaginable within the space of a few generations.

Iran and Cuba

A Nov. 16, 2016 Mehr News Agency press release announces a new laboratory in Cuba,

Iran is ready to build a laboratory center equipped with nanotechnology in one of the nano institutes in Cuba, Iran’s VP for Science and Technology Sorena Sattari said Tuesday [Nov. 15, 2016].

Sorena Sattari, Vice-President for Science and Technology, made the remark in a meeting with Fidel Castro Diaz-Balart, scientific adviser to the Cuban president, in Tehran on Tuesday [November 15, 2016], adding that Iran is also ready to present Cuba with a gift package including educational services related to how to operate the equipment at the lab.

During the meeting, Sattari noted Iran’s various technological achievements including exports of biotechnological medicine to Russia, the extensive nanotechnology plans for high school and university students as well as companies, the presence of about 160 companies active in the field of nanotechnology and the country’s achievements in the field of water treatment.

“We have sealed good nano agreements with Cuba, and are ready to develop our technological cooperation with this country in the field of vaccines and recombinant drugs,” he said.

Sattari maintained that the biggest e-commerce company in the Middle East is situated in Iran, adding “the company, which was only established six years ago, now sells over $3.5 million in a day, and is even bigger than similar companies in Russia.”

The Cuban official, for his part, welcomed any kind of cooperation with Iran, and thanked the Islamic Republic for its generous proposal on establishing a nanotechnology laboratory in his country.

This coupling is not quite so unexpected as Iran has been cozying up to all kinds of countries in its drive to establish itself as a nanotechnology leader.

DNA-based nanowires in your computer?

In the quest for smaller and smaller, DNA (deoxyribonucleic acid) is being exploited as never before. From a Nov. 9, 2016 news item on phys.org,

Tinier than the AIDS virus—that is currently the circumference of the smallest transistors. The industry has shrunk the central elements of their computer chips to fourteen nanometers in the last sixty years. Conventional methods, however, are hitting physical boundaries. Researchers around the world are looking for alternatives. One method could be the self-organization of complex components from molecules and atoms. Scientists at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and Paderborn University have now made an important advance: the physicists conducted a current through gold-plated nanowires, which independently assembled themselves from single DNA strands. …

A Nov. 9, 2016 HZDR press release (also on EurekAlert), which originated the news item, provides more information,

At first glance, it resembles wormy lines in front of a black background. But what the electron microscope shows up close is that the nanometer-sized structures connect two electrical contacts. Dr. Artur Erbe from the Institute of Ion Beam Physics and Materials Research is pleased about what he sees. “Our measurements have shown that an electrical current is conducted through these tiny wires.” This is not necessarily self-evident, the physicist stresses. We are, after all, dealing with components made of modified DNA. In order to produce the nanowires, the researchers combined a long single strand of genetic material with shorter DNA segments through the base pairs to form a stable double strand. Using this method, the structures independently take on the desired form.

“With the help of this approach, which resembles the Japanese paper folding technique origami and is therefore referred to as DNA-origami, we can create tiny patterns,” explains the HZDR researcher. “Extremely small circuits made of molecules and atoms are also conceivable here.” This strategy, which scientists call the “bottom-up” method, aims to turn conventional production of electronic components on its head. “The industry has thus far been using what is known as the ‘top-down’ method. Large portions are cut away from the base material until the desired structure is achieved. Soon this will no longer be possible due to continual miniaturization.” The new approach is instead oriented on nature: molecules that develop complex structures through self-assembling processes.

Golden Bridges Between Electrodes

The elements that thereby develop would be substantially smaller than today’s tiniest computer chip components. Smaller circuits could theoretically be produced with less effort. There is, however, a problem: “Genetic matter doesn’t conduct a current particularly well,” points out Erbe. He and his colleagues have therefore placed gold-plated nanoparticles on the DNA wires using chemical bonds. Using a “top-down” method – electron beam lithography – they subsequently make contact with the individual wires electronically. “This connection between the substantially larger electrodes and the individual DNA structures has come up against technical difficulties until now. By combining the two methods, we can resolve this issue. We could thus very precisely determine the charge transport through individual wires for the first time,” adds Erbe.

As the tests of the Dresden researchers have shown, a current is actually conducted through the gold-plated wires — it is, however, dependent on the ambient temperature. “The charge transport is simultaneously reduced as the temperature decreases,” describes Erbe. “At normal room temperature, the wires function well, even if the electrons must partially jump from one gold particle to the next because they haven’t completely melded together. The distance, however, is so small that it currently doesn’t even show up using the most advanced microscopes.” In order to improve the conduction, Artur Erbe’s team aims to incorporate conductive polymers between the gold particles. The physicist believes the metallization process could also still be improved.

He is, however, generally pleased with the results: “We could demonstrate that the gold-plated DNA wires conduct energy. We are actually still in the basic research phase, which is why we are using gold rather than a more cost-efficient metal. We have, nevertheless, made an important stride, which could make electronic devices based on DNA possible in the future.”
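
The temperature behaviour Erbe describes, with the current dropping as the wire cools because electrons have to hop between gold particles that have not fully merged, is the signature of thermally activated transport. Here is a minimal sketch of a generic Arrhenius-type hopping model; the prefactor and activation energy are placeholder values of my own, not numbers taken from the Langmuir paper.

# Generic thermally activated (Arrhenius-type) hopping conductance sketch.
# G0 and the activation energy Ea are illustrative placeholders, not values
# from the paper; the point is only that G falls as T decreases.
import math

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K

def hopping_conductance(temp_k, g0_siemens=1e-6, ea_ev=0.05):
    """Conductance of a granular wire under a simple activated-hopping model."""
    return g0_siemens * math.exp(-ea_ev / (K_B_EV * temp_k))

for t in (100, 200, 300):
    print(f"T = {t:3d} K -> G = {hopping_conductance(t):.3e} S")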

Here’s a link to and a citation for the paper,

Temperature-Dependent Charge Transport through Individually Contacted DNA Origami-Based Au Nanowires by Bezu Teschome, Stefan Facsko, Tommy Schönherr, Jochen Kerbusch, Adrian Keller, and Artur Erbe. Langmuir, 2016, 32 (40), pp 10159–10165, DOI: 10.1021/acs.langmuir.6b01961, Publication Date (Web): September 14, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first Oct. 31, 2016 news item on Nanowerk describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed great advancement of the project. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016 where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three and a half year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e., nano-copper oxide based wood preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the next months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid to the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in their industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for further refinement and implemented in the final version of the software, which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events includes the New Tools and Approaches for Nanomaterial Safety Assessment conference, jointly organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper, to be held on 7 – 9 February 2017 in Malaga, Spain; the SUN-caLIBRAte Stakeholders workshop to be held on 28 February – 1 March 2017 in Venice, Italy; and the SRA Policy Forum: Risk Governance for Key Enabling Technologies to be held on 1 – 3 March 2017 in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts put towards refining the risk governance of emerging technologies through the integration of traditional risk analytic tools alongside considerations of social and economic concerns. The parallel sessions will be organized in 4 tracks:  Risk analysis of engineered nanomaterials along product lifecycle, Risks and benefits of emerging technologies used in medical applications, Challenges of governing SynBio and Biotech, and Methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims at presenting the main results achieved in the course of the organizing projects, fostering a discussion about their impact in the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and media. Accordingly, the conference topics include: Hazard assessment along the life cycle of nano-enabled products, Exposure assessment along the life cycle of nano-enabled products, Risk assessment & management, Systems biology approaches in nanosafety, Categorization & grouping of nanomaterials, Nanosafety infrastructure, and Safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland,)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholders workshop, the final version of the SUN user-friendly, software-based Decision Support System (SUNDS) for managing the environmental, economic and social impacts of nanotechnologies will be presented and discussed with its end users: industries, regulators and insurance sector representatives. The results from the discussion will be used as a foundation for the development of caLIBRAte’s Risk Governance framework for assessment and management of human and environmental risks of MN and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted till 15th November 2016.
For further information go to:
www.sra.org/riskgovernanceforum2017
http://www.nmsaconference.eu/

There you have it.

Creeping gel does ‘The Loco-Motion’

Now it’s the creeping gel’s turn, from an Oct. 24, 2016 news item on phys.org,

Directed motion seems simple to us, but the coordinated interplay of complex processes is needed, even for seemingly simple crawling motions of worms or snails. By using a gel that periodically swells and shrinks, researchers developed a model for the waves of muscular contraction and relaxation involved in crawling. As reported in the journal Angewandte Chemie, they were able to produce two types of crawling motion by using inhomogeneous irradiation.

 

Courtesy: Angewandte Chemie

An Oct. 24, 2016 Angewandte Chemie (Wiley) press release (also on EurekAlert), which originated the news item, explains further,

Crawling comes from waves that travel through muscle. These waves can travel in the same direction as the animal is crawling (direct waves), from the tail end toward the head, or in the opposite direction (retrograde waves), from the head toward the tail. While land snails use the former type of wave, earthworms and limpets use the latter. Chitons (Polyplacophora) can switch between both types of movement.

With the aid of a chemical model in the form of a self-oscillating gel, researchers working with Qingyu Gao at the China University of Mining and Technology (Jiangsu, China) and Irving R. Epstein at Brandeis University (Waltham, Massachusetts, USA) have been able to answer some of the many questions about these crawling processes.

A gel is a molecular network with liquid bound in its gaps. In this case, the liquid contains all of the ingredients needed for an oscillating chemical reaction (a “chemical clock”). The researchers incorporated one component of their reaction system into the network: a ruthenium complex. During the reaction, the ruthenium periodically switches between two oxidation states, Ru2+ and Ru3+. This switch changes the gel so that it can hold more liquid in one state than in the other, and the gel therefore swells and shrinks periodically. Like the chemical clock itself, these swollen and shrunken regions propagate through the gel as waves, similar to the waves of muscle contractions in crawling.
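For readers who like to tinker, here’s a minimal sketch of the kind of “chemical clock” being described: a two-variable Oregonator-type model, the standard reduced description of Belousov-Zhabotinsky-style oscillations with a ruthenium catalyst. This is not the authors’ model, and the parameter values are purely illustrative; the oxidized-catalyst variable is a stand-in for the Ru3+ fraction that drives the swelling.

```python
# A minimal sketch (not from the paper): a two-variable Oregonator-type
# oscillator, the standard reduced model for BZ-type "chemical clocks".
# Parameter values below are illustrative only.

def oregonator(u, v, eps=0.04, q=0.002, f=1.0):
    """Time derivatives of the activator u and the oxidized catalyst v (stand-in for Ru3+)."""
    du = (u * (1.0 - u) - f * v * (u - q) / (u + q)) / eps
    dv = u - v
    return du, dv

u, v = 0.1, 0.1
dt = 1e-4
samples = []
for step in range(500_000):            # 50 time units of simple Euler integration
    du, dv = oregonator(u, v)
    u, v = u + dt * du, v + dt * dv
    if step % 5_000 == 0:
        samples.append(v)

# v swings periodically between low and high values: the Ru2+ <-> Ru3+ switch
# that makes the gel alternately shrink and swell.
print("oxidized-catalyst fraction ranges from", min(samples), "to", max(samples))
```

In the gel, this oscillation is coupled to swelling and shrinking, and, as the press release goes on to explain, light shifts the local behaviour, which is what sets up the travelling waves.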

The complex used in this gel also changes oxidation state when irradiated with light. When the right half of the gel is irradiated more strongly than the left, the waves move from right to left, i.e., from a high- to a low-frequency region of gel oscillations. Once the difference in intensity of irradiation reaches a certain threshold, it causes a wormlike motion of the gel from left to right, retrograde wave locomotion. If the difference is increased further, the gel comes to a stop. A further increase in the difference causes the gel to move again, but in the opposite direction, i.e., direct wave locomotion. The nonuniform illumination plays a role analogous to that of anchoring segments and appendages (such as limbs and wings) during cell migration and animal locomotion, which control the direction of locomotion by strengthening direct movement and/or inhibiting the opposite movement.

By using computational models, the researchers were able to describe these processes. Within the gel, pulling forces predominate in some regions, while pushing forces predominate in others. Variations in the intensity of the irradiation lead to different changes in the friction forces and tensions within the gel. When these effects are summed, it is possible to predict in which direction a particular grid element of the gel will move.
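Here’s a toy illustration of that force-summing step; it is not the authors’ computational model, and every number, as well as the form of the friction term, is an assumption. The gel is treated as a one-dimensional chain of grid elements, each feeling elastic pushes and pulls from its neighbours plus a friction force tied to the local light intensity, and the sign of the net force predicts which way each element moves.

```python
# Toy illustration only (not the authors' model): a 1-D chain of gel elements.
# Each element feels elastic tension from its neighbours plus a friction force
# whose coefficient we let depend on local light intensity. All numbers are made up.

def net_force(positions, rest_length, stiffness, friction_coeff, velocity):
    forces = []
    for i in range(1, len(positions) - 1):
        # elastic (push/pull) contributions from the two neighbouring elements
        left = stiffness * (positions[i - 1] - positions[i] + rest_length)
        right = stiffness * (positions[i + 1] - positions[i] - rest_length)
        # friction opposes motion; assume a larger coefficient where the light is brighter
        friction = -friction_coeff[i] * velocity[i]
        forces.append(left + right + friction)
    return forces

positions = [0.0, 1.2, 2.0, 3.1, 4.0]      # element positions (arbitrary units)
velocity  = [0.0, 0.05, -0.02, 0.04, 0.0]  # current element velocities
friction  = [1.0, 1.0, 1.5, 2.0, 2.0]      # higher on the more strongly irradiated side (assumed)

for i, f in enumerate(net_force(positions, 1.0, 1.0, friction, velocity), start=1):
    print(f"element {i}: net force {f:+.3f} -> moves {'right' if f > 0 else 'left'}")
```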

One important finding from this model: special changes in the viscoelastic properties of the slime excreted by the snails and worms as they crawl are not required for locomotion, whether retrograde or direct.

Here’s a link to and a citation for the paper,

Retrograde and Direct Wave Locomotion in a Photosensitive Self-Oscillating Gel by Lin Ren, Weibing She, Prof. Dr. Qingyu Gao, Dr. Changwei Pan, Dr. Chen Ji, and Prof. Dr. Irving R. Epstein. Angewandte Chemie International Edition DOI: 10.1002/anie.201608367 Version of Record online: 13 OCT 2016

© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

For anyone curious about the song, there’s this from its Wikipedia entry (Note: Links have been removed),

“The Loco-Motion” is a 1962 pop song written by American songwriters Gerry Goffin and Carole King. “The Loco-Motion” was originally written for Dee Dee Sharp but Sharp turned the song down.[1] The song is notable for appearing in the American Top 5 three times – each time in a different decade, performed by artists from three different cultures: originally African American pop singer Little Eva in 1962 (U.S. No. 1);[2] then American band Grand Funk Railroad in 1974 (U.S. No. 1);[3] and finally Australian singer Kylie Minogue in 1988 (U.S. No. 3).[4]

The song is a popular and enduring example of the dance-song genre: much of the lyrics are devoted to a description of the dance itself, usually done as a type of line dance. However, the song came before the dance.

“The Loco-Motion” was also the second song to reach No. 1 by two different musical acts. The earlier song to do this was “Go Away Little Girl”, also written by Goffin and King. It is one of only nine songs to achieve this.

I had not realized this song had such a storied past; there’s a lot more about it in the Wikipedia entry.

Unbreakable encrypted message with key that’s shorter than the message

A Sept. 5, 2016 University of Rochester (NY state, US) news release (also on EurekAlert), makes an intriguing announcement,

Researchers at the University of Rochester have moved beyond the theoretical in demonstrating that an unbreakable encrypted message can be sent with a key that’s far shorter than the message—the first time that has ever been done.

Until now, unbreakable encrypted messages were transmitted via a system envisioned by American mathematician Claude Shannon, considered the “father of information theory.” Shannon combined his knowledge of algebra and electrical circuitry to come up with a binary system of transmitting messages that are secure under three conditions: the key is random, used only once, and at least as long as the message itself.
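The scheme Shannon described is the one-time pad. Here’s a minimal sketch in Python, included for illustration, that shows the third condition in action: the key has to be at least as long as the message it protects.

```python
# One-time pad: XOR the message with a random key that is used once and is
# at least as long as the message. This is the classical baseline that the
# Rochester work improves on with a shorter quantum key.
import secrets

def one_time_pad(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "key must be at least as long as the message"
    return bytes(b ^ k for b, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))    # fresh random key, same length as the message
ciphertext = one_time_pad(message, key)
recovered = one_time_pad(ciphertext, key)  # XORing with the same key undoes the encryption
print(ciphertext.hex(), recovered)
```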

The findings by Daniel Lum, a graduate student in physics, and John Howell, a professor of physics, have been published in the journal Physical Review A.

“Daniel’s research amounts to an important step forward, not just for encryption, but for the field of quantum data locking,” said Howell.

Quantum data locking is a method of encryption advanced by Seth Lloyd, a professor of quantum information at Massachusetts Institute of Technology, that uses photons—the smallest particles associated with light—to carry a message. Quantum data locking was thought to have limitations for securely encrypting messages, but Lloyd figured out how to make additional assumptions—namely those involving the boundary between light and matter—to make it a more secure method of sending data.

While a binary system allows for only an on or off position with each bit of information, photon waves can be altered in many more ways: the angle of tilt can be changed, the wavelength can be made longer or shorter, and the size of the amplitude can be modified. Since a photon has more variables—and there are fundamental uncertainties when it comes to quantum measurements—the quantum key for encrypting and deciphering a message can be shorter than the message itself.

Lloyd’s system remained theoretical until this year, when Lum and his team developed a device—a quantum enigma machine—that would put the theory into practice. The device takes its name from the encryption machine used by Germany during World War II, which employed a coding method that the British and Polish intelligence agencies were secretly able to crack.

Let’s assume that Alice wants to send an encrypted message to Bob. She uses the machine to generate photons that travel through free space and into a spatial light modulator (SLM), which alters the properties of the individual photons (e.g., amplitude, tilt) to encode the message into flat but tilted wavefronts that can be focused to unique points dictated by the tilt. But the SLM does one more thing: it distorts the shapes of the photons into random patterns, so that the wavefront is no longer flat and therefore no longer has a well-defined focus. Alice and Bob both know the keys that identify the scrambling operations, so Bob is able to use his own SLM to flatten the wavefront, refocus the photons, and translate the altered properties into the distinct elements of the message.
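A purely classical toy calculation can make the scramble-and-refocus idea concrete. The sketch below is an assumption-laden illustration, not the actual optical setup, and it captures none of the quantum measurement physics that provides the security: a “message” is encoded as the tilt of a wavefront, a key-seeded random phase mask scrambles it, and only someone holding the key can undo the mask and recover the focus.

```python
# Toy numerical sketch of scramble-and-refocus (illustrative only, not the real setup).
# A linear phase "tilt" across the beam makes the field focus to one spot in the
# Fourier plane; a key-seeded random phase mask destroys that focus, and applying
# the conjugate mask restores it.
import numpy as np

n = 256
x = np.arange(n)
tilt = 37                                   # the "message": which spot the beam focuses to
field = np.exp(2j * np.pi * tilt * x / n)   # flat wavefront with a linear tilt

key = 1234                                  # shared secret (assumed, for illustration)
rng = np.random.default_rng(key)
mask = np.exp(2j * np.pi * rng.random(n))   # random phase mask applied by Alice's SLM

scrambled = field * mask
unscrambled = scrambled * np.conj(mask)     # Bob's SLM applies the inverse phases

focus_without_key = np.argmax(np.abs(np.fft.fft(scrambled)))
focus_with_key = np.argmax(np.abs(np.fft.fft(unscrambled)))
print("decoded without key:", focus_without_key)   # essentially random
print("decoded with key:   ", focus_with_key)      # recovers 37
```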

Along with modifying the shape of the photons, Lum and the team made use of the uncertainty principle, which states that the more we know about one property of a particle, the less we know about another of its properties. Because of that, the researchers were able to securely lock in six bits of classical information using only one bit of an encryption key—an operation called data locking.
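Back-of-the-envelope arithmetic on that ratio, with a made-up message length:

```python
# Data-locking ratio reported above: 6 classical bits secured per key bit.
# The message length below is invented purely for illustration.
message_bits = 600
locking_key_bits = message_bits // 6    # 100 key bits with data locking
one_time_pad_key_bits = message_bits    # 600 key bits under Shannon's conditions
print(locking_key_bits, "key bits vs", one_time_pad_key_bits, "for a one-time pad")
```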

“While our device is not 100 percent secure, due to photon loss,” said Lum, “it does show that data locking in message encryption is far more than a theory.”

The ultimate goal of the quantum enigma machine is to prevent a third party—for example, someone named Eve—from intercepting and deciphering the message. A crucial principle of quantum theory is that the mere act of measuring a quantum system changes the system. As a result, Eve has only one shot at obtaining and translating the encrypted message—something that is virtually impossible, given the nearly limitless number of patterns that exist for each photon.

The paper by Lum and Howell was one of two papers published simultaneously on the same topic. The other paper, “Quantum data locking,” was from a team led by Chinese physicist Jian-Wei Pan.

“It’s highly unlikely that our free-space implementation will be useful through atmospheric conditions,” said Lum. “Instead, we have identified the use of optic fiber as a more practical route for data locking, a path Pan’s group actually started with. Regardless, the field is still in its infancy with a great deal more research needed.”

Here’s a link to and a citation for the paper,

Quantum enigma machine: Experimentally demonstrating quantum data locking by Daniel J. Lum, John C. Howell, M. S. Allman, Thomas Gerrits, Varun B. Verma, Sae Woo Nam, Cosmo Lupo, and Seth Lloyd. Phys. Rev. A, Vol. 94, Iss. 2 — August 2016 DOI: http://dx.doi.org/10.1103/PhysRevA.94.022315

©2016 American Physical Society

This paper is behind a paywall.

There is an earlier open access version of the paper by the Chinese researchers on arXiv.org,

Experimental quantum data locking by Yang Liu, Zhu Cao, Cheng Wu, Daiji Fukuda, Lixing You, Jiaqiang Zhong, Takayuki Numata, Sijing Chen, Weijun Zhang, Sheng-Cai Shi, Chao-Yang Lu, Zhen Wang, Xiongfeng Ma, Jingyun Fan, Qiang Zhang, Jian-Wei Pan. arXiv:1605.04030 [quant-ph]

The Chinese team’s later version of the paper is available here,

Experimental quantum data locking by Yang Liu, Zhu Cao, Cheng Wu, Daiji Fukuda, Lixing You, Jiaqiang Zhong, Takayuki Numata, Sijing Chen, Weijun Zhang, Sheng-Cai Shi, Chao-Yang Lu, Zhen Wang, Xiongfeng Ma, Jingyun Fan, Qiang Zhang, and Jian-Wei Pan. Phys. Rev. A, Vol. 94, Iss. 2 — August 2016 DOI: http://dx.doi.org/10.1103/PhysRevA.94.020301

©2016 American Physical Society

This version is behind a paywall.

Getting back to the folks at the University of Rochester, they have provided this image to illustrate their work,

The quantum enigma machine developed by researchers at the University of Rochester, MIT, and the National Institute of Standards and Technology. (Image by Daniel Lum/University of Rochester)