
Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although the news item/news release never really explains how it was made. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest use of neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
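For readers who want to see the mechanics behind that description, here is a minimal sketch in Python (using NumPy) of the layered training loop the release describes: inputs pass through successive layers, actual outputs are compared with expected ones, and the predictive error is reduced through repetition. The data, layer sizes and learning rate are invented for illustration; this is not code from Deltorn's paper or from any system mentioned here.

import numpy as np

rng = np.random.default_rng(0)

# Toy inputs (4 samples, 2 features) and expected outputs (the XOR pattern).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: the deeper layer works on the abstractions
# (hidden activations) produced by the first layer.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: each layer produces a more refined representation of the inputs.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Compare actual outputs to expected ones: the predictive error.
    error = output - y

    # Correct the error through repetition and optimization (gradient descent).
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * (hidden.T @ grad_output)
    W1 -= learning_rate * (X.T @ grad_hidden)

# After training, the outputs should approach the expected [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))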

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his or her work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution on what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – copyright protection must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

New principles for AI (artificial intelligence) research along with some history and a plea for a democratic discussion

For almost a month I’ve been meaning to get to this Feb. 1, 2017 essay by Andrew Maynard (director of Risk Innovation Lab at Arizona State University) and Jack Stilgoe (science policy lecturer at University College London [UCL]) on the topic of artificial intelligence and principles (Note: Links have been removed). First, a walk down memory lane,

Today [Feb. 1, 2017] in Washington DC, leading US and UK scientists are meeting to share dispatches from the frontiers of machine learning – an area of research that is creating new breakthroughs in artificial intelligence (AI). Their meeting follows the publication of a set of principles for beneficial AI that emerged from a conference earlier this year at a place with an important history.

In February 1975, 140 people – mostly scientists, with a few assorted lawyers, journalists and others – gathered at a conference centre on the California coast. A magazine article from the time by Michael Rogers, one of the few journalists allowed in, reported that most of the four days’ discussion was about the scientific possibilities of genetic modification. Two years earlier, scientists had begun using recombinant DNA to genetically modify viruses. The Promethean nature of this new tool prompted scientists to impose a moratorium on such experiments until they had worked out the risks. By the time of the Asilomar conference, the pent-up excitement was ready to burst. It was only towards the end of the conference when a lawyer stood up to raise the possibility of a multimillion-dollar lawsuit that the scientists focussed on the task at hand – creating a set of principles to govern their experiments.

The 1975 Asilomar meeting is still held up as a beacon of scientific responsibility. However, the story told by Rogers, and subsequently by historians, is of scientists motivated by a desire to head-off top down regulation with a promise of self-governance. Geneticist Stanley Cohen said at the time, ‘If the collected wisdom of this group doesn’t result in recommendations, the recommendations may come from other groups less well qualified’. The mayor of Cambridge, Massachusetts was a prominent critic of the biotechnology experiments then taking place in his city. He said, ‘I don’t think these scientists are thinking about mankind at all. I think that they’re getting the thrills and the excitement and the passion to dig in and keep digging to see what the hell they can do’.

The concern in 1975 was with safety and containment in research, not with the futures that biotechnology might bring about. A year after Asilomar, Cohen’s colleague Herbert Boyer founded Genentech, one of the first biotechnology companies. Corporate interests barely figured in the conversations of the mainly university scientists.

Fast-forward 42 years and it is clear that machine learning, natural language processing and other technologies that come under the AI umbrella are becoming big business. The cast list of the 2017 Asilomar meeting included corporate wunderkinds from Google, Facebook and Tesla as well as researchers, philosophers, and other academics. The group was more intellectually diverse than their 1975 equivalents, but there were some notable absences – no public and their concerns, no journalists, and few experts in the responsible development of new technologies.

Maynard and Stilgoe offer a critique of the latest principles,

The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and not to be developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.

This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable absences, including the need to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.

I encourage you to read this thoughtful essay in its entirety although I do have one nit to pick:  Why only US and UK scientists? I imagine the answer may lie in funding and logistics issues but I find it surprising that the critique makes no mention of the international community as a nod to inclusion.

For anyone interested in the Asilomar AI principles (2017), you can find them here. You can also find videos of the two-day workshop (Jan. 31 – Feb. 1, 2017), titled The Frontiers of Machine Learning (a Raymond and Beverly Sackler USA-UK Scientific Forum [US National Academy of Sciences]), here; videos for each session are available on YouTube.

Communicating science effectively—a December 2016 book from the US National Academy of Sciences

I stumbled across this Dec. 13, 2016  essay/book announcement by Dr. Andrew Maynard and Dr. Dietram A. Scheufele on The Conversation,

Many scientists and science communicators have grappled with disregard for, or inappropriate use of, scientific evidence for years – especially around contentious issues like the causes of global warming, or the benefits of vaccinating children. A long debunked study on links between vaccinations and autism, for instance, cost the researcher his medical license but continues to keep vaccination rates lower than they should be.

Only recently, however, have people begun to think systematically about what actually works to promote better public discourse and decision-making around what is sometimes controversial science. Of course scientists would like to rely on evidence, generated by research, to gain insights into how to most effectively convey to others what they know and do.

As it turns out, the science on how to best communicate science across different issues, social settings and audiences has not led to easy-to-follow, concrete recommendations.

About a year ago, the National Academies of Sciences, Engineering and Medicine brought together a diverse group of experts and practitioners to address this gap between research and practice. The goal was to apply scientific thinking to the process of how we go about communicating science effectively. Both of us were a part of this group (with Dietram as the vice chair).

The public draft of the group’s findings – “Communicating Science Effectively: A Research Agenda” – has just been published. In it, we take a hard look at what effective science communication means and why it’s important; what makes it so challenging – especially where the science is uncertain or contested; and how researchers and science communicators can increase our knowledge of what works, and under what conditions.

At some level, all science communication has embedded values. Information always comes wrapped in a complex skein of purpose and intent – even when presented as impartial scientific facts. Despite, or maybe because of, this complexity, there remains a need to develop a stronger empirical foundation for effective communication of and about science.

Addressing this, the National Academies draft report makes an extensive number of recommendations. A few in particular stand out:

  • Use a systems approach to guide science communication. In other words, recognize that science communication is part of a larger network of information and influences that affect what people and organizations think and do.
  • Assess the effectiveness of science communication. Yes, researchers try, but often we still engage in communication first and evaluate later. Better to design the best approach to communication based on empirical insights about both audiences and contexts. Very often, the technical risks that scientists think must be communicated have nothing to do with the hopes or concerns public audiences have.
  • Get better at meaningful engagement between scientists and others to enable that “honest, bidirectional dialogue” about the promises and pitfalls of science that our committee chair Alan Leshner and others have called for.
  • Consider social media’s impact – positive and negative.
  • Work toward better understanding when and how to communicate science around issues that are contentious, or potentially so.

The paper version of the book has a cost but you can get a free online version.  Unfortunately,  I cannot copy and paste the book’s table of contents here and was not able to find a book index although there is a handy list of reference texts.

I have taken a very quick look at the book. If you’re in the field, it’s definitely worth a look. It is, however, written for and by academics. If you look at the list of writers and reviewers, you will find over 90% are professors at one university or another. That said, I was happy to see references to Dan Kahan’s work at the Yale Law School’s Cultural Cognition Project cited. As it happens, they weren’t able to cite his latest work [***see my xxx, 2017 curiosity post***], released about a month after “Communicating Science Effectively: A Research Agenda.”

I was unable to find any reference to science communication via popular culture. I’m a little dismayed, as I feel this is a source of information seriously ignored by science communication specialists and academicians, but not by the folks at MIT (Massachusetts Institute of Technology), who announced a wireless app the same week it was featured in an episode of the US television comedy, The Big Bang Theory. Here’s more about MIT’s emotion-detection wireless app from a Feb. 1, 2017 MIT news release (also on EurekAlert),

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.

Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”
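To illustrate the design choice Alhanai describes (organizing different features at different layers rather than feeding everything in at the bottom of the network), here is a hedged sketch in Python using Keras. It is not the MIT team’s code; the feature names, sizes and layer widths are all invented for illustration.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Per five-second segment: raw physiological features and text-derived features
# (both hypothetical placeholders for whatever the real system extracts).
physio_in = tf.keras.Input(shape=(16,), name="physio_features")
text_in = tf.keras.Input(shape=(8,), name="text_features")

# Lower-level physiological signals are processed by the early layers...
x = layers.Dense(32, activation="relu")(physio_in)
x = layers.Dense(16, activation="relu")(x)

# ...while the more abstract text features join the network at a deeper layer.
x = layers.Concatenate()([x, text_in])
x = layers.Dense(16, activation="relu")(x)

# Three-way output per segment: positive / negative / neutral.
segment_out = layers.Dense(3, activation="softmax", name="segment_sentiment")(x)

model = Model(inputs=[physio_in, text_in], outputs=segment_out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

The overall happy/sad classifier mentioned in the release would be a second, similar model (or an additional output head) trained on whole-conversation features.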

Results

Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

This research was made possible in part by the Samsung Strategy and Innovation Center.

Episode 14 of season 10 of The Big Bang Theory was titled “The Emotion Detection Automation”  (full episode can be found on this webpage) and broadcast on Feb. 2, 2017. There’s also a Feb. 2, 2017 recap (recapitulation) by Lincee Ray for EW.com (it seems Ray is unaware that there really is such a machine),

Who knew we would see the day when Sheldon and Raj figured out solutions for their social ineptitudes? Only The Big Bang Theory writers would think to tackle our favorite physicists’ lack of social skills with an emotion detector and an ex-girlfriend focus group. It’s been a while since I enjoyed both storylines as much as I did in this episode. That’s no bazinga.

When Raj tells the guys that he is back on the market, he wonders out loud what is wrong with his game. Why do women reject him? Sheldon receives the information like a scientist and runs through many possible answers. Raj shuts him down with a simple, “I’m fine.”

Sheldon is irritated when he learns that this obligatory remark is a mask for what Raj is really feeling. It turns out, Raj is not fine. Sheldon whines, wondering why no one just says exactly what’s on their mind. It’s quite annoying for those who struggle with recognizing emotional cues.

Lo and behold, Bernadette recently read about a gizmo that was created for people who have this exact same anxiety. MIT has a prototype, and because Howard is an alum, he can probably submit Sheldon’s name as a beta tester.

Of course this is a real thing. If anyone can build an emotion detector, it’s a bunch of awkward scientists with zero social skills.

This is the first time I’ve noticed an academic institution’s news release appear almost simultaneously with a mention of its research in a popular culture television program, which suggests things have come a long way since I featured news about a webinar by the National Academies’ Science and Entertainment Exchange for film and television productions collaborating with scientists in an Aug. 28, 2012 post.

One last science/popular culture moment: Hidden Figures, a movie about the African American women who, working as human computers, supported NASA (US National Aeronautics and Space Administration) efforts during the 1960s space race to get a man on the moon, was (shockingly) no. 1 at the US box office for a few weeks (there’s more about the movie here in my Sept. 2, 2016 post covering then upcoming movies featuring science). After the movie was released, Mary Elizabeth Williams wrote up a Jan. 23, 2017 interview with the ‘Hidden Figures’ scriptwriter for Salon.com,

I [Allison Schroeder] got on the phone with her [co-producer Renee Witt] and Donna  [co-producer Donna Gigliotti] and I said, “You have to hire me for this; I was born to write this.” Donna sort of rolled her eyes and was like, “God, these Hollywood types would say anything.” I said, “No, no, I grew up at Cape Canaveral. My grandmother was a computer programmer at NASA, my grandfather worked on the Mercury prototype, and I interned there all through high school and then the summer after my freshman year at Stanford I interned. I worked at a missile launch company.”

She was like, “OK that’s impressive.” And I said, “No, I literally grew up climbing on the Mercury capsule — hitting all the buttons, trying to launch myself into space.”

She said, “Well do you think you can handle the math?” I said that I had to study a certain amount of math at Stanford for economics degree. She said, “Oh, all right, that sounds pretty good.”

I pitched her a few scenes. I pitched her the end of the movie that you saw with Katherine running the numbers as John Glenn is trying to get up in space. I pitched her the idea of one of the women as a mechanic and to see her legs underneath the engine. You’re used to seeing a guy like that, but what would it be like to see heels and pantyhose and a skirt and she’s a mechanic and fixing something? Those are some of the scenes that I pitched them, and I got the job.

I love that the film begins with setting up their mechanical aptitude. You set up these are women; you set up these women of color. You set up exactly what that means in this moment in history. It’s like you just go from there.

I was on a really tight timeline because this started as an indie film. It was just Donna Gigliotti, Renee Witt, me and the author Margot Lee Shetterly for about a year working on it. I was only given four weeks for research and 12 weeks for writing the first draft. I’m not sure if I hadn’t known NASA and known the culture and just knew what the machines would look like, knew what the prototypes looked like, if I could have done it that quickly. I turned in that draft and Donna was like, “OK you’ve got the math and the science; it’s all here. Now go have fun.” Then I did a few more drafts and that was really enjoyable because I could let go of the fact I did it and make sure that the characters and the drive of the story and everything just fit what needed to happen.

For anyone interested in the science/popular culture connection, David Bruggeman of the Pasco Phronesis blog does a better job than I do of keeping up with the latest doings.

Getting back to ‘Communicating Science Effectively: A Research Agenda’, even with a mention of popular culture, it is a thoughtful book on the topic.

2016 thoughts and 2017 hopes from FrogHeart

This is the 4900th post on this blog and as FrogHeart moves forward to 5000, I’m thinking there will be some changes although I’m not sure what they’ll be. In the meantime, here are some random thoughts on the year that was in Canadian science and on the FrogHeart blog.

Changeover to Liberal government: year one

Hopes were high after the Trudeau government was elected. Certainly, there seems to have been a loosening where science communication policies have been concerned although it may not have been quite the open and transparent process people dreamed of. On the plus side, it’s been easier to participate in public consultations but there has been no move (perceptible to me) towards open government science or better access to government-funded science papers.

Open Science in Québec

As far as I know, la crème de la crème of open science (internationally) is the Montreal Neurological Institute (Montreal Neuro), affiliated with McGill University. They bookended the year with two announcements. In January 2016, Montreal Neuro announced it was going to be an “Open Science” institution (my Jan. 22, 2016 posting),

The Montreal Neurological Institute (MNI) in Québec, Canada, known informally and widely as Montreal Neuro, has ‘opened’ its science research to the world. David Bruggeman tells the story in a Jan. 21, 2016 posting on his Pasco Phronesis blog (Note: Links have been removed),

The Montreal Neurological Institute (MNI) at McGill University announced that it will be the first academic research institute to become what it calls ‘Open Science.’  As Science is reporting, the MNI will make available all research results and research data at the time of publication.  Additionally it will not seek patents on any of the discoveries made on research at the Institute.

Will this catch on?  I have no idea if this particular combination of open access research data and results with no patents will spread to other university research institutes.  But I do believe that those elements will continue to spread.  More universities and federal agencies are pursuing open access options for research they support.  Elon Musk has opted to not pursue patent litigation for any of Tesla Motors’ patents, and has not pursued patents for SpaceX technology (though it has pursued litigation over patents in rocket technology). …

Then, there’s my Dec. 19, 2016 posting about this Montreal Neuro announcement,

It’s one heck of a Christmas present. Canadian businessman Larry Tanenbaum and his wife Judy have given the Montreal Neurological Institute (Montreal Neuro), which is affiliated with McGill University, a $20M donation. From a Dec. 16, 2016 McGill University news release,

The Prime Minister of Canada, Justin Trudeau, was present today at the Montreal Neurological Institute and Hospital (MNI) for the announcement of an important donation of $20 million by the Larry and Judy Tanenbaum family. This transformative gift will help to establish the Tanenbaum Open Science Institute, a bold initiative that will facilitate the sharing of neuroscience findings worldwide to accelerate the discovery of leading edge therapeutics to treat patients suffering from neurological diseases.

‟Today, we take an important step forward in opening up new horizons in neuroscience research and discovery,” said Mr. Larry Tanenbaum. ‟Our digital world provides for unprecedented opportunities to leverage advances in technology to the benefit of science.  That is what we are celebrating here today: the transformation of research, the removal of barriers, the breaking of silos and, most of all, the courage of researchers to put patients and progress ahead of all other considerations.”

Neuroscience has reached a new frontier, and advances in technology now allow scientists to better understand the brain and all its complexities in ways that were previously deemed impossible. The sharing of research findings amongst scientists is critical, not only due to the sheer scale of data involved, but also because diseases of the brain and the nervous system are amongst the most compelling unmet medical needs of our time.

Neurological diseases, mental illnesses, addictions, and brain and spinal cord injuries directly impact 1 in 3 Canadians, representing approximately 11 million people across the country.

“As internationally-recognized leaders in the field of brain research, we are uniquely placed to deliver on this ambitious initiative and reinforce our reputation as an institution that drives innovation, discovery and advanced patient care,” said Dr. Guy Rouleau, Director of the Montreal Neurological Institute and Hospital and Chair of McGill University’s Department of Neurology and Neurosurgery. “Part of the Tanenbaum family’s donation will be used to incentivize other Canadian researchers and institutions to adopt an Open Science model, thus strengthening the network of like-minded institutes working in this field.”

Chief Science Advisor

Getting back to the federal government, we’re still waiting for a Chief Science Advisor. Should you be interested in the job, apply here. The job search was launched in early Dec. 2016 (see my Dec. 7, 2016 posting for details), a little over a year after the Liberal government was elected. I’m not sure why the process is taking so long. It’s not as if the Canadian government is inventing a position or trailblazing in this regard. Many, many countries and jurisdictions have chief science advisors. Heck, the European Union managed to find its first chief science advisor in considerably less time than we’ve spent on the project. My guess is that it just wasn’t a priority.

Prime Minister Trudeau, quantum, nano, and Canada’s 150th birthday

In April 2016, Prime Minister Justin Trudeau stunned many when he was able to answer, in an articulate and informed manner, a question about quantum physics during a press conference at the Perimeter Institute in Waterloo, Ontario (see my April 18, 2016 post discussing that incident and the so-called ‘quantum valley’ in Ontario).

In Sept. 2016, the University of Waterloo publicized the world’s smallest Canadian flag to celebrate the country’s upcoming 150th birthday and to announce its presence in QUANTUM: The Exhibition (a show which will tour across Canada). Here’s more from my Sept. 20, 2016 posting,

The record-setting flag was unveiled at IQC’s [Institute of Quantum Computing at the University of Waterloo] open house on September 17 [2016], which attracted nearly 1,000 visitors. It will also be on display in QUANTUM: The Exhibition, a Canada 150 Fund Signature Initiative, and part of Innovation150, a consortium of five leading Canadian science-outreach organizations. QUANTUM: The Exhibition is a 4,000-square-foot, interactive, travelling exhibit IQC developed highlighting Canada’s leadership in quantum information science and technology.

“I’m delighted that IQC is celebrating Canadian innovation through QUANTUM: The Exhibition and Innovation150,” said Raymond Laflamme, executive director of IQC. “It’s an opportunity to share the transformative technologies resulting from Canadian research and bring quantum computing to fellow Canadians from coast to coast to coast.”

The first of its kind, the exhibition will open at THEMUSEUM in downtown Kitchener on October 14 [2016], and then travel to science centres across the country throughout 2017.

You can find the English language version of QUANTUM: The Exhibition website here and the French language version of QUANTUM: The Exhibition website here.

There are currently four other venues for the show once it finishes its run in Waterloo. From QUANTUM’S Join the Celebration webpage,

2017

  • Science World at TELUS World of Science, Vancouver
  • TELUS Spark, Calgary
  • Discovery Centre, Halifax
  • Canada Science and Technology Museum, Ottawa

I gather they’re still looking for other venues to host the exhibition. If interested, there’s this: Contact us.

Other than the flag, which is both nanoscale and microscale, they haven’t revealed what else will be included in their 4,000-square-foot exhibit, but it will be “bilingual, accessible, and interactive.” Also, there will be stories.

Hmm. The exhibition is opening in roughly three weeks and they have no details. Strategy or disorganization? Only time will tell.

Calgary and quantum teleportation

This is one of my favourite stories of the year. Scientists at the University of Calgary teleported a photon’s quantum state six kilometres from the university to city hall, setting a new distance record for quantum teleportation. What I found particularly interesting was the support for science from Calgary City Hall. Here’s more from my Sept. 21, 2016 post,

Through a collaboration between the University of Calgary, The City of Calgary and researchers in the United States, a group of physicists led by Wolfgang Tittel, professor in the Department of Physics and Astronomy at the University of Calgary have successfully demonstrated teleportation of a photon (an elementary particle of light) over a straight-line distance of six kilometres using The City of Calgary’s fibre optic cable infrastructure. The project began with an Urban Alliance seed grant in 2014.

This accomplishment, which set a new record for distance of transferring a quantum state by teleportation, has landed the researchers a spot in the prestigious Nature Photonics scientific journal. The finding was published back-to-back with a similar demonstration by a group of Chinese researchers.

The research could not be possible without access to the proper technology. One of the critical pieces of infrastructure that support quantum networking is accessible dark fibre. Dark fibre, so named because of its composition — a single optical cable with no electronics or network equipment on the alignment — doesn’t interfere with quantum technology.

The City of Calgary is building and provisioning dark fibre to enable next-generation municipal services today and for the future.

“By opening The City’s dark fibre infrastructure to the private and public sector, non-profit companies, and academia, we help enable the development of projects like quantum encryption and create opportunities for further research, innovation and economic growth in Calgary,” said Tyler Andruschak, project manager with Innovation and Collaboration at The City of Calgary.

As for the science of it (also from my post),

A Sept. 20, 2016 article by Robson Fletcher for CBC (Canadian Broadcasting Corporation) News online provides a bit more insight from the lead researcher (Note: A link has been removed),

“What is remarkable about this is that this information transfer happens in what we call a disembodied manner,” said physics professor Wolfgang Tittel, whose team’s work was published this week in the journal Nature Photonics.

“Our transfer happens without any need for an object to move between these two particles.”

A Sept. 20, 2016 University of Calgary news release by Drew Scherban, which originated the news item, provides more insight into the research,

“Such a network will enable secure communication without having to worry about eavesdropping, and allow distant quantum computers to connect,” says Tittel.

Experiment draws on ‘spooky action at a distance’

The experiment is based on the entanglement property of quantum mechanics, also known as “spooky action at a distance” — a property so mysterious that not even Einstein could come to terms with it.

“Being entangled means that the two photons that form an entangled pair have properties that are linked regardless of how far the two are separated,” explains Tittel. “When one of the photons was sent over to City Hall, it remained entangled with the photon that stayed at the University of Calgary.”

Next, the photon whose state was teleported to the university was generated in a third location in Calgary and then also travelled to City Hall where it met the photon that was part of the entangled pair.

“What happened is the instantaneous and disembodied transfer of the photon’s quantum state onto the remaining photon of the entangled pair, which is the one that remained six kilometres away at the university,” says Tittel.
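For readers who want the textbook math behind Tittel’s description, this is the standard teleportation identity (written in LaTeX; the labels are mine and this is the generic protocol, not a transcription of the Calgary group’s setup). Photon C carries the unknown state, and photons A and B form the entangled pair:

\begin{align*}
|\psi\rangle_C &= \alpha|0\rangle + \beta|1\rangle, \qquad
|\Phi^+\rangle_{AB} = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big) \\
|\psi\rangle_C \otimes |\Phi^+\rangle_{AB}
  &= \tfrac{1}{2}\Big[\, |\Phi^+\rangle_{CA}\,(\alpha|0\rangle + \beta|1\rangle)_B
   + |\Phi^-\rangle_{CA}\,(\alpha|0\rangle - \beta|1\rangle)_B \\
  &\qquad\; + |\Psi^+\rangle_{CA}\,(\alpha|1\rangle + \beta|0\rangle)_B
   + |\Psi^-\rangle_{CA}\,(\alpha|1\rangle - \beta|0\rangle)_B \,\Big]
\end{align*}

A joint Bell-state measurement on photons C and A (at City Hall) leaves photon B, six kilometres away at the university, in the original state up to a known, correctable flip, which is why nothing carrying the state has to travel between the two locations.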

Council of Canadian Academies and The State of Science and Technology and Industrial Research and Development in Canada

Preliminary data was released by the CCA’s expert panel in mid-December 2016. I reviewed that material briefly in my Dec. 15, 2016 post but am eagerly awaiting the full report, due in late 2017, when, hopefully, I’ll have the time to critique it; I also hope it will have more surprises and offer greater insights than the preliminary report did.

Colleagues

Thank you to my online colleagues. While we don’t interact much it’s impossible to estimate how encouraging it is to know that these people continually participate and help create the nano and/or science blogosphere.

David Bruggeman at his Pasco Phronesis blog keeps me up to date on science policy in the US, Canada, and internationally, as well as keeping me abreast of the performing arts/science scene. Also, kudos to David for raising my (and his audience’s) awareness of just how much science is discussed on late night US television. I don’t know how he does it, but he keeps scooping me on Canadian science policy matters. Thankfully, I’m not bitter and hope he continues to scoop me, which will mean that I will get the information from somewhere, since it won’t be from the Canadian government.

Tim Harper of Cientifica Research keeps me on my toes as he keeps shifting his focus. Most lately, it’s been on smart textiles and wearables. You can download his latest White Paper titled, Fashion, Smart Textiles, Wearables and Disappearables, from his website. Tim consults on nanotechnology and other emerging technologies at the international level.

Dexter Johnson of the Nanoclast blog on the IEEE (Institute of Electrical and Electronics Engineers) website consistently provides informed insight into how a particular piece of research fits into the nano scene and often provides historical details that you’re not likely to get from anyone else.

Dr. Andrew Maynard is currently the founding Director of the Risk Innovation Lab at Arizona State University. I know him through his 2020 Science blog, where he posts text and videos on many topics including emerging technologies, nanotechnologies, risk, science communication, and much more. Do check out 2020 Science as it is a treasure trove.

2017 hopes and dreams

I hope Canada’s Chief Science Advisor brings some fresh thinking to science in government and that the Council of Canadian Academies’ upcoming assessment on The State of Science and Technology and Industrial Research and Development in Canada is visionary. Also, let’s send up some collective prayers for the Canada Science and Technology Museum which has been closed since 2014 (?) due to black mold (?). It would be lovely to see it open in time for Canada’s 150th anniversary.

I’d like to see the nanotechnology promise come closer to a reality, which benefits as many people as possible.

As for me and FrogHeart, I’m not sure about the future. I do know there’s one more Steep project (I’m working with Raewyn Turner on a multiple project endeavour known as Steep; this project will involve sound and gold nanoparticles).

Should anything sparkling occur to me, I will add it at a future date.

In the meantime, Happy New Year and thank you from the bottom of my heart for reading this blog!

Breathing nanoparticles into your brain

Thanks to Dexter Johnson and his Sept. 8, 2016 posting (on the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) for bringing this news about nanoparticles in the brain to my attention (Note: Links have been removed),

An international team of researchers, led by Barbara Maher, a professor at Lancaster University, in England, has found evidence that suggests that the nanoparticles first detected in the human brain over 20 years ago may have an external rather than an internal source.

These magnetite nanoparticles are airborne particulates that are abundant in urban environments and are formed by combustion or friction-derived heating. In other words, they have been part of the pollution in the air of our cities since the dawn of the Industrial Revolution.

However, according to Andrew Maynard, a professor at Arizona State University, and a noted expert on the risks associated with nanomaterials,  the research indicates that this finding extends beyond magnetite to any airborne nanoscale particles—including those deliberately manufactured.

“The findings further support the possibility of these particles entering the brain via the olfactory nerve if inhaled.  In this respect, they are certainly relevant to our understanding of the possible risks presented by engineered nanomaterials—especially those that are iron-based and have magnetic properties,” said Maynard in an e-mail interview with IEEE Spectrum. “However, ambient exposures to airborne nanoparticles will typically be much higher than those associated with engineered nanoparticles, simply because engineered nanoparticles will usually be manufactured and handled under conditions designed to avoid release and exposure.”

A Sept. 5, 2016 Lancaster University press release made the research announcement,

Researchers at Lancaster University found abundant magnetite nanoparticles in brain tissue from 37 individuals, aged three to 92, who lived in Mexico City and Manchester. This strongly magnetic mineral is toxic and has been implicated in the production of reactive oxygen species (free radicals) in the human brain, which are associated with neurodegenerative diseases including Alzheimer’s disease.

Professor Barbara Maher, from Lancaster Environment Centre, and colleagues (from Oxford, Glasgow, Manchester and Mexico City) used spectroscopic analysis to identify the particles as magnetite. Unlike angular magnetite particles that are believed to form naturally within the brain, most of the observed particles were spherical, with diameters up to 150 nm, some with fused surfaces, all characteristic of high-temperature formation – such as from vehicle (particularly diesel) engines or open fires.

The spherical particles are often accompanied by nanoparticles containing other metals, such as platinum, nickel, and cobalt.

Professor Maher said: “The particles we found are strikingly similar to the magnetite nanospheres that are abundant in the airborne pollution found in urban settings, especially next to busy roads, and which are formed by combustion or frictional heating from vehicle engines or brakes.”

Other sources of magnetite nanoparticles include open fires and poorly sealed stoves within homes. Particles smaller than 200 nm are small enough to enter the brain directly through the olfactory nerve after breathing air pollution through the nose.

“Our results indicate that magnetite nanoparticles in the atmosphere can enter the human brain, where they might pose a risk to human health, including conditions such as Alzheimer’s disease,” added Professor Maher.

Leading Alzheimer’s researcher Professor David Allsop, of Lancaster University’s Faculty of Health and Medicine, said: “This finding opens up a whole new avenue for research into a possible environmental risk factor for a range of different brain diseases.”

Damian Carrington’s Sept. 5, 2016 article for the Guardian provides a few more details,

“They [the troubling magnetite particles] are abundant,” she [Maher] said. “For every one of [the crystal shaped particles] we saw about 100 of the pollution particles. The thing about magnetite is it is everywhere.” An analysis of roadside air in Lancaster found 200m magnetite particles per cubic metre.

Other scientists told the Guardian the new work provided strong evidence that most of the magnetite in the brain samples come from air pollution but that the link to Alzheimer’s disease remained speculative.

For anyone who might be concerned about health risks, there’s this from Andrew Maynard’s comments in Dexter Johnson’s Sept. 8, 2016 posting,

“In most workplaces, exposure to intentionally made nanoparticles is likely be small compared to ambient nanoparticles, and so it’s reasonable to assume—at least without further data—that this isn’t a priority concern for engineered nanomaterial production,” said Maynard.

While deliberate nanoscale manufacturing may not carry much risk, Maynard does believe that the research raises serious questions about other manufacturing processes where exposure to high concentrations of airborne nanoscale iron particles is common—such as welding, gouging, or working with molten ore and steel.

It seems everyone agrees that the findings are concerning, but it might be good to remember that the number of people who develop Alzheimer’s Disease is much smaller than the number of people who have these magnetite particles in their brains. In other words, the particles might (we don’t yet know) be one contributing factor, and there would likely have to be one or more additional factors to create the conditions for developing Alzheimer’s.

Here’s a link to and a citation for the paper,

Magnetite pollution nanoparticles in the human brain by Barbara A. Maher, Imad A. M. Ahmed, Vassil Karloukovski, Donald A. MacLaren, Penelope G. Foulds, David Allsop, David M. A. Mann, Ricardo Torres-Jardón, and Lilian Calderon-Garciduenas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1605941113

This paper is behind a paywall but Dexter’s posting offers more detail for those who are still curious.

June 2016: time for a post on nanosunscreens—risks and perceptions

In the years since this blog began (2006), there’ve been pretty regular postings about nanosunscreens. While there are always concerns about nanoparticles and health, there has been no evidence to support a ban (personal or governmental) on nanosunscreens. A June 2016 report by Paul FA Wright (full reference information to follow) in an Australian medical journal provides the latest insights on the safety of nanosunscreens. Wright first offers a general introduction to risks and nanomaterials (Note: Links have been removed),

In reality, a one-size-fits-all approach to evaluating the potential risks and benefits of nanotechnology for human health is not possible because it is both impractical and would be misguided. There are many types of engineered nanomaterials, and not all are alike or potential hazards. Many factors should be considered when evaluating the potential risks associated with an engineered nanomaterial: the likelihood of being exposed to nanoparticles (ranging in size from 1 to 100 nanometres, about one-thousandth of the width of a human hair) that may be shed by the nanomaterial; whether there are any hotspots of potential exposure to shed nanoparticles over the whole of the nanomaterial’s life cycle; identifying who or what may be exposed; the eventual fate of the shed nanoparticles; and whether there is a likelihood of adverse biological effects arising from these exposure scenarios.1

The intrinsic toxic properties of compounds contained in the nanoparticle are also important, as well as particle size, shape, surface charge and physico-chemical characteristics, as these greatly influence their uptake by cells and the potential for subsequent biological effects. In summary, nanoparticles are more likely to have higher toxicity than bulk material if they are insoluble, penetrate biological membranes, persist in the body, or (where exposure is by inhalation) are long and fibre-like.1 Ideally, nanomaterial development should incorporate a safety-by-design approach, as there is a marketing edge for nano-enabled products with a reduced potential impact on health and the environment.1

Wright also covers some of nanotechnology’s hoped-for benefits, but it’s nanosunscreens that are the main focus of this paper (Note: Links have been removed),

Public perception of the potential risks posed by nanotechnology is very different in certain regions. In Asia, where there is a very positive perception of nanotechnology, some products have been marketed as being nano-enabled to justify charging a premium price. This has resulted in at least four Asian economies adopting state-operated, user-financed product testing schemes to verify nano-related marketing claims, such as the original “nanoMark” certification system in Taiwan.4

In contrast, the negative perception of nanotechnology in some other regions may result in questionable marketing decisions; for example, reducing the levels of zinc oxide nanoparticles included as the active ingredient in sunscreens. This is despite their use in sunscreens having been extensively and repeatedly assessed for safety by regulatory authorities around the world, leading to their being widely accepted as safe to use in sunscreens and lip products.5

Wright goes on to describe the situation in Australia (Note: Links have been removed),

Weighing the potential risks and benefits of using sunscreens with UV-filtering nanoparticles is an important issue for public health in Australia, which has the highest rate of skin cancer in the world as the result of excessive UV exposure. Some consumers are concerned about using these nano-sunscreens,6 despite their many advantages over conventional organic chemical UV filters, which can cause skin irritation and allergies, need to be re-applied more frequently, and are absorbed by the skin to a much greater extent (including some with potentially endocrine-disrupting activity). Zinc oxide nanoparticles are highly suitable for use in sunscreens as a physical broad spectrum UV filter because of their UV stability, non-irritating nature, hypo-allergenicity and visible transparency, while also having a greater UV-attenuating capacity than bulk material (particles larger than 100 nm in diameter) on a per weight basis.7

Concerns about nano-sunscreens began in 2008 with a report that nanoparticles in some could bleach the painted surfaces of coated steel.8 This is a completely different exposure situation to the actual use of nano-sunscreen by people; here they are formulated to remain on the skin’s surface, which is constantly shedding its outer layer of dead cells (the stratum corneum). Many studies have shown that metal oxide nanoparticles do not readily penetrate the stratum corneum of human skin, including a hallmark Australian investigation by Gulson and co-workers of sunscreens containing only a less abundant stable isotope of zinc that allowed precise tracking of the fate of sunscreen zinc.9 The researchers found that there was little difference between nanoparticle and bulk zinc oxide sunscreens in the amount of zinc absorbed into the body after repeated skin application during beach trials. The amount absorbed was also extremely small when compared with the normal levels of zinc required as an essential mineral for human nutrition, and the rate of skin absorption was much lower than that of the more commonly used chemical UV filters.9 Animal studies generally find much higher skin absorption of zinc from dermal application of zinc oxide sunscreens than do human studies, including the meticulous studies in hairless mice conducted by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) using both nanoparticle and bulk zinc oxide sunscreens that contained the less abundant stable zinc isotope.10 These researchers reported that the zinc absorbed from sunscreen was distributed throughout several major organs, but it did not alter their total zinc concentrations, and that overall zinc homeostasis was maintained.10

He then discusses titanium dioxide nanoparticles, which are also used in nanosunscreens (Note: Links have been removed),

The other metal oxide UV filter is titanium dioxide. Two distinct crystalline forms have been used: the photo-active anatase form and the much less photo-active rutile form,7 which is preferable for sunscreen formulations. While these insoluble nanoparticles may penetrate deeper into the stratum corneum than zinc oxide, they are also widely accepted as being safe to use in non-sprayable sunscreens.11

Investigation of their direct effects on human skin and immune cells have shown that sunscreen nanoparticles of zinc oxide and rutile titanium dioxide are as well tolerated as zinc ions and conventional organic chemical UV filters in human cell test systems.12 Synchrotron X-ray fluorescence imaging has also shown that human immune cells break down zinc oxide nanoparticles similar to those in nano-sunscreens, indicating that immune cells can handle such particles.13 Cytotoxicity occurred only at very high concentrations of zinc oxide nanoparticles, after cellular uptake and intracellular dissolution,14 and further modification of the nanoparticle surface can be used to reduce both uptake by cells and consequent cytotoxicity.15

The ongoing debate about the safety of nanoparticles in sunscreens raised concerns that they may potentially increase free radical levels in human skin during co-exposure to UV light.6 On the contrary, we have seen that zinc oxide and rutile titanium dioxide nanoparticles directly reduce the quantity of damaging free radicals in human immune cells in vitro when they are co-exposed to the more penetrating UV-A wavelengths of sunlight.16 We also identified zinc-containing nanoparticles that form immediately when dissolved zinc ions are added to cell culture media and pure serum, which suggests that they may even play a role in natural zinc transport.17

Here’s a link to and a citation for Wright’s paper,

Potential risks and benefits of nanotechnology: perceptions of risk in sunscreens by Paul FA Wright. Med J Aust 2016; 204 (10): 369-370. doi:10.5694/mja15.01128 Published June 6, 2016

This paper appears to be open access.

The situation regarding perceptions of nanosunscreens in Australia was rather unfortunate, as I noted in my Feb. 9, 2012 posting about a then-recent government study which showed that some Australians were avoiding all sunscreens due to fears about nanoparticles. Since then, Friends of the Earth seems to have moderated its stance on nanosunscreens, but there is a July 20, 2010 posting (which includes links to a back-and-forth exchange between Dr. Andrew Maynard and Friends of the Earth representatives) that provides insight into the ‘debate’ prior to the 2012 ‘debacle’. For a briefer overview of the situation, you could check out my Oct. 4, 2012 posting.

Nanoparticles in baby formula

Needle-like particles of hydroxyapatite found in infant formula by ASU [Arizona State University] researchers. Westerhoff and Schoepf/ASU, CC BY-ND

Nanowerk is featuring an essay about hydroxyapatite nanoparticles in baby formula written by Dr. Andrew Maynard in a May 17, 2016 news item (Note: A link has been removed),

There’s a lot of stuff you’d expect to find in baby formula: proteins, carbs, vitamins, essential minerals. But parents probably wouldn’t anticipate finding extremely small, needle-like particles. Yet this is exactly what a team of scientists here at Arizona State University [ASU] recently discovered.

The research, commissioned and published by Friends of the Earth (FoE) – an environmental advocacy group – analyzed six commonly available off-the-shelf baby formulas (liquid and powder) and found nanometer-scale needle-like particles in three of them. The particles were made of hydroxyapatite – a poorly soluble calcium-rich mineral. Manufacturers use it to regulate acidity in some foods, and it’s also available as a dietary supplement.

Andrew’s May 17, 2016 essay first appeared on The Conversation website,

Looking at these particles at super-high magnification, it’s hard not to feel a little anxious about feeding them to a baby. They appear sharp and dangerous – not the sort of thing that has any place around infants. …

… questions like “should infants be ingesting them?” make a lot of sense. However, as is so often the case, the answers are not quite so straightforward.

Andrew begins by explaining about calcium and hydroxyapatite (from The Conversation),

Calcium is an essential part of a growing infant’s diet, and is a legally required component in formula. But not necessarily in the form of hydroxyapatite nanoparticles.

Hydroxyapatite is a tough, durable mineral. It’s naturally made in our bodies as an essential part of bones and teeth – it’s what makes them so strong. So it’s tempting to assume the substance is safe to eat. But just because our bones and teeth are made of the mineral doesn’t automatically make it safe to ingest outright.

The issue here is what the hydroxyapatite in formula might do before it’s digested, dissolved and reconstituted inside babies’ bodies. The size and shape of the particles ingested has a lot to do with how they behave within a living system.

He then discusses size and shape, which are important at the nanoscale,

Size and shape can make a difference between safe and unsafe when it comes to particles in our food. Small particles aren’t necessarily bad. But they can potentially get to parts of our body that larger ones can’t reach. Think through the gut wall, into the bloodstream, and into organs and cells. Ingested nanoscale particles may be able to interfere with cells – even beneficial gut microbes – in ways that larger particles don’t.

These possibilities don’t necessarily make nanoparticles harmful. Our bodies are pretty well adapted to handling naturally occurring nanoscale particles – you probably ate some last time you had burnt toast (carbon nanoparticles), or poorly washed vegetables (clay nanoparticles from the soil). And of course, how much of a material we’re exposed to is at least as important as how potentially hazardous it is.

Yet there’s a lot we still don’t know about the safety of intentionally engineered nanoparticles in food. Toxicologists have started paying close attention to such particles, just in case their tiny size makes them more harmful than otherwise expected.

Currently, hydroxyapatite is considered safe at the macroscale by the US Food and Drug Administration (FDA). However, the agency has indicated that nanoscale versions of safe materials such as hydroxyapatite may not be safe food additives. From Andrew’s May 17, 2016 essay,

Putting particle size to one side for a moment, hydroxyapatite is classified by the US Food and Drug Administration (FDA) as “Generally Regarded As Safe.” That means it considers the material safe for use in food products – at least in a non-nano form. However, the agency has raised concerns that nanoscale versions of food ingredients may not be as safe as their larger counterparts.

Some manufacturers may be interested in the potential benefits of “nanosizing” – such as increasing the uptake of vitamins and minerals, or altering the physical, textural and sensory properties of foods. But because decreasing particle size may also affect product safety, the FDA indicates that intentionally nanosizing already regulated food ingredients could require regulatory reevaluation.

In other words, even though non-nanoscale hydroxyapatite is “Generally Regarded As Safe,” according to the FDA, the safety of any nanoscale form of the substance would need to be reevaluated before being added to food products.

Despite this size-safety relationship, the FDA confirmed to me that the agency is unaware of any food substance intentionally engineered at the nanoscale that has enough generally available safety data to determine it should be “Generally Regarded As Safe.”

Casting further uncertainty on the use of nanoscale hydroxyapatite in food, a 2015 report from the European Scientific Committee on Consumer Safety (SCCS) suggests there may be some cause for concern when it comes to this particular nanomaterial.

Prompted by the use of nanoscale hydroxyapatite in dental products to strengthen teeth (which they consider “cosmetic products”), the SCCS reviewed published research on the material’s potential to cause harm. Their conclusion?

The available information indicates that nano-hydroxyapatite in needle-shaped form is of concern in relation to potential toxicity. Therefore, needle-shaped nano-hydroxyapatite should not be used in cosmetic products.

This recommendation was based on a handful of studies, none of which involved exposing people to the substance. Researchers injected hydroxyapatite needles directly into the bloodstream of rats. Others exposed cells outside the body to the material and observed the effects. In each case, there were tantalizing hints that the small particles interfered in some way with normal biological functions. But the results were insufficient to indicate whether the effects were meaningful in people.

As Andrew also notes in his essay, none of the studies examined by the SCCS (European Scientific Committee on Consumer Safety) looked at what happens to nano-hydroxyapatite once it enters your gut, and that is what the researchers at Arizona State University were considering (from the May 17, 2016 essay),

The good news is that, according to preliminary studies from ASU researchers, hydroxyapatite needles don’t last long in the digestive system.

This research is still being reviewed for publication. But early indications are that as soon as the needle-like nanoparticles hit the highly acidic fluid in the stomach, they begin to dissolve. So fast in fact, that by the time they leave the stomach – an exceedingly hostile environment – they are no longer the nanoparticles they started out as.

These findings make sense since we know hydroxyapatite dissolves in acids, and small particles typically dissolve faster than larger ones. So maybe nanoscale hydroxyapatite needles in food are safer than they sound.

This doesn’t mean that the nano-needles are completely off the hook, as some of them may get past the stomach intact and reach more vulnerable parts of the gut. But the findings do suggest these ultra-small needle-like particles could be an effective source of dietary calcium – possibly more so than larger or less needle-like particles that may not dissolve as quickly.

Intriguingly, recent research has indicated that calcium phosphate nanoparticles form naturally in our stomachs and go on to be an important part of our immune system. It’s possible that rapidly dissolving hydroxyapatite nano-needles are actually a boon, providing raw material for these natural and essential nanoparticles.

While it’s comforting to know that preliminary research suggests the hydroxyapatite nanoparticles are likely safe for use in food products, Andrew points out that more needs to be done to ensure safety (from the May 17, 2016 essay),

And yet, even if these needle-like hydroxyapatite nanoparticles in infant formula are ultimately a good thing, the FoE report raises a number of unresolved questions. Did the manufacturers knowingly add the nanoparticles to their products? How are they and the FDA ensuring the products’ safety? Do consumers have a right to know when they’re feeding their babies nanoparticles?

Whether the manufacturers knowingly added these particles to their formula is not clear. At this point, it’s not even clear why they might have been added, as hydroxyapatite does not appear to be a substantial source of calcium in most formula. …

And regardless of the benefits and risks of nanoparticles in infant formula, parents have a right to know what’s in the products they’re feeding their children. In Europe, food ingredients must be legally labeled if they are nanoscale. In the U.S., there is no such requirement, leaving American parents to feel somewhat left in the dark by producers, the FDA and policy makers.

As far as I’m aware, the Canadian situation is much the same as in the US. If the material is considered safe at the macroscale, there is no requirement to indicate that a nanoscale version of the material is in the product.

I encourage you to read Andrew’s essay in its entirety. As for the FoE report (Nanoparticles in baby formula: Tiny new ingredients are a big concern), that is here.

Not enough talk about nano risks?

It’s not often that a controversy amongst visual artists intersects with a story about carbon nanotubes, risk, and the roles that scientists play in public discourse.

Nano risks

Dr. Andrew Maynard, Director of the Risk Innovation Lab at Arizona State University, opens the discussion in a March 29, 2016 article for the appropriately named website, The Conversation (Note: Links have been removed),

Back in 2008, carbon nanotubes – exceptionally fine tubes made up of carbon atoms – were making headlines. A new study from the U.K. had just shown that, under some conditions, these long, slender fiber-like tubes could cause harm in mice in the same way that some asbestos fibers do.

As a collaborator in that study, I was at the time heavily involved in exploring the risks and benefits of novel nanoscale materials. Back then, there was intense interest in understanding how materials like this could be dangerous, and how they might be made safer.

Fast forward to a few weeks ago, when carbon nanotubes were in the news again, but for a very different reason. This time, there was outrage not over potential risks, but because the artist Anish Kapoor had been given exclusive rights to a carbon nanotube-based pigment – claimed to be one of the blackest pigments ever made.

The worries that even nanotech proponents had in the early 2000s about possible health and environmental risks – and their impact on investor and consumer confidence – seem to have evaporated.

I had covered the carbon nanotube-based coating in a March 14, 2016 posting here,

Surrey NanoSystems (UK) is billing their Vantablack as the world’s blackest coating and they now have a new product in that line according to a March 10, 2016 company press release (received via email),

A whole range of products can now take advantage of Vantablack’s astonishing characteristics, thanks to the development of a new spray version of the world’s blackest coating material. The new substance, Vantablack S-VIS, is easily applied at large scale to virtually any surface, whilst still delivering the proven performance of Vantablack.

Oddly, the company news release notes Vantablack S-VIS could be used in consumer products while including the recommendation that it not be used in products where physical contact or abrasion is possible,

… Its ability to deceive the eye also opens up a range of design possibilities to enhance styling and appearance in luxury goods and jewellery [emphasis mine].

… “We are continuing to develop the technology, and the new sprayable version really does open up the possibility of applying super-black coatings in many more types of airborne or terrestrial applications. Possibilities include commercial products such as cameras, [emphasis mine] equipment requiring improved performance in a smaller form factor, as well as differentiating the look of products by means of the coating’s unique aesthetic appearance. It’s a major step forward compared with today’s commercial absorber coatings.”

The structured surface of Vantablack S-VIS means that it is not recommended for applications where it is subject to physical contact or abrasion. [emphasis mine] Ideally, it should be applied to surfaces that are protected, either within a packaged product, or behind a glass or other protective layer.

Presumably Surrey NanoSystems is looking at ways to make its Vantablack S-VIS capable of being used in products such as jewellery, cameras, and other consumer products where physical contact and abrasion are a strong possibility.

Andrew has pointed questions about using Vantablack S-VIS in new applications (from his March 29, 2016 article; Note: Links have been removed),

The original Vantablack was a specialty carbon nanotube coating designed for use in space, to reduce the amount of stray light entering space-based optical instruments. It was this far remove from any people that made Vantablack seem pretty safe. Whatever its toxicity, the chances of it getting into someone’s body were vanishingly small. It wasn’t nontoxic, but the risk of exposure was minuscule.

In contrast, Vantablack S-VIS is designed to be used where people might touch it, inhale it, or even (unintentionally) ingest it.

To be clear, Vantablack S-VIS is not comparable to asbestos – the carbon nanotubes it relies on are too short, and too tightly bound together to behave like needle-like asbestos fibers. Yet its combination of novelty, low density and high surface area, together with the possibility of human exposure, still raise serious risk questions.

For instance, as an expert in nanomaterial safety, I would want to know how readily the spray – or bits of material dislodged from surfaces – can be inhaled or otherwise get into the body; what these particles look like; what is known about how their size, shape, surface area, porosity and chemistry affect their ability to damage cells; whether they can act as “Trojan horses” and carry more toxic materials into the body; and what is known about what happens when they get out into the environment.

Risk and the roles that scientists play

Andrew makes his point and holds various groups to account (from his March 29, 2016 article; Note: Links have been removed),

… in the case of Vantablack S-VIS, there’s been a conspicuous absence of such nanotechnology safety experts in media coverage.

This lack of engagement isn’t too surprising – publicly commenting on emerging topics is something we rarely train, or even encourage, our scientists to do.

And yet, where technologies are being commercialized at the same time their safety is being researched, there’s a need for clear lines of communication between scientists, users, journalists and other influencers. Otherwise, how else are people to know what questions they should be asking, and where the answers might lie?

In 2008, initiatives existed such as those at the Center for Biological and Environmental Nanotechnology (CBEN) at Rice University and the Project on Emerging Nanotechnologies (PEN) at the Woodrow Wilson International Center for Scholars (where I served as science advisor) that took this role seriously. These and similar programs worked closely with journalists and others to ensure an informed public dialogue around the safe, responsible and beneficial uses of nanotechnology.

In 2016, there are no comparable programs, to my knowledge – both CBEN and PEN came to the end of their funding some years ago.

Some of the onus here lies with scientists themselves to make appropriate connections with developers, consumers and others. But to do this, they need the support of the institutions they work in, as well as the organizations who fund them. This is not a new idea – there is of course a long and ongoing debate about how to ensure academic research can benefit ordinary people.

Media and risk

As mainstream media such as newspapers and broadcast news continue to suffer losses in audience numbers, the situation vis-à-vis science journalism has changed considerably since 2008. Finding information is more of a challenge, even for the interested.

As for those who might be interested, the chances of catching their attention are considerably slimmer than they once were. For example, some years ago scientists claimed to have achieved ‘cold fusion’ and there were television interviews (on the 60 Minutes TV programme, amongst others) and cover stories in Time and Newsweek, which you could find in the grocery checkout line. You didn’t have to look for the story; in fact, it was difficult to avoid. Sadly, the scientists had oversold and misrepresented their findings, and that too was extensively covered in mainstream media. The news cycle went on for months. Something similar happened in 2010 with ‘arsenic life’. There was much excitement and then it became clear that the scientists had overstated and misrepresented their findings. That news cycle was completed within three weeks or fewer, and most members of the public were unaware of it. Media saturation is no longer what it used to be.

Innovative outreach needs to be part of the discussion and perhaps the Vantablack S-VIS controversy amongst artists can be viewed through that lens.

Anish Kapoor and his exclusive rights to Vantablack

According to a Feb. 29, 2016 article by Henri Neuendorf for artnet news, there is some consternation regarding the internationally known artist Anish Kapoor and a deal he has made with Surrey NanoSystems, the makers of Vantablack in all its iterations (Note: Links have been removed),

Anish Kapoor provoked the fury of fellow artists by acquiring the exclusive rights to the blackest black in the world.

The Indian-born British artist has been working and experimenting with the “super black” paint since 2014 and has recently acquired exclusive rights to the pigment according to reports by the Daily Mail.

The artist clearly knows the value of this innovation for his work. “I’ve been working in this area for the last 30 years or so with all kinds of materials but conventional materials, and here’s one that does something completely different,” he said, adding “I’ve always been drawn to rather exotic materials.”

This description from his Wikipedia entry gives some idea of Kapoor’s stature (Note: Links have been removed),

Sir Anish Kapoor, CBE RA (Hindi: अनीश कपूर, Punjabi: ਅਨੀਸ਼ ਕਪੂਰ), (born 12 March 1954) is a British-Indian sculptor. Born in Bombay,[1][2] Kapoor has lived and worked in London since the early 1970s when he moved to study art, first at the Hornsey College of Art and later at the Chelsea School of Art and Design.

He represented Britain in the XLIV Venice Biennale in 1990, when he was awarded the Premio Duemila Prize. In 1991 he received the Turner Prize and in 2002 received the Unilever Commission for the Turbine Hall at Tate Modern. Notable public sculptures include Cloud Gate (colloquially known as “the Bean”) in Chicago’s Millennium Park; Sky Mirror, exhibited at the Rockefeller Center in New York City in 2006 and Kensington Gardens in London in 2010;[3] Temenos, at Middlehaven, Middlesbrough; Leviathan,[4] at the Grand Palais in Paris in 2011; and ArcelorMittal Orbit, commissioned as a permanent artwork for London’s Olympic Park and completed in 2012.[5]

Kapoor received a Knighthood in the 2013 Birthday Honours for services to visual arts. He was awarded an honorary doctorate degree from the University of Oxford in 2014.[6] [7] In 2012 he was awarded Padma Bhushan by Congress led Indian government which is India’s 3rd highest civilian award.[8]

Artists can be cutthroat but they can also be prankish. Take a look at this image of Kapoor and note the blue background,

Artist Anish Kapoor is known for the rich pigments he uses in his work. (Image: Andrew Winning/Reuters)

I don’t know why or when this image (used to illustrate Andrew’s essay) was taken, so it may be coincidental, but the background brings to mind Yves Klein and his International Klein Blue (IKB) pigment. From the IKB Wikipedia entry,

L’accord bleu (RE 10), 1960, mixed media piece by Yves Klein featuring IKB pigment on canvas and sponges Jaredzimmerman (WMF) – Foundation Stedelijk Museum Amsterdam Collection

Here’s more from the IKB Wikipedia entry (Note: Links have been removed),

International Klein Blue (IKB) was developed by Yves Klein in collaboration with Edouard Adam, a Parisian art paint supplier whose shop is still in business on the Boulevard Edgar-Quinet in Montparnasse.[1] The uniqueness of IKB does not derive from the ultramarine pigment, but rather from the matte, synthetic resin binder in which the color is suspended, and which allows the pigment to maintain as much of its original qualities and intensity of color as possible.[citation needed] The synthetic resin used in the binder is a polyvinyl acetate developed and marketed at the time under the name Rhodopas M or M60A by the French pharmaceutical company Rhône-Poulenc.[2] Adam still sells the binder under the name “Médium Adam 25.”[1]

In May 1960, Klein deposited a Soleau envelope, registering the paint formula under the name International Klein Blue (IKB) at the Institut national de la propriété industrielle (INPI),[3] but he never patented IKB. Only valid under French law, a soleau enveloppe registers the date of invention, according to the depositor, prior to any legal patent application. The copy held by the INPI was destroyed in 1965. Klein’s own copy, which the INPI returned to him duly stamped is still extant.[4]

In short, it’s not the first time an artist has ‘owned’ a colour. Kapoor is not a performance artist as Klein was, but his sculptural work lends itself to spectacle and to stimulating public discourse. As to whether or not this is a prank, I cannot say, but it has stimulated a discourse that ranges from intellectual property and artists to the risks of carbon nanotubes and the role scientists could play in discussing the risks associated with emerging technologies.

Regardless of how it was intended, bravo to Kapoor.

More reading

Andrew’s March 29, 2016 article has also been reproduced on Nanowerk and Slate.

Jonathan Jones has written about Kapoor and the Vantablack controversy in a Feb. 29, 2016 article for The Guardian titled: Can an artist ever really own a colour?

Swinging from 2015 to 2016 with FrogHeart

On Thursday, Dec. 31, 2015, the bear ate me (borrowed from Joan Armatrading’s song “Eating the bear”) or, to put it another way, I had a meltdown when I lost more than half of a post that I’d worked on for hours.

There’s been a problem dogging me for some months. I will write up something and save it as a draft, only to find that most of the text has been replaced by a single URL repeated several times. I have not been able to track down the source of the problem, which is intermittent. (sigh)

Moving on to happier thoughts, it’s a new year. Happy 2016!

As a way of swinging into the new year, here’s a brief wrap up for 2015.

International colleagues

As always, I thank my international colleagues David Bruggeman (Pasco Phronesis blog), Dexter Johnson (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website), and Dr. Andrew Maynard (2020 Science blog and the Risk Innovation Laboratory at Arizona State University), all of whom have been blogging as long as or longer than I have (FYI, FrogHeart began in April/May 2008). More importantly, they have been wonderful sources of information and inspiration.

In particular, David, thank you for keeping me up to date on the Canadian and international science policy situations. Also, darn you for scooping me on the Canadian science policy scene, on more than one occasion.

Dexter, thank you for all those tidbits about the science and the business of nanotechnology that you tuck into your curated blog. There’s always a revelation or two to be found in your writings.

Andrew, congratulations on your move to Arizona State University (from the University of Michigan Risk Science Center) where you are founding their Risk Innovation Lab.

While Andrew’s blog has become more focused on the topic of risk, Andrew continues to write about nanotechnology by extending the topic to emerging technologies.

In fact, I have a Dec. 3, 2015 post featuring a recent Nature Nanotechnology article by Andrew on the occasion of the upcoming 2016 World Economic Forum in Davos. In it he discusses new approaches to risk occasioned by the rise of emerging technologies such as synthetic biology, nanotechnology, and more.

While Tim Harper, serial entrepreneur and scientist, is not actively blogging about nanotechnology these days, his writings do pop up in various places, notably on the Azonano website where he is listed as an expert, which he most assuredly is. His focus these days is on establishing graphene-based startups.

Moving on to another somewhat related topic. While no one else seems to be writing about nanotechnology as extensively as I do, there are many, many Canadian science bloggers.

Canadian colleagues

Thank you to Gregor Wolbring, ur-Canadian science blogger and professor at the University of Calgary. His writing about human enhancement has become increasingly timely as we continue to introduce electronics onto and into our bodies. While he writes regularly, I don’t believe he’s blogging regularly. However, you can find out more about Gregor and his work at http://www.crds.org/research/faculty/Gregor_Wolbring2.shtml or on his Facebook page at https://www.facebook.com/GregorWolbring

Science Borealis (scroll down to get to the feeds), a Canadian science blog aggregator, is my main source of information on the Canadian scene. Thank you for my second Editors’ Pick award. In 2014 the award was in the Science in Society category and in 2015 it was in the Engineering & Tech category (last item on the list).

While I haven’t yet heard about the results of Paige Jarreau’s and Science Borealis’ joint survey of Canadian science blog readers (the reader doesn’t have to be Canadian but the science blog has to be), I was delighted to be asked and to participate. My Dec. 14, 2015 posting listed preliminary results,

They have compiled some preliminary results:

  • 21 bloggers + Science Borealis hosted the survey.
  • 523 respondents began the survey.
  • 338 respondents entered their email addresses to win a prize
  • 63% of 400 Respondents are not science bloggers
  • 56% of 402 Respondents describe themselves as scientists
  • 76% of 431 Respondents were not familiar with Science Borealis before taking the survey
  • 85% of 403 Respondents often, very often or always seek out science information online.
  • 59% of 402 Respondents rarely or never seek science content that is specifically Canadian
  • Of 400 Respondents, locations were: 35% Canada, 35% US, 30% Other.

And most of all, a heartfelt thank you to all who read this blog.

FrogHeart and 2015

There won’t be any statistics from the software packaged with my hosting service (AWSTATS and Webalizer). Google and its efforts to minimize spam (or so it claims) had a devastating effect on my visit numbers. As I had used those numbers as motivation, fantasizing that my readership was increasing, I had to find other means of motivation. I’m not quite sure how I did it, but I upped publication to three posts per day (five-day week) throughout most of the year.

With roughly 260 working days in a year, that would have meant a total of 780 posts. I’ve rounded that down to 700 posts to allow for days off and days where I didn’t manage three.

In 2015 I logged my 4000th post and substantially contributed to the Science Borealis 2015 output. In the editors’ Dec. 20, 2015 post,

… Science Borealis now boasts a membership of 122 blogs  — about a dozen up from last year. Together, this year, our members have posted over 4,400 posts, with two weeks still to go….

At a rough guess, I’d estimate that FrogHeart was responsible for about 15% of the Science Borealis output, with the other 121 member blogs responsible for the remaining 85%.
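For anyone who wants to check that arithmetic, here is a minimal back-of-the-envelope sketch in Python (the figures are the rough estimates above, not exact counts):

# Rough sanity check of the 2015 posting estimates (all figures approximate)
posts_per_day = 3
working_days = 260                    # roughly 52 weeks x 5 working days
theoretical_total = posts_per_day * working_days   # 780 posts
estimated_posts = 700                 # rounded down for days off and lighter days

science_borealis_total = 4400         # posts from all 122 member blogs, per the editors' post
share = estimated_posts / science_borealis_total

print(theoretical_total)              # 780
print(f"{share:.0%}")                 # roughly 16%, in line with the "about 15%" guess above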

That’s enough for 2015.

FrogHeart and 2016

Bluntly, I do not know anything other than that a change of some sort is likely.

Hopefully, I will be doing more art/science projects (my last one was ‘A digital poetry of gold nanoparticles’). I was awarded a small grant ($400 CAD) from the Canadian Academy of Independent Scholars (thank you!) for a spoken word project to be accomplished later this year.

As for this blog, I hope to continue.

In closing, I think it’s only fair to share Joan Armatrading’s song, ‘Eating the bear’. May we all do so in 2016,

Bonne Année!

Managing risks in a world of converging technology (the fourth industrial revolution)

Finally there’s an answer to the question: What (!!!) is the fourth industrial revolution? (I took a guess [wrongish] in my Nov. 20, 2015 post about a special presentation at the 2016 World Economic Forum’s IdeasLab.)

Andrew Maynard in a Dec. 3, 2015 think piece (also called a ‘thesis’) for Nature Nanotechnology answers the question,

… an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and … is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.)

In anticipation of the 2016 World Economic Forum (WEF), which has the fourth industrial revolution as its theme, Andrew explains how he sees the situation we are sliding into (from Andrew Maynard’s think piece),

As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties.

There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decision-making and responsive governance.

He also lists some recommendations,

Fostering effective multi-stakeholder dialogues.

Encouraging actionable empathy.

Providing educational opportunities for current and future stakeholders.

Developing next-generation foresight capabilities.

Transforming approaches to risk.

Investing in public–private partnerships.

Andrew concludes with this,

… The good news is that, in fields such as nanotechnology and synthetic biology, we have already begun to develop the skills to do this — albeit in a small way. We now need to learn how to scale up our efforts, so that our convergence in working together to build a better future mirrors the convergence of the technologies that will help achieve this.

It’s always a pleasure to read Andrew’s work as it’s thoughtful. I was surprised (since Andrew is a physicist by training) and happy to see the recommendation for “actionable empathy.”

Although I don’t always agree with him, on this occasion I don’t have any particular disagreements. However, I think it would be a good idea to include a recommendation or two covering the certainty that we will get something wrong and will have to work quickly to right things. I’m thinking primarily of governments, which are notoriously slow to respond with legislation for new developments and equally slow to change that legislation when the situation changes.

The technological environment Andrew is describing is dynamic; that is, it is fast-moving and changing at a pace we have yet to properly conceptualize. Governments will need to change so that they can respond in an agile fashion. My suggestion is:

Develop policy task forces that can be convened in hours and given the authority to respond to an immediate situation, with oversight after the fact.

Getting back to Andrew Maynard, you can find his think piece in its entirety via this link and citation,

Navigating the fourth industrial revolution by Andrew D. Maynard. Nature Nanotechnology 10, 1005–1006 (2015) doi:10.1038/nnano.2015.286 Published online 03 December 2015

This paper is behind a paywall.