Tag Archives: Yale University

Yes! Art, genetic modifications, gene editing, and xenotransplantation at the Vancouver Biennale (Canada)

Patricia Piccinini’s Curious Imaginings Courtesy: Vancouver Biennale [downloaded from http://dailyhive.com/vancouver/vancouver-biennale-unsual-public-art-2018/]

Up to this point, I’ve been a little jealous of the Art/Sci Salon’s (Toronto, Canada) January 2018 workshops for artists and discussions about CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 and its social implications. (See my January 10, 2018 posting for more about the events.) Now, it seems Vancouver may be in line for its ‘own’ discussion about CRISPR and the implications of gene editing. The image you saw (above) represents one of the installations being hosted by the 2018 – 2020 edition of the Vancouver Biennale.

While this posting is mostly about the Biennale and Piccinini’s work, there is a ‘science’ subsection featuring the science of CRISPR and xenotransplantation. Getting back to the Biennale and Piccinini: A major public art event since 1988, the Vancouver Biennale has hosted over 91 outdoor sculptures and new media works by more than 78 participating artists from over 25 countries and 4 continents.

Quickie description of the 2018 – 2020 Vancouver Biennale

The latest edition of the Vancouver Biennale was featured in a June 6, 2018 news item on the Daily Hive (Vancouver),

The Vancouver Biennale will be bringing new —and unusual— works of public art to the city beginning this June.

The theme for this season’s Vancouver Biennale exhibition is “re-IMAGE-n” and it kicks off on June 20 [2018] in Vanier Park with Saudi artist Ajlan Gharem’s Paradise Has Many Gates.

Gharem’s architectural chain-link sculpture resembles a traditional mosque; the piece is meant to challenge the notions of religious orthodoxy and encourages individuals to imagine a space free of Islamophobia.

Melbourne artist Patricia Piccinini’s Curious Imaginings is expected to be one of the most talked about installations of the exhibit. Her style of “oddly captivating, somewhat grotesque, human-animal hybrid creature” is meant to be shocking and thought-provoking.

Piccinini’s interactive [emphasis mine] experience will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

Piccinini’s work will be displayed in the 105-year-old Patricia Hotel in Vancouver’s Strathcona neighbourhood. The 90-day ticketed exhibition [emphasis mine] is scheduled to open this September [2018].

Given that this blog is focused on nanotechnology and other emerging technologies such as CRISPR, I’m focusing on Piccinini’s work and its art/science or sci-art status. This image from GOMA (Gallery of Modern Art), where Piccinini’s ‘Curious Affection’ installation is being shown from March 24 – Aug. 5, 2018 in Brisbane, Queensland, Australia, may give you some sense of what one of her installations is like,

Courtesy: Queensland Art Gallery | Gallery of Modern Art (QAGOMA)

I spoke with Serena at the Vancouver Biennale office and asked about the ‘interactive’ aspect of Piccinini’s installation. She suggested the term ‘immersive’ as an alternative. In other words, you won’t be playing with the sculptures or pressing buttons and interacting with computer screens or robots. She also noted that the ticket prices have not been set yet and that they are currently developing events focused on the issues raised by the installation. She knew that 2018 is the 200th anniversary of the publication of Mary Shelley’s Frankenstein but I’m not sure how the Biennale folks plan (or don’t plan) to integrate any recognition of the novel’s impact on the discussions about ‘new’ technologies. They expect Piccinini will visit Vancouver. (Note 1: Piccinini’s work can also be seen in a group exhibition titled Frankenstein’s Birthday Party at the Hosfelt Gallery in San Francisco (California, US) from June 23 – August 11, 2018. Note 2: I featured a number of international events commemorating the 200th anniversary of the publication of Mary Shelley’s novel, Frankenstein, in my Feb. 26, 2018 posting. Note 3: The term ‘Frankenfoods’ helped to shape the discussion of genetically modified organisms and the food supply on this planet. It was a wildly successful campaign for activists, affecting legislation in some areas of research. Scientists have not been as enthusiastic about the effects. My January 15, 2009 posting briefly traces a history of the term.)

The 2018 – 2020 Vancouver Biennale and science

A June 7, 2018 Vancouver Biennale news release provides more detail about the current series of exhibitions,

The Biennale is also committed to presenting artwork at the cutting edge of discussion and in keeping with the STEAM (science, technology, engineering, arts, math[ematics]) approach to integrating the arts and sciences. In August [2018], Colombian/American visual artist Jessica Angel will present her monumental installation Dogethereum Bridge at Hinge Park in Olympic Village. Inspired by blockchain technology, the artwork’s design was created through the integration of scientific algorithms, new developments in technology, and the arts. This installation, which will serve as an immersive space and collaborative hub for artists and technologists, will host a series of activations with blockchain as the inspirational jumping-off point.

In what is expected to become one of North America’s most talked-about exhibitions of the year, Melbourne artist Patricia Piccinini’s Curious Imaginings will see the intersection of art, science, and ethics. For the first time in the Biennale’s fifteen years of creating transformative experiences, and in keeping with the 2018-2020 theme of “re-IMAGE-n,” the Biennale will explore art in unexpected places by exhibiting in unconventional interior spaces. The hyperrealist “world of oddly captivating, somewhat grotesque, human-animal hybrid creatures” will be the artist’s first exhibit in a non-museum setting, transforming a wing of the 105-year-old Patricia Hotel. Situated in Vancouver’s oldest neighbourhood of Strathcona, Piccinini’s interactive experience will “challenge us to explore the social impacts of emerging bio-technology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.” In this intimate hotel setting located in a neighborhood continually undergoing its own change, Curious Imaginings will empower visitors to personally consider questions posed by the exhibition, including the promises and consequences of genetic research and human interference. …

There are other pieces being presented at the Biennale but my special interest is in the art/sci pieces and, at this point, CRISPR.

Piccinini in more depth

You can find out more about Patricia Piccinini in her biography on the Vancouver Biennale website but I found this Char Larsson April 7, 2018 article for the Independent (UK) more informative (Note: A link has been removed),

Patricia Piccinini’s sculptures are deeply disquieting. Walking through Curious Affection, her new solo exhibition at Brisbane’s Gallery of Modern Art, is akin to entering a science laboratory full of DNA experiments. Made from silicone, fibreglass and even human hair, her sculptures are breathtakingly lifelike, however, we can’t be sure what life they are like. The artist creates an exuberant parallel universe where transgenic experiments flourish and human evolution has given way to genetic engineering and DNA splicing.

Curious Affection is a timely and welcome recognition of Piccinini’s enormous contribution, reaching back to the mid-1990s. Working across a variety of mediums including photography, video and drawing, she is perhaps best known for her hyperreal creations.

As a genre, hyperrealism depends on the skill of the artist to create the illusion of reality. To be truly successful, it must convince the spectator of its realness. Piccinini acknowledges this demand, but with a delightful twist. The excruciating attention to detail deliberately solicits our desire to look, only to generate unease, as her sculptures are imbued with a fascinating otherness. Part human, part animal, the works are uncannily familiar, but also alarmingly “other”.

Inspired by advances in genetically modified pigs to generate replacement organs for humans [also known as xenotransplantation], we are reminded that Piccinini has always been at the forefront of debates concerning the possibilities of science, technology and DNA cloning. She does so, however, with a warm affection and sense of humour, eschewing the hysterical anxiety frequently accompanying these scientific developments.

Beyond the astonishing level of detail achieved by working with silicone and fibreglass, there is an ethics at work here. Piccinini is asking us not to avert our gaze from the other, and in doing so, to develop empathy and understanding through the encounter.

I encourage anyone who’s interested to read Larsson’s entire piece (April 7, 2018 article).

According to her Wikipedia entry, Piccinini works in a variety of media including video, sound, sculpture, and more. She also has her own website.

Gene editing and xenotransplantation

Sarah Zhang’s June 8, 2018 article for The Atlantic provides a peek at the extraordinary degree of interest and competition in the field of gene editing and CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 research (Note: A link has been removed),

China Is Genetically Engineering Monkeys With Brain Disorders

Guoping Feng applied to college the first year that Chinese universities reopened after the Cultural Revolution. It was 1977, and more than a decade’s worth of students—5.7 million—sat for the entrance exams. Feng was the only one in his high school to get in. He was assigned—by chance, essentially—to medical school. Like most of his contemporaries with scientific ambitions, he soon set his sights on graduate studies in the United States. “China was really like 30 to 50 years behind,” he says. “There was no way to do cutting-edge research.” So in 1989, he left for Buffalo, New York, where for the first time he saw snow piled several feet high. He completed his Ph.D. in genetics at the State University of New York at Buffalo.

Feng is short and slim, with a monk-like placidity and a quick smile, and he now holds an endowed chair in neuroscience at MIT, where he focuses on the genetics of brain disorders. His 45-person lab is part of the McGovern Institute for Brain Research, which was established in 2000 with the promise of a $350 million donation, the largest ever received by the university. In short, his lab does not lack for much.

Yet Feng now travels to China several times a year, because there, he can pursue research he has not yet been able to carry out in the United States. [emphasis mine] …

Feng had organized a symposium at SIAT [Shenzhen Institutes of Advanced Technology], and he was not the only scientist who traveled all the way from the United States to attend: He invited several colleagues as symposium speakers, including a fellow MIT neuroscientist interested in tree shrews, a tiny mammal related to primates and native to southern China, and Chinese-born neuroscientists who study addiction at the University of Pittsburgh and SUNY Upstate Medical University. Like Feng, they had left China in the ’80s and ’90s, part of a wave of young scientists in search of better opportunities abroad. Also like Feng, they were back in China to pursue a type of cutting-edge research too expensive and too impractical—and maybe too ethically sensitive—in the United States.

Here’s what precipitated Feng’s work in China, (from Zhang’s article; Note: Links have been removed)

At MIT, Feng’s lab worked on genetically engineering a monkey species called marmosets, which are very small and genuinely bizarre-looking. They are cheaper to keep due to their size, but they are a relatively new lab animal, and they can be difficult to train on lab tasks. For this reason, Feng also wanted to study Shank3 on macaques in China. Scientists have been cataloging the social behavior of macaques for decades, making them an obvious model for studies of disorders like autism that have a strong social component. Macaques are also more closely related to humans than marmosets, making their brains a better stand-in for those of humans.

The process of genetically engineering a macaque is not trivial, even with the advanced tools of CRISPR. Researchers begin by dosing female monkeys with the same hormones used in human in vitro fertilization. They then collect and fertilize the eggs, and inject the resulting embryos with CRISPR proteins using a long, thin glass needle. Monkey embryos are far more sensitive than mice embryos, and can be affected by small changes in the pH of the injection or the concentration of CRISPR proteins. Only some of the embryos will have the desired mutation, and only some will survive once implanted in surrogate mothers. It takes dozens of eggs to get to just one live monkey, so making even a few knockout monkeys required the support of a large breeding colony.

The first Shank3 macaque was born in 2015. Four more soon followed, bringing the total to five.

To visit his research animals, Feng now has to fly 8,000 miles across 12 time zones. It would be a lot more convenient to carry out his macaque research in the United States, of course, but so far, he has not been able to.

He originally inquired about making Shank3 macaques at the New England Primate Research Center, one of eight national primate research centers then funded by the National Institutes of Health in partnership with a local institution (Harvard Medical School, in this case). The center was conveniently located in Southborough, Massachusetts, just 20 miles west of the MIT campus. But in 2013, Harvard decided to shutter the center.

The decision came as a shock to the research community, and it was widely interpreted as a sign of waning interest in primate research in the United States. While the national primate centers have been important hubs of research on HIV, Zika, Ebola, and other diseases, they have also come under intense public scrutiny. Animal-rights groups like the Humane Society of the United States have sent investigators to work undercover in the labs, and the media has reported on monkey deaths in grisly detail. Harvard officially made its decision to close for “financial” reasons. But the announcement also came after the high-profile deaths of four monkeys from improper handling between 2010 and 2012. The deaths sparked a backlash; demonstrators showed up at the gates. The university gave itself two years to wind down their primate work, officially closing the center in 2015.

“They screwed themselves,” Michael Halassa, the MIT neuroscientist who spoke at Feng’s symposium, told me in Shenzhen. Wei-Dong Yao, another one of the speakers, chimed in, noting that just two years later CRISPR has created a new wave of interest in primate research. Yao was one of the researchers at Harvard’s primate center before it closed; he now runs a lab at SUNY Upstate Medical University that uses genetically engineered mouse and human stem cells, and he had come to Shenzhen to talk about restarting his addiction research on primates.

Here comes the competition (from Zhang’s article; Note: Links have been removed),

While the U.S. government’s biomedical research budget has been largely flat, both national and local governments in China are eager to raise their international scientific profiles, and they are shoveling money into research. A long-rumored, government-sponsored China Brain Project is supposed to give neuroscience research, and primate models in particular, a big funding boost. Chinese scientists may command larger salaries, too: Thanks to funding from the Shenzhen local government, a new principal investigator returning from overseas can get 3 million yuan—almost half a million U.S. dollars—over his or her first five years. China is even finding success in attracting foreign researchers from top U.S. institutions like Yale.

In the past few years, China has seen a miniature explosion of genetic engineering in monkeys. In Kunming, Shanghai, and Guangzhou, scientists have created monkeys engineered to show signs of Parkinson’s, Duchenne muscular dystrophy, autism, and more. And Feng’s group is not even the only one in China to have created Shank3 monkeys. Another group—a collaboration primarily between researchers at Emory University and scientists in China—has done the same.

Chinese scientists’ enthusiasm for CRISPR also extends to studies of humans, which are moving much more quickly, and in some cases under less oversight, than in the West. The first studies to edit human embryos and first clinical trials for cancer therapies using CRISPR have all happened in China. [emphases mine]

Some ethical issues are also covered (from Zhang’s article),

Parents with severely epileptic children had asked him if it would be possible to study the condition in a monkey. Feng told them what he thought would be technically possible. “But I also said, ‘I’m not sure I want to generate a model like this,’” he recalled. Maybe if there were a drug to control the monkeys’ seizures, he said: “I cannot see them seizure all the time.”

But is it ethical, he continued, to let these babies die without doing anything? Is it ethical to generate thousands or millions of mutant mice for studies of brain disorders, even when you know they will not elucidate much about human conditions?

Primates should only be used if other models do not work, says Feng, and only if a clear path forward is identified. The first step in his work, he says, is to use the Shank3 monkeys to identify the changes the mutations cause in the brain. Then, researchers might use that information to find targets for drugs, which could be tested in the same monkeys. He’s talking with the Oregon National Primate Research Center about carrying out similar work in the United States. … [Note: I have a three-part series about CRISPR and germline editing* in the US, precipitated by research coming out of Oregon. Part 1, which links to the other parts, is here.]

Zhang’s June 8, 2018 article is excellent and I highly recommend reading it.

I touched on the topic of xenotransplantation in a commentary on a book about the science of the television series, Orphan Black, in a January 31, 2018 posting (Note: A chimera is what you use to incubate a ‘human’ organ for transplantation or, more accurately, xenotransplantation),

On the subject of chimeras, the Canadian Broadcasting Corporation (CBC) featured a January 26, 2017 article about the pig-human chimeras on its website along with a video,

The end

I am very excited to see Piccinini’s work come to Vancouver. There have been a number of wonderful art and art/science installations and discussions here but this is the first one (I believe) to tackle the emerging gene editing technologies and the issues they raise. (It also fits in rather nicely with the 200th anniversary of the publication of Mary Shelley’s Frankenstein which continues to raise issues and stimulate discussion.)

In addition to the ethical issues raised in Zhang’s article, there are some other philosophical questions:

  • what does it mean to be human?
  • if we are going to edit genes to create human/animal hybrids, what are they and how do they fit into our current animal/human schema?
  • are you still human if you’ve had an organ transplant where the organ was incubated in a pig?

There are also going to be legal issues. In addition to any questions about legal status, there are fights about intellectual property, such as the one involving Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley (March 15, 2017 posting).

While I’m thrilled about the Piccinini installation, it should be noted that the issues raised by the other artworks hosted in this edition of the Biennale are also important. Happily, they have been broached here in Vancouver before and I suspect this will result in more nuanced ‘conversations’ than are possible when a ‘new’ issue is introduced.

Bravo 2018 – 2020 Vancouver Biennale!

* Germline editing is gene editing that affects subsequent generations, as opposed to editing out a mutated gene for the lifetime of a single individual.

Art/sci and CRISPR links

This art/science posting may prove of some interest:

The connectedness of living things: an art/sci project in Saskatchewan: evolutionary biology (February 16, 2018)

A selection of my CRISPR posts:

CRISPR and editing the germline in the US (part 1 of 3): In the beginning (August 15, 2017)

NOTE: An introductory CRISPR video describing how CRISPR/Cas9 works was embedded in part 1.

Why don’t you CRISPR yourself? (January 25, 2018)

Editing the genome with CRISPR (clustered regularly interspaced short palindromic repeats)-carrying nanoparticles (January 26, 2018)

Immune to CRISPR? (April 10, 2018)

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more, with all of these impacts described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunications Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as an agency likely to host the 2018 AI for Good Global Summit. But, it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, it’s an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and** one of the organizers of the AI for Good Global Summit 2018, kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, the first of its kind, about beneficial AI. This year the event has ballooned to include 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. The 2017 event featured Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) as a speaker.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes.

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation on the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.

Werner also pointed out, in response to my surprise over the ITU’s role with regard to this AI initiative, that the ITU is the only UN agency which has 192* member states (countries), 150 universities, and over 700 industry members as well as other member entities, which gives them tremendous breadth of reach. As well, the organization, founded in 1865 as the International Telegraph Union, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in their Wikipedia entry.)

Finally

There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” – Houlin Zhao, Secretary-General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,

https://www.itu.int/en/ITU-T/AI/2018/Pages/webcast.aspx

For those of us on the West Coast of Canada and in other parts distant from Geneva, you will want to take the nine-hour time difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

*Redundant ‘and’ removed on July 19, 2018.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although that is never really explained in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
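The training loop described above (inputs fed forward through layers, actual outputs compared to expected ones, and the predictive error corrected through repetition) can be sketched in a few lines of Python. This is a minimal illustration, not any system discussed in the paper; the toy task, layer sizes and learning rate are all arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and expected outputs (the XOR problem).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: deeper layers build more abstract features.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(2000):
    # Forward pass: each layer refines the representation of the input.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Compare actual outputs to expected ones (mean squared error).
    err = out - y
    losses.append(float((err ** 2).mean()))

    # Backward pass: correct the predictive error, then repeat.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

# Repetition and optimization should have reduced the error.
print(losses[0] > losses[-1])
```

The “training its own pattern recognition” in the quoted text is exactly this loop: the network’s only feedback is the gap between its output and the expected one.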

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNN creations could in theory produce an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at the University of California, Berkeley, a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law, Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Curiosity may not kill the cat but, in science, it might be an antidote to partisanship

I haven’t stumbled across anything from the Cultural Cognition Project at Yale Law School in years so before moving onto their latest news, here’s more about the project,

The Cultural Cognition Project is a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs. Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether global warming is a serious threat; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities. Project members are using the methods of various disciplines — including social psychology, anthropology, communications, and political science — to chart the impact of this phenomenon and to identify the mechanisms through which it operates. The Project also has an explicit normative objective: to identify processes of democratic decisionmaking by which society can resolve culturally grounded differences in belief in a manner that is both congenial to persons of diverse cultural outlooks and consistent with sound public policymaking.

It’s nice to catch up with some of the project’s latest work, from a Jan. 26, 2017 Yale University news release (also on EurekAlert),

Disputes over science-related policy issues such as climate change or fracking often seem as intractable as other politically charged debates. But in science, at least, simple curiosity might help bridge that partisan divide, according to new research.

In a study slated for publication in the journal Advances in Political Psychology, a Yale-led research team found that people who are curious about science are less polarized in their views on contentious issues than less-curious peers.

In an experiment, they found out why: Science-curious individuals are more willing to engage with surprising information that runs counter to their political predispositions.

“It’s a well-established finding that most people prefer to read or otherwise be exposed to information that fits rather than challenges their political preconceptions,” said research team leader Dan Kahan, Elizabeth K. Dollard Professor of Law and professor of psychology at Yale Law School. “This is called the echo-chamber effect.”

But science-curious individuals are more likely to venture out of that chamber, he said.

“When they are offered the choice to read news articles that support their views or challenge them on the basis of new evidence, science-curious individuals opt for the challenging information,” Kahan said. “For them, surprising pieces of evidence are bright shiny objects — they can’t help but grab at them.”

Kahan and other social scientists previously have shown that information based on scientific evidence can actually intensify — rather than moderate — political polarization on contentious topics such as gun control, climate change, fracking, or the safety of certain vaccines. The new study, which assessed science knowledge among subjects, reiterates the gaping divide separating how conservatives and liberals view science.

Republicans and Democrats with limited knowledge of science were equally likely to agree or disagree with the statement that “there is solid evidence that global warming is caused by human activity.” However, the most science-literate conservatives were much more likely to disagree with the statement than less-knowledgeable peers. The most knowledgeable liberals almost universally agreed with the statement.

“Whatever measure of critical reasoning we used, we always observed this depressing pattern: The members of the public most able to make sense of scientific evidence are in fact the most polarized,” Kahan said.

But knowledge of science, and curiosity about science, are not the same thing, the study shows.

The team became interested in curiosity because of an ongoing collaborative research project to improve public engagement with science documentaries, involving the Cultural Cognition Project at Yale Law School, the Annenberg Public Policy Center of the University of Pennsylvania, and Tangled Bank Studios at the Howard Hughes Medical Institute.

They noticed that the curious — those who sought out science stories for personal pleasure — not only were more interested in viewing science films on a variety of topics but also did not display political polarization associated with contentious science issues.

The new study found, for instance, that a much higher percentage of curious liberals and conservatives chose to read stories that ran counter to their political beliefs than did their non-curious peers.

“As their science curiosity goes up, the polarizing effects of higher science comprehension dissipate, and people move the same direction on contentious policies like climate change and fracking,” Kahan said.

It is unclear whether curiosity applied to other controversial issues can minimize the partisan rancor that infects other areas of society. But Kahan believes that the curious from both sides of the political and cultural divide should make good ambassadors to the more doctrinaire members of their own groups.

“Politically curious people are a resource who can promote enlightened self-government by sharing scientific information they are naturally inclined to learn and share,” he said.

Here’s my standard link to and citation for the paper,

Science Curiosity and Political Information Processing by Dan M. Kahan, Asheley R. Landrum, Katie Carpenter, Laura Helft, and Kathleen Hall Jamieson. Political Psychology Volume 38, Issue Supplement S1, February 2017, Pages 179–199 DOI: 10.1111/pops.12396 First published: 26 January 2017

This paper is open access and it can also be accessed here.

I last mentioned Kahan and The Cultural Cognition Project in an April 10, 2014 posting (scroll down about 45% of the way) about responsible science.

Communicating science effectively—a December 2016 book from the US National Academy of Sciences

I stumbled across this Dec. 13, 2016  essay/book announcement by Dr. Andrew Maynard and Dr. Dietram A. Scheufele on The Conversation,

Many scientists and science communicators have grappled with disregard for, or inappropriate use of, scientific evidence for years – especially around contentious issues like the causes of global warming, or the benefits of vaccinating children. A long debunked study on links between vaccinations and autism, for instance, cost the researcher his medical license but continues to keep vaccination rates lower than they should be.

Only recently, however, have people begun to think systematically about what actually works to promote better public discourse and decision-making around what is sometimes controversial science. Of course scientists would like to rely on evidence, generated by research, to gain insights into how to most effectively convey to others what they know and do.

As it turns out, the science on how to best communicate science across different issues, social settings and audiences has not led to easy-to-follow, concrete recommendations.

About a year ago, the National Academies of Sciences, Engineering and Medicine brought together a diverse group of experts and practitioners to address this gap between research and practice. The goal was to apply scientific thinking to the process of how we go about communicating science effectively. Both of us were a part of this group (with Dietram as the vice chair).

The public draft of the group’s findings – “Communicating Science Effectively: A Research Agenda” – has just been published. In it, we take a hard look at what effective science communication means and why it’s important; what makes it so challenging – especially where the science is uncertain or contested; and how researchers and science communicators can increase our knowledge of what works, and under what conditions.

At some level, all science communication has embedded values. Information always comes wrapped in a complex skein of purpose and intent – even when presented as impartial scientific facts. Despite, or maybe because of, this complexity, there remains a need to develop a stronger empirical foundation for effective communication of and about science.

Addressing this, the National Academies draft report makes an extensive number of recommendations. A few in particular stand out:

  • Use a systems approach to guide science communication. In other words, recognize that science communication is part of a larger network of information and influences that affect what people and organizations think and do.
  • Assess the effectiveness of science communication. Yes, researchers try, but often we still engage in communication first and evaluate later. Better to design the best approach to communication based on empirical insights about both audiences and contexts. Very often, the technical risks that scientists think must be communicated have nothing to do with the hopes or concerns public audiences have.
  • Get better at meaningful engagement between scientists and others to enable that “honest, bidirectional dialogue” about the promises and pitfalls of science that our committee chair Alan Leshner and others have called for.
  • Consider social media’s impact – positive and negative.
  • Work toward better understanding when and how to communicate science around issues that are contentious, or potentially so.

The paper version of the book has a cost but you can get a free online version. Unfortunately, I cannot copy and paste the book’s table of contents here and was not able to find a book index, although there is a handy list of reference texts.

I have taken a very quick look at the book. If you’re in the field, it’s definitely worth a look. It is, however, written for and by academics. If you look at the list of writers and reviewers, you will find over 90% are professors at one university or another. That said, I was happy to see references to Dan Kahan’s work at the Yale Law School’s Cultural Cognition Project cited. As it happens, they weren’t able to cite his latest work [***see my xxx, 2017 curiosity post***], released about a month after “Communicating Science Effectively: A Research Agenda.”

I was unable to find any reference to science communication via popular culture. I’m a little dismayed, as I feel this is a source of information seriously ignored by science communication specialists and academicians, though not by the folks at MIT (Massachusetts Institute of Technology), who announced a wireless app the same week it was featured in an episode of the US television comedy, The Big Bang Theory. Here’s more about MIT’s emotion detection wireless app from a Feb. 1, 2017 news release (also on EurekAlert),

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.
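As a rough illustration of that interval scoring, here is a sketch of how per-second features might be grouped into five-second windows, each of which then receives a sentiment label. The `classify()` stub and the feature names (`t`, `energy`) are invented for the example; the actual system uses a trained deep model over audio, text transcriptions and physiological signals.

```python
WINDOW_SECONDS = 5.0

def classify(window):
    """Placeholder classifier: thresholds a mock 'energy' value.
    Stands in for the deep model described in the news release."""
    mean_energy = sum(f["energy"] for f in window) / len(window)
    if mean_energy > 0.6:
        return "positive"
    if mean_energy < 0.4:
        return "negative"
    return "neutral"

def sentiment_scores(features):
    """Group time-stamped feature dicts into 5 s windows, score each."""
    scores, window = [], []
    start = features[0]["t"] if features else 0.0
    for f in features:
        if f["t"] - start >= WINDOW_SECONDS:
            scores.append(classify(window))
            window, start = [], f["t"]
        window.append(f)
    if window:
        scores.append(classify(window))
    return scores

# Mock 15 seconds of one-sample-per-second features:
# high energy, then low, then middling.
mock = [{"t": t, "energy": 0.8 if t < 5 else 0.2 if t < 10 else 0.5}
        for t in range(15)]
print(sentiment_scores(mock))  # → ['positive', 'negative', 'neutral']
```

The “rewind the conversation” idea in the quote amounts to keeping this list of window labels alongside the timestamps.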

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.

Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”
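Alhanai’s point about organizing features at different layers can be illustrated with a small sketch: rather than concatenating everything at the base of the network, a low-level signal enters at the first layer and a more abstract feature is injected deeper in. The array sizes and feature names below are invented; this is not the MIT architecture, just the structural idea.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

# Mock features: raw accelerometer stats vs. abstract text sentiment.
raw_accel = rng.normal(size=4)       # low-level physiological signal
text_sentiment = rng.normal(size=2)  # higher-level, more abstract feature

# Layer 1 processes only the raw signal.
W1 = rng.normal(size=(4, 6))
h1 = relu(raw_accel @ W1)

# Layer 2 receives layer-1 output *plus* the abstract feature,
# injected mid-network instead of at the base.
W2 = rng.normal(size=(6 + 2, 3))
h2 = relu(np.concatenate([h1, text_sentiment]) @ W2)

print(h2.shape)  # (3,)
```

In a traditional network, `raw_accel` and `text_sentiment` would both be concatenated before `W1`; routing them to different depths lets each layer work at an appropriate level of abstraction.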

Results

Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

This research was made possible in part by the Samsung Strategy and Innovation Center.

Episode 14 of season 10 of The Big Bang Theory was titled “The Emotion Detection Automation”  (full episode can be found on this webpage) and broadcast on Feb. 2, 2017. There’s also a Feb. 2, 2017 recap (recapitulation) by Lincee Ray for EW.com (it seems Ray is unaware that there really is such a machine),

Who knew we would see the day when Sheldon and Raj figured out solutions for their social ineptitudes? Only The Big Bang Theory writers would think to tackle our favorite physicists’ lack of social skills with an emotion detector and an ex-girlfriend focus group. It’s been a while since I enjoyed both storylines as much as I did in this episode. That’s no bazinga.

When Raj tells the guys that he is back on the market, he wonders out loud what is wrong with his game. Why do women reject him? Sheldon receives the information like a scientist and runs through many possible answers. Raj shuts him down with a simple, “I’m fine.”

Sheldon is irritated when he learns that this obligatory remark is a mask for what Raj is really feeling. It turns out, Raj is not fine. Sheldon whines, wondering why no one just says exactly what’s on their mind. It’s quite annoying for those who struggle with recognizing emotional cues.

Lo and behold, Bernadette recently read about a gizmo that was created for people who have this exact same anxiety. MIT has a prototype, and because Howard is an alum, he can probably submit Sheldon’s name as a beta tester.

Of course this is a real thing. If anyone can build an emotion detector, it’s a bunch of awkward scientists with zero social skills.

This is the first time I’ve noticed an academic institution’s news release appearing almost simultaneously with a mention of its research in a popular culture television program, which suggests things have come a long way since my Aug. 28, 2012 post featuring news about a webinar by the National Academies’ Science and Entertainment Exchange, which connects film and television productions with scientists.

One last science/popular culture moment: Hidden Figures, a movie about the African American women who worked as human computers supporting NASA (US National Aeronautics and Space Administration) during the 1960s space race and the effort to get a man on the moon, was (shockingly) no. 1 at the US box office for a few weeks (there’s more about the movie in my Sept. 2, 2016 post covering then upcoming movies featuring science). After the movie was released, Mary Elizabeth Williams wrote up a Jan. 23, 2017 interview with the ‘Hidden Figures’ scriptwriter for Salon.com

I [Allison Schroeder] got on the phone with her [co-producer Renee Witt] and Donna [co-producer Donna Gigliotti] and I said, “You have to hire me for this; I was born to write this.” Donna sort of rolled her eyes and was like, “God, these Hollywood types would say anything.” I said, “No, no, I grew up at Cape Canaveral. My grandmother was a computer programmer at NASA, my grandfather worked on the Mercury prototype, and I interned there all through high school and then the summer after my freshman year at Stanford I interned. I worked at a missile launch company.”

She was like, “OK that’s impressive.” And I said, “No, I literally grew up climbing on the Mercury capsule — hitting all the buttons, trying to launch myself into space.”

She said, “Well do you think you can handle the math?” I said that I had to study a certain amount of math at Stanford for economics degree. She said, “Oh, all right, that sounds pretty good.”

I pitched her a few scenes. I pitched her the end of the movie that you saw with Katherine running the numbers as John Glenn is trying to get up in space. I pitched her the idea of one of the women as a mechanic and to see her legs underneath the engine. You’re used to seeing a guy like that, but what would it be like to see heels and pantyhose and a skirt and she’s a mechanic and fixing something? Those are some of the scenes that I pitched them, and I got the job.

I love that the film begins with setting up their mechanical aptitude. You set up these are women; you set up these women of color. You set up exactly what that means in this moment in history. It’s like you just go from there.

I was on a really tight timeline because this started as an indie film. It was just Donna Gigliotti, Renee Witt, me and the author Margot Lee Shetterly for about a year working on it. I was only given four weeks for research and 12 weeks for writing the first draft. I’m not sure if I hadn’t known NASA and known the culture and just knew what the machines would look like, knew what the prototypes looked like, if I could have done it that quickly. I turned in that draft and Donna was like, “OK you’ve got the math and the science; it’s all here. Now go have fun.” Then I did a few more drafts and that was really enjoyable because I could let go of the fact I did it and make sure that the characters and the drive of the story and everything just fit what needed to happen.

For anyone interested in the science/popular culture connection, David Bruggeman of the Pasco Phronesis blog does a better job than I do of keeping up with the latest doings.

Getting back to ‘Communicating Science Effectively: A Research Agenda’, even with a mention of popular culture, it is a thoughtful book on the topic.

A Moebius strip of moving energy (vibrations)

This research extends a theorem positing that waves adapt to slowly changing conditions and return to their original vibration, showing instead that the waves can be steered into a new state. A July 25, 2016 news item on ScienceDaily makes the announcement,

Yale physicists have created something similar to a Moebius strip of moving energy between two vibrating objects, opening the door to novel forms of control over waves in acoustics, laser optics, and quantum mechanics.

The discovery also demonstrates that a century-old physics theorem offers much greater freedom than had long been believed. …

A July 25, 2016 Yale University news release (also on EurekAlert) by Jim Shelton, which originated the news item, expands on the theme,

Yale’s experiment is deceptively simple in concept. The researchers set up a pair of connected, vibrating springs and studied the acoustic waves that traveled between them as they manipulated the shape of the springs. Vibrations — as well as other types of energy waves — are able to move, or oscillate, at different frequencies. In this instance, the springs vibrate at frequencies that merge, similar to a Moebius strip that folds in on itself.

The precise spot where the vibrations merge is called an “exceptional point.”

“It’s like a guitar string,” said Jack Harris, a Yale associate professor of physics and applied physics, and the study’s principal investigator. “When you pluck it, it may vibrate in the horizontal plane or the vertical plane. As it vibrates, we turn the tuning peg in a way that reliably converts the horizontal motion into vertical motion, regardless of the details of how the peg is turned.”

Unlike a guitar, however, the experiment required an intricate laser system to precisely control the vibrations, and a cryogenic refrigeration chamber in which the vibrations could be isolated from any unwanted disturbance.

The Yale experiment is significant for two reasons, the researchers said. First, it suggests a very dependable way to control wave signals. Second, it demonstrates an important — and surprising — extension to a long-established theorem of physics, the adiabatic theorem.

The adiabatic theorem says that waves will readily adapt to changing conditions if those changes take place slowly. As a result, if the conditions are gradually returned to their initial configuration, any waves in the system should likewise return to their initial state of vibration. In the Yale experiment, this does not happen; in fact, the waves can be manipulated into a new state.

“This is a very robust and general way to control waves and vibrations that was predicted theoretically in the last decade, but which had never been demonstrated before,” Harris said. “We’ve only scratched the surface here.”
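For readers who want the mathematics behind “exceptional point”: in a non-Hermitian system, two eigenvalues and their eigenvectors can coalesce. A minimal illustrative model (my sketch, not the Hamiltonian used in the paper) is two coupled modes with frequencies ω₁ and ω₂, coupling g, and loss γ on one mode:

```latex
H = \begin{pmatrix} \omega_1 - i\gamma & g \\ g & \omega_2 \end{pmatrix},
\qquad
\lambda_\pm = \frac{\omega_1 + \omega_2 - i\gamma}{2} \pm \sqrt{g^2 + \delta^2},
\qquad
\delta = \frac{\omega_1 - \omega_2 - i\gamma}{2}.
```

The exceptional point sits where $g^2 + \delta^2 = 0$ (for $\omega_1 = \omega_2$, at $g = \gamma/2$): both eigenvalues and their eigenvectors merge. Slowly encircling this point in parameter space returns the system to its starting parameters but on the other eigenvalue branch, which is the Moebius-strip behavior described above.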

In the same edition of Nature, a team from the Vienna University of Technology also presented research on a system for wave control via exceptional points.

Here’s a link to and a citation for the paper,

Topological energy transfer in an optomechanical system with exceptional points by H. Xu, D. Mason, Luyao Jiang, & J. G. E. Harris. Nature (2016) doi:10.1038/nature18604 Published online 25 July 2016

This paper is behind a paywall.

D-PLACE: an open access database of places, language, culture, and environment

In an attempt to be a bit more broad in my interpretation of the ‘society’ part of my commentary I’m including this July 8, 2016 news item on ScienceDaily (Note: A link has been removed),

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE — the Database of Places, Language, Culture and Environment — is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

A July 8, 2016 University of Toronto news release (also on EurekAlert), which originated the news item, expands on the theme,

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g. elevation, mean annual temperature), language family (e.g. Indo-European, Austronesian), or region (e.g. Siberia). The search results can be displayed on a map, a language tree or in a table, and can also be downloaded for further analysis.
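The kind of filtered query described above can be sketched in a few lines of Python; the records and field names below are invented for illustration and do not mirror D-PLACE’s actual schema:

```python
# Toy cross-cultural dataset (invented records, not real D-PLACE data)
societies = [
    {"name": "Society A", "language_family": "Austronesian",  "marriage": "monogamy", "elevation_m": 20},
    {"name": "Society B", "language_family": "Indo-European", "marriage": "polygyny", "elevation_m": 900},
    {"name": "Society C", "language_family": "Austronesian",  "marriage": "polygyny", "elevation_m": 15},
]

def query(records, **filters):
    # return records matching every field=value filter
    return [r for r in records if all(r.get(k) == v for k, v in filters.items())]

austronesian = query(societies, language_family="Austronesian")
austro_polygyny = query(societies, language_family="Austronesian", marriage="polygyny")
```

Crossing a cultural variable with a linguistic one, as in the second query, is exactly the sort of comparison the database is designed to make easy.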

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from the Max Planck Institute for the Science of Human History in Jena, Germany, University of Auckland, Colorado State University, University of Toronto, University of Bristol, Yale, Human Relations Area Files, Washington University in Saint Louis, University of Michigan, American Museum of Natural History, and City University of New York.

The diverse team included linguists, anthropologists, biogeographers, data scientists, ethnobiologists, and evolutionary ecologists, who employ a variety of research methods, including field-based primary data collection, compilation of cross-cultural data sources, and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”

Here’s a link to and a citation for the paper,

D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity by Kathryn R. Kirby, Russell D. Gray, Simon J. Greenhill, Fiona M. Jordan, Stephanie Gomes-Ng, Hans-Jörg Bibiko, Damián E. Blasi, Carlos A. Botero, Claire Bowern, Carol R. Ember, Dan Leehr, Bobbi S. Low, Joe McCarter, William Divale, Michael C. Gavin. PLOS ONE, 2016; 11 (7): e0158391. DOI: 10.1371/journal.pone.0158391. Published July 8, 2016.

This paper is open access.

You can find D-PLACE here.

While it might not seem that there would be a close link between anthropology and physics in the 19th and early 20th centuries, that information can be mined for more contemporary applications. For example, someone who wants to make a case for a more diverse scientific community may want to develop a social science approach to the discussion. The situation in my June 16, 2016 post titled: Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons, could be extended into a discussion and educational process using data from D-PLACE and other sources to make the point,

Science literacy may not be just for the public, it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission.  While I think the arguments in the case will likely revolve around factors far removed from science and or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisers on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher of how people’s background inform their practice of science, and that two different people may use the same scientific method, but think about the problem differently.

I’m guessing that both Scalia and Roberts and possibly others believe that science is the discovery and accumulation of facts. In this worldview science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that most science is a collection of beliefs and may be influenced by personal beliefs. For example, we believe we’ve proved the existence of the Higgs boson but no one associated with the research has ever stated unequivocally that it exists.

More generally, with D-PLACE and the recently announced Trans-Atlantic Platform (see my July 15, 2016 post about it), it seems Canada’s humanities and social sciences communities are taking strides toward greater international collaboration and a more profound investment in digital scholarship.

YBC 7289: a 3,800-year-old mathematical text and 3D printing at Yale University

1,300 years before Pythagoras came up with the theorem associated with his name, a school kid in Babylon formed a disc out of clay and scratched out the theorem while the surface was drying. According to an April 12, 2016 news item on phys.org, the Babylonians got to the theorem first (Note: A link has been removed),

Thirty-eight hundred years ago, on the hot river plains of what is now southern Iraq, a Babylonian student did a bit of schoolwork that changed our understanding of ancient mathematics. The student scooped up a palm-sized clump of wet clay, formed a disc about the size and shape of a hamburger, and let it dry down a bit in the sun. On the surface of the moist clay the student drew a diagram that showed the people of the Old Babylonian Period (1,900–1,700 B.C.E.) fully understood the principles of the “Pythagorean Theorem” 1300 years before Greek geometer Pythagoras was born, and were also capable of calculating the square root of two to six decimal places.

Today, thanks to the Internet and new digital scanning methods being employed at Yale, this ancient geometry lesson continues to be used in modern classrooms around the world.
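The square-root approximation the tablet records can be reproduced with the iterative averaging procedure the Babylonians are believed to have used (now usually called Heron’s method); a minimal Python sketch, with the tablet’s sexagesimal value 1;24,51,10 converted to decimal for comparison:

```python
# Heron's (Babylonian) method: start from a guess x and
# repeatedly average x with n / x, converging on sqrt(n).
def babylonian_sqrt(n, iterations=4):
    x = n  # crude initial guess
    for _ in range(iterations):
        x = (x + n / x) / 2
    return x

approx = babylonian_sqrt(2)

# YBC 7289 writes sqrt(2) in base 60 as 1;24,51,10
tablet_value = 1 + 24/60 + 51/60**2 + 10/60**3
```

A few iterations already agree with the tablet’s value to about six decimal places, matching the accuracy described in the news item.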

Just when you think it’s all about the theorem, the story, which originated in an April 11, 2016 Yale University news release by Patrick Lynch, takes a turn,

“This geometry tablet is one of the most-reproduced cultural objects that Yale owns — it’s published in mathematics textbooks the world over,” says Professor Benjamin Foster, curator of the Babylonian Collection, which includes the tablet. It’s also a popular teaching tool in Yale classes. “At the Babylonian Collection we have a very active teaching and learning function, and we regard education as one of the core parts of our mission,” says Foster. “We have graduate and undergraduate groups in our collection classroom every week.”

The tablet, formally known as YBC 7289, “Old Babylonian Period Mathematical Text,” came to Yale in 1909 as part of a much larger collection of cuneiform tablets assembled by J. Pierpont Morgan and donated to Yale. In the ancient Mideast, cuneiform writing was created by using a sharp stylus pressed into the surface of a soft clay tablet to produce wedge-like impressions representing pictographic words and numbers. Morgan’s donation of tablets and other artifacts formed the nucleus of the Yale Babylonian Collection, which now incorporates 45,000 items from the ancient Mesopotamian kingdoms.

Discoverying [sic] the tablet’s mathematical significance

The importance of the geometry tablet was first recognized by science historians Otto Neugebauer and Abraham Sachs in their 1945 book “Mathematical Cuneiform Texts.”

“Ironically, mathematicians today are much more fascinated with the Babylonians’ ability to accurately calculate irrational numbers like the square root of two than they are with the geometry demonstrations,” notes associate Babylonian Collection curator Agnete Lassen.

“The Old Babylonian Period produced many tablets that show complex mathematics, but it also produced things you might not expect from a culture this old, such as grammars, dictionaries, and word lists,” says Lassen. “One of the two main languages spoken in early Babylonia was dying out, and people were careful to document and save what they could on cuneiform tablets. It’s ironic that almost 4,000 years ago people were thinking about cultural preservation, [emphasis mine] and actively preserving their learning for future generations.”

This business about ancient peoples trying to preserve culture and learning for future generations suggests that the efforts in Palmyra, Syria (my April 6, 2016 post about 3D printing parts of Palmyra) are born of an age-old impulse. And then the story takes another turn and becomes a 3D printing story (from the Yale University news release),

Today, however, the tablet is a fragile lump of clay that would not survive routine handling in a classroom. In looking for alternatives that might bring the highlights of the Babylonian Collection to a wider audience, the collection’s curators partnered with Yale’s Institute for the Preservation of Cultural Heritage (IPCH) to bring the objects into the digital world.

Scanning at the IPCH

The IPCH Digitization Lab’s first step was to do reflectance transformation imaging (RTI) on each of fourteen Babylonian Collection objects. RTI is a photographic technique that enables a student or researcher to look at a subject with many different lighting angles. That’s particularly important for something like a cuneiform tablet, where there are complex 3D marks incised into the surface. With RTI you can freely manipulate the lighting, and see subtle surface variations that no ordinary photograph would reveal.

Chelsea Graham of the IPCH Digitization Lab and her colleague Yang Ying Yang of the Yale Computer Graphics Group then did laser scanning of the tablet to create a three-dimensional geometric model that can be freely rotated onscreen. The resulting 3D models can be combined with many other types of digital imaging to give researchers and students a virtual tablet onscreen, and the same data can be used to create a 3D printed facsimile that can be freely used in the classroom without risk to the delicate original.

3D printing digital materials

While virtual models on the computer screen have proved to be a valuable teaching and research resource, even the most accurate 3D model on a computer screen doesn’t convey the tactile impact and physicality of the real object. Yale’s Center for Engineering Innovation and Design has collaborated with the IPCH on a number of cultural heritage projects, and the center’s assistant director, Joseph Zinter, has used its 3D printing expertise on a wide range of engineering, basic science, and cultural heritage projects.

“Whether it’s a sculpture, a rare skull, or a microscopic neuron or molecule highly magnified, you can pick up a 3D printed model and hold it, and it’s a very different and important way to understand the data. Holding something in your hand is a distinctive learning experience,” notes Zinter.

Sharing cultural heritage projects in the digital world

Once a cultural artifact has entered the digital world there are practical problems with how to share the information with students and scholars. IPCH postdoctoral fellows Goze Akoglu and Eleni Kotoula are working with Yale computer science faculty member Holly Rushmeier to create an integrated collaborative software platform to support the research and sharing of cultural heritage artifacts like the Babylonian tablet.

“Right now cultural heritage professionals must juggle many kinds of software, running several types of specialized 2D and 3D media viewers as well as conventional word processing and graphics programs. Our vision is to create a single virtual environment that accommodates many kinds of media, as well as supporting communication and annotation within the project,” says Kotoula.

The wide sharing and disseminating of cultural artifacts is one advantage of digitizing objects, notes professor Rushmeier, “but the key thing about digital is the power to study large virtual collections. It’s not about scanning and modeling the individual object. When the scanned object becomes part of a large collection of digital data, then machine learning and search analysis tools can be run over the collection, allowing scholars to ask questions and make comparisons that aren’t possible by other means,” says Rushmeier.

Reflecting on the process that brings state-of-the-art digital tools to one of humanity’s oldest forms of writing, Graham said “It strikes me that this tablet has made a very long journey from classroom to classroom. People sometimes think the digital or 3D-printed models are just a novelty, or just for exhibitions, but you can engage and interact much more with the 3D printed object, or 3D model on the screen. I think the creators of this tablet would have appreciated the efforts to bring this fragile object back to the classroom.”

There is also a video highlighting the work,

Split some water molecules and save solar and wind (energy) for a future day

Professor Ted Sargent’s research team at the University of Toronto has developed a new technique for saving the energy harvested by sun and wind farms, according to a March 28, 2016 news item on Nanotechnology Now,

We can’t control when the wind blows and when the sun shines, so finding efficient ways to store energy from alternative sources remains an urgent research problem. Now, a group of researchers led by Professor Ted Sargent at the University of Toronto’s Faculty of Applied Science & Engineering may have a solution inspired by nature.

The team has designed the most efficient catalyst for storing energy in chemical form, by splitting water into hydrogen and oxygen, just like plants do during photosynthesis. Oxygen is released harmlessly into the atmosphere, and hydrogen, as H2, can be converted back into energy using hydrogen fuel cells.

Discovering a better way of storing energy from solar and wind farms is “one of the grand challenges in this field,” Ted Sargent says (photo above by Megan Rosenbloom via flickr) Courtesy: University of Toronto

A March 24, 2016 University of Toronto news release by Marit Mitchell, which originated the news item, expands on the theme,

“Today on a solar farm or a wind farm, storage is typically provided with batteries. But batteries are expensive, and can typically only store a fixed amount of energy,” says Sargent. “That’s why discovering a more efficient and highly scalable means of storing energy generated by renewables is one of the grand challenges in this field.”

You may have seen the popular high-school science demonstration where the teacher splits water into its component elements, hydrogen and oxygen, by running electricity through it. Today this requires so much electrical input that it’s impractical to store energy this way — too great a proportion of the energy generated is lost in the process of storing it.

This new catalyst facilitates the oxygen-evolution portion of the chemical reaction, making the conversion from H2O into O2 and H2 more energy-efficient than ever before. The new catalyst material is intrinsically over three times more efficient than the best state-of-the-art catalyst.
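For context, the catalyst targets the oxygen-evolution half-reaction of water electrolysis; the energy lost in storage shows up as the overpotential η that a good catalyst minimizes (standard electrochemistry, not figures from the paper):

```latex
\text{anode (oxygen evolution):}\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-, \qquad E^\circ = 1.23\ \mathrm{V}
\\
\text{cathode (hydrogen evolution):}\quad 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2}, \qquad E^\circ = 0\ \mathrm{V}
\\
\eta = E_{\text{applied}} - 1.23\ \mathrm{V}
```

The 1.23 V is the thermodynamic minimum; everything applied above it is lost as overpotential, which is the quantity a more efficient oxygen-evolving catalyst reduces.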

Details are offered in the news release,

The new catalyst is made of abundant and low-cost metals tungsten, iron and cobalt, which are much less expensive than state-of-the-art catalysts based on precious metals. It showed no signs of degradation over more than 500 hours of continuous activity, unlike other efficient but short-lived catalysts. …

“With the aid of theoretical predictions, we became convinced that including tungsten could lead to a better oxygen-evolving catalyst. Unfortunately, prior work did not show how to mix tungsten homogeneously with the active metals such as iron and cobalt,” says one of the study’s lead authors, Dr. Bo Zhang … .

“We invented a new way to distribute the catalyst homogenously in a gel, and as a result built a device that works incredibly efficiently and robustly.”

This research united engineers, chemists, materials scientists, mathematicians, physicists, and computer scientists across three countries. A chief partner in this joint theoretical-experimental study was a leading team of theorists at Stanford University and the SLAC National Accelerator Laboratory under the leadership of Dr. Aleksandra Vojvodic. The international collaboration included researchers at East China University of Science & Technology, Tianjin University, Brookhaven National Laboratory, the Canadian Light Source, and the Beijing Synchrotron Radiation Facility.

“The team developed a new materials synthesis strategy to mix multiple metals homogeneously — thereby overcoming the propensity of multi-metal mixtures to separate into distinct phases,” said Jeffrey C. Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at Massachusetts Institute of Technology. “This work impressively highlights the power of tightly coupled computational materials science with advanced experimental techniques, and sets a high bar for such a combined approach. It opens new avenues to speed progress in efficient materials for energy conversion and storage.”

“This work demonstrates the utility of using theory to guide the development of improved water-oxidation catalysts for further advances in the field of solar fuels,” said Gary Brudvig, a professor in the Department of Chemistry at Yale University and director of the Yale Energy Sciences Institute.

“The intensive research by the Sargent group in the University of Toronto led to the discovery of oxy-hydroxide materials that exhibit electrochemically induced oxygen evolution at the lowest overpotential and show no degradation,” said University Professor Gabor A. Somorjai of the University of California, Berkeley, a leader in this field. “The authors should be complimented on the combined experimental and theoretical studies that led to this very important finding.”

Here’s a link to and a citation for the paper,

Homogeneously dispersed, multimetal oxygen-evolving catalysts by Bo Zhang, Xueli Zheng, Oleksandr Voznyy, Riccardo Comin, Michal Bajdich, Max García-Melchor, Lili Han, Jixian Xu, Min Liu, Lirong Zheng, F. Pelayo García de Arquer, Cao Thang Dinh, Fengjia Fan, Mingjian Yuan, Emre Yassitepe, Ning Chen, Tom Regier, Pengfei Liu, Yuhang Li, Phil De Luna, Alyf Janmohamed, Huolin L. Xin, Huagui Yang, Aleksandra Vojvodic, Edward H. Sargent. Science  24 Mar 2016: DOI: 10.1126/science.aaf1525

This paper is behind a paywall.

Finding a way to prevent sunscreens from penetrating the skin

While nanosunscreens have been singled out for their possible impact on our health, the fact is that many sunscreens contain dangerous ingredients that penetrate the skin. A Dec. 14, 2015 news item on ScienceDaily describes some research into getting sunscreens to stay on the skin’s surface, avoiding penetration,

A new sunscreen has been developed that encapsulates the UV-blocking compounds inside bio-adhesive nanoparticles, which adhere to the skin well, but do not penetrate beyond the skin’s surface. These properties resulted in highly effective UV protection in a mouse model, without the adverse effects observed with commercial sunscreens, including penetration into the bloodstream and generation of reactive oxygen species, which can damage DNA and lead to cancer.

A US National Institute of Biomedical Imaging and Bioengineering (NIBIB) Dec. 14, 2015 news release, which originated the news item, expands on the theme (Note: Links have been removed),

Commercial sunscreens use compounds that effectively filter out damaging UV light. However, there is concern that these agents have a variety of harmful effects due to penetration past the surface skin. For example, these products have been found in human breast tissue and urine and are known to disrupt the normal function of some hormones. Also, the exposure of the UV filters to light can produce toxic reactive oxygen species that are destructive to cells and tissues and can cause tumors through DNA damage.

“This work applies a novel bioengineering idea to a little-known but significant health problem,” adds Jessica Tucker, Ph.D., Director of the NIBIB Program in Delivery Systems and Devices for Drugs and Biologics. “While we are all familiar with the benefits of sunscreen, the potential toxicities from sunscreen due to penetration into the body and creation of DNA-damaging agents are not well known. Bioengineering sunscreen to inhibit penetration and keep any DNA-damaging compounds isolated in the nanoparticle and away from the skin is a great example of how a sophisticated technology can be used to solve a problem affecting the health of millions of people.”

Bioengineers and dermatologists at Yale University in New Haven, Connecticut, combined their expertise in nanoparticle-based drug delivery and the molecular and cellular characteristics of the skin to address these potential health hazards of current commercial sunscreens.

The news release then goes on to provide some technical details,

The group encapsulated a commonly used sunscreen, padimate O (PO), inside a nanoparticle (a very small particle often used to transport drugs and other agents into the body). PO is related to the better-known sunscreen PABA.

The bioadhesive nanoparticle containing the sunscreen PO was tested on pigs for penetration into the skin. A control group of pigs received the PO alone, not encapsulated in a nanoparticle. The unencapsulated PO penetrated beyond the surface layers of skin, where it could potentially enter the bloodstream through blood vessels in the deeper skin layers. The PO inside the nanoparticle, however, remained on the surface of the skin and did not penetrate into deeper layers.

Because the bioadhesive nanoparticles, or BNPs, are larger than skin pores, it was somewhat expected that they could not enter the body by that route. However, skin is full of hair follicles that are larger than BNPs and so could provide a route into the body. Surprisingly, BNPs did not pass through the hair follicle openings either. Tests indicated that the adhesive properties of the BNPs caused them to stick to the skin surface, unable to move through the hair follicles.

Further testing showed that the BNPs were water resistant and remained on the skin for a day or more, yet were easily removed by towel wiping. They also disappeared in several days through natural exfoliation of the surface skin.

BNPs enhance the effect of sunscreen

An important test was whether the BNP-encapsulated sunscreen retained its UV filtering properties. The researchers used a mouse model to test whether PO blocked sunburn when encapsulated in the BNPs. The BNP formulation successfully provided the same amount of UV protection as the commercial products applied directly to the skin of the hairless mouse model. Surprisingly, this was achieved even though the BNPs carried only a fraction (5%) of the amount of commercial sunblock applied to the mice.

Finally, the encapsulated sunscreen was tested for the formation of damaging oxygen-carrying molecules known as reactive oxygen species (ROS) when exposed to UV light. The researchers hypothesized that any ROS created by the sunscreen’s interaction with UV would stay contained inside the BNP, unable to damage surrounding tissue. Following exposure to UV light, no damaging ROS were detected outside of the nanoparticle, indicating that any harmful agents that were formed remained inside the nanoparticle, unable to make contact with the skin.

“We are extremely pleased with the properties and performance of our BNP formulation,” says senior author Mark Saltzman, Ph.D., Yale School of Engineering and Applied Science. “The sunscreen-loaded BNPs combine the best properties of an effective sunscreen with a safety profile that alleviates the potential toxicities of the actual sunscreen product because it is encapsulated and literally never touches the skin.” Adds co-senior author Michael Girardi, M.D.: “Our nanoparticles performed as expected; however, these are preclinical findings. We are now in a position to assess the effects on human skin.”

So, all of this work has been done on animal models, which means that human clinical trials are the likely next step. As we wait, here’s a link to and a citation for this group’s paper,

A sunblock based on bioadhesive nanoparticles by Yang Deng, Asiri Ediriwickrema, Fan Yang, Julia Lewis, Michael Girardi, & W. Mark Saltzman. Nature Materials 14, 1278–1285 (2015). doi:10.1038/nmat4422 Published online 28 September 2015

This paper is behind a paywall.