Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release ever explains how it was made. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
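
[An aside from me: for readers wondering what “correct the predictive error through repetition and optimization” looks like in practice, here is a minimal, purely illustrative sketch in Python (using the PyTorch library and made-up data). It is not code from the paper or from any of the art-generating systems mentioned here.]

```python
# Illustrative sketch only: a tiny "deep" network that learns by comparing its
# outputs to expected ones and correcting the predictive error through
# repetition and optimization. Data, layer sizes and learning rate are arbitrary.
import torch
import torch.nn as nn

inputs = torch.randn(256, 64)              # made-up inputs with 64 features
targets = torch.randint(0, 10, (256,))     # made-up labels, 10 classes

# Each Linear + ReLU pair is one layer; deeper layers operate on progressively
# more abstract representations of the input.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

loss_fn = nn.CrossEntropyLoss()            # measures the predictive error
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):                    # repetition ...
    optimizer.zero_grad()
    outputs = model(inputs)                # actual outputs
    loss = loss_fn(outputs, targets)       # ... compared against expected ones
    loss.backward()                        # error propagated back through the layers
    optimizer.step()                       # ... and optimization
```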

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN creations is a combined product of technological automation on the one hand and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to their work – copyright protection.
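
[Another aside from me: the sort of image generation a platform like DeepDream popularized boils down to surprisingly little code. The sketch below is a rough, simplified illustration of the general idea – gradient ascent on an image so that a chosen layer of a pretrained network responds strongly – and not DeepDream’s actual recipe; the model, layer choice, step count and step size are all arbitrary assumptions on my part.]

```python
# Rough illustration of the DeepDream-style idea (not the original implementation):
# nudge an image, by gradient ascent, so that a chosen layer of a pretrained
# network fires as strongly as possible.
import torch
from torchvision import models

features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in features.parameters():
    p.requires_grad_(False)                # only the image gets updated

layers = features[:21]                     # truncate at an arbitrary mid-level layer

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise

for _ in range(50):
    loss = layers(image).norm()            # "dream": make this layer respond strongly
    loss.backward()
    with torch.no_grad():
        # normalized gradient-ascent step on the image itself
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
```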

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory produce an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Risks

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention to neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

A DNA switch for new electronic applications

I little dreamed when reading “The Double Helix: A Personal Account of the Discovery of the Structure of DNA” by James Watson that DNA (deoxyribonucleic acid) would one day become just another material for scientists to manipulate. A Feb. 20, 2017 news item on ScienceDaily describes the use of DNA as a material in electronics applications,

DNA, the stuff of life, may very well also pack quite the jolt for engineers trying to advance the development of tiny, low-cost electronic devices.

Much like flipping your light switch at home — only on a scale 1,000 times smaller than a human hair — an ASU [Arizona State University]-led team has now developed the first controllable DNA switch to regulate the flow of electricity within a single, atomic-sized molecule. The new study, led by ASU Biodesign Institute researcher Nongjian Tao, was published in the advance online edition of the journal Nature Communications.

Caption: DNA, the stuff of life, may very well also pack quite the jolt for engineers trying to advance the development of tiny, low-cost electronic devices. Courtesy: ASU

A Feb. 20, 2017 ASU news release (also on EurekAlert), which originated the news item, provides more detail,

“It has been established that charge transport is possible in DNA, but for a useful device, one wants to be able to turn the charge transport on and off. We achieved this goal by chemically modifying DNA,” said Tao, who directs the Biodesign Center for Bioelectronics and Biosensors and is a professor in the Fulton Schools of Engineering. “Not only that, but we can also adapt the modified DNA as a probe to measure reactions at the single-molecule level. This provides a unique way for studying important reactions implicated in disease, or photosynthesis reactions for novel renewable energy applications.”

Engineers often think of electricity like water, and the research team’s new DNA switch acts to control the flow of electrons on and off, just like water coming out of a faucet.

Previously, Tao’s research group had made several discoveries to understand and manipulate DNA to more finely tune the flow of electricity through it. They found they could make DNA behave in different ways — and could cajole electrons to flow like waves according to quantum mechanics, or “hop” like rabbits in the way electricity in a copper wire works — creating an exciting new avenue for DNA-based, nano-electronic applications.

Tao assembled a multidisciplinary team for the project, including ASU postdoctoral researchers Limin Xiang and Yueqi Li performing bench experiments, Julio Palma working on the theoretical framework, with further help and oversight from collaborators Vladimiro Mujica (ASU) and Mark Ratner (Northwestern University).

To accomplish their engineering feat, Tao’s group modified just one of DNA’s iconic double helix chemical letters, abbreviated as A, C, T or G, with another chemical group, called anthraquinone (Aq). Anthraquinone is a three-ringed carbon structure that can be inserted in between DNA base pairs and contains what chemists call a redox group (short for reduction-oxidation: reduction means gaining electrons, oxidation means losing them).

These chemical groups are also the foundation for how our bodies convert chemical energy through switches that send all of the electrical pulses in our brains and hearts, and that communicate signals within every cell, processes that may be implicated in the most prevalent diseases.

The Aq modification lets the DNA helix perform as a switch: the anthraquinone slips comfortably in between the rungs that make up the ladder of the DNA helix, bestowing it with a newfound ability to reversibly gain or lose electrons.

In their studies, the researchers sandwiched the DNA between a pair of electrodes, carefully controlled the electrical field and measured the ability of the modified DNA to conduct electricity. This was performed using a staple of nano-electronics, a scanning tunneling microscope, which acts like the tip of an electrode to complete a connection, being repeatedly pulled in and out of contact with the DNA molecules in the solution like a finger touching a water droplet.

“We found the electron transport mechanism in the present anthraquinone-DNA system favors electron “hopping” via anthraquinone and stacked DNA bases,” said Tao. In addition, they found they could reversibly control the conductance states to make the DNA switch on (high-conductance) or switch-off (low conductance). When anthraquinone has gained the most electrons (its most-reduced state), it is far more conductive, and the team finely mapped out a 3-D picture to account for how anthraquinone controlled the electrical state of the DNA.

For their next project, they hope to extend their studies to get one step closer toward making DNA nano-devices a reality.

“We are particularly excited that the engineered DNA provides a nice tool to examine redox reaction kinetics and thermodynamics at the single-molecule level,” said Tao.

I last featured Tao’s work with DNA in an April 20, 2015 posting.

Here’s a link to and a citation for the paper,

Gate-controlled conductance switching in DNA by Limin Xiang, Julio L. Palma, Yueqi Li, Vladimiro Mujica, Mark A. Ratner, & Nongjian Tao. Nature Communications 8, Article number: 14471 (2017) doi:10.1038/ncomms14471 Published online: 20 February 2017

This paper is open access.

CRISPR patent decision: Harvard’s and MIT’s Broad Institute victorious—for now

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene-editing technology were handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property rights for the CRISPR patent. The case started when the patent was first awarded to the Broad Institute, despite the University of California having applied first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

But the fight for patent rights to CRISPR technology is by no means over. Here are four reasons why.

1. Berkeley can appeal the ruling

2. European patents are still up for grabs

3. Other parties are also claiming patent rights on CRISPR–Cas9

4. CRISPR technology is moving beyond what the patents cover

As for Ledford’s third point, there are an estimated 763 patent families (groups of related patents) claiming Cas9, leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.

Once you’ve read Distor’s and Ledford’s articles, you may want to check out Adam Rogers’ and Eric Niiler’s Feb. 16, 2017 CRISPR patent article for Wired,

The fight over who owns the most promising technique for editing genes—cutting and pasting the stuff of life to cure disease and advance scientific knowledge—has been a rough one. A team on the West Coast, at UC Berkeley, filed patents on the method, Crispr-Cas9; a team on the East Coast, based at MIT and the Broad Institute, filed their own patents in 2014 after Berkeley’s, but got them granted first. The Berkeley group contended that this constituted “interference,” and that Berkeley deserved the patent.

At stake: millions, maybe billions of dollars in biotech money and licensing fees, the future of medicine, the future of bioscience. Not nothing. Who will benefit depends on who owns the patents.

On Wednesday [Feb. 15, 2017], the US Patent Trial and Appeal Board kind of, sort of, almost began to answer that question. Berkeley will get the patent for using the system called Crispr-Cas9 in any living cell, from bacteria to blue whales. Broad/MIT gets the patent in eukaryotic cells, which is to say, plants and animals.

It’s … confusing. “The patent that the Broad received is for the use of Crispr gene-editing technology in eukaryotic cells. The patent for the University of California is for all cells,” says Jennifer Doudna, the UC geneticist and co-founder of Caribou Biosciences who co-invented Crispr, on a conference call. Her metaphor: “They have a patent on green tennis balls; we have a patent for all tennis balls.”

Observers didn’t quite buy that topspin. If Caribou is playing tennis, it’s looking like Broad/MIT is Serena Williams.

“UC does not necessarily lose everything, but they’re no doubt spinning the story,” says Robert Cook-Deegan, an expert in genetic policy at Arizona State University’s School for the Future of Innovation in Society. “UC’s claims to eukaryotic uses of Crispr-Cas9 will not be granted in the form they sought. That’s a big deal, and UC was the big loser.”

UC officials said Wednesday [Feb. 15, 2017] that they are studying the 51-page decision and considering whether to appeal. That leaves members of the biotechnology sector wondering who they will have to pay to use Crispr as part of a business—and scientists hoping the outcome won’t somehow keep them from continuing their research.

….

Happy reading!

News from Arizona State University’s The Frankenstein Bicentennial Project

I received a September 2016 newsletter (issued occasionally) from The Frankenstein Bicentennial Project at Arizona State University (ASU) which contained these two tidbits:

I, Artist

Bobby Zokaites converted a Roomba, a robotic vacuum, from a room cleaning device to an art-maker by removing the dust collector and vacuuming system and replacing it with a paint reservoir. Artists have been playing with robots to make art since the 1950s. This work is an extension of a genre, repurposing a readily available commercial robot.

With this project, Bobby set out to create a self-portrait of a generation, one that grew up with access to a vast amount of information and constantly bombarded by advertisements. The Roomba paintings prove that a robot can paint a reasonably complex painting, and do it differently every time; thus this version of the Turing test was successful.

As in the story of Frankenstein, this work also interrogates questions of creativity and responsibility. Is this a truly creative work of art, and if so, who is the artist; man or machine?

Both the text description and the video are from: https://www.youtube.com/watch?v=0m5ihmwPWgY

Frankenstein at 200 Exhibit

From the September 2016 newsletter (Note: Links have been removed),

Just as the creature in Frankenstein [the monster is never named in the book; its creator, however, is Victor Frankenstein] was assembled from an assortment of materials, so too is the cultural understanding of the Frankenstein myth. Now a new, interdisciplinary exhibit at ASU Libraries examines how Mary Shelley’s 200-year-old science fiction story continues to inspire, educate, and frighten 21st century audiences.

Frankenstein at 200 is open now through December 10 on the first floor of ASU’s Hayden Library in Tempe, AZ.

Here’s more from the exhibit’s webpage on the ASU website,

No work of literature has done more to shape the way people imagine science and its moral consequences than “Frankenstein; or, The Modern Prometheus,” Mary Shelley’s enduring tale of creation and responsibility. The novel’s themes and tropes continue to resonate with contemporary audiences, influencing the way we confront emerging technologies, conceptualize the process of scientific research, and consider the ethical relationships between creators and their creations.

Two hundred years after Mary Shelley imagined the story that would become “Frankenstein,” ASU Libraries is exhibiting an interdisciplinary installation that contextualizes the conditions of the original tale while exploring its continued importance in our technological age. Featuring work by ASU faculty and students, this exhibition includes a variety of physical and digital artifacts, original art projects and interactive elements that examine “Frankenstein’s” colossal scientific, technological, cultural and social impacts.

About the Frankenstein Bicentennial Project: Launched by Drs. David Guston and Ed Finn in 2013, the Frankenstein Bicentennial Project is a global celebration of the bicentennial of the writing and publication of Mary Shelley’s Frankenstein, from 2016-2018. The project uses Frankenstein as a lens to examine the complex relationships between science, technology, ethics, and society. To learn more, visit frankenstein.asu.edu and follow @FrankensteinASU on Twitter.

There are more informational tidbits at The Frankenstein Bicentennial Project website.

A couple of Frankenstein dares from The Frankenstein Bicentennial project

Drat! I’ve gotten the information about the first Frankenstein dare (a short story challenge) a little late in the game since the deadline is 11:59 pm PDT on July 31, 2016. In any event, here’s more about the two dares,

And for those who like their information in written form, here are the details from Arizona State University’s (ASU) Frankenstein Bicentennial Dare (on The Frankenstein Bicentennial Project website),

Two centuries ago, on a dare to tell the best scary story, 19-year-old Mary Shelley imagined an idea that became the basis for Frankenstein. Mary’s original concept became the novel that arguably kick-started the genres of science fiction and Gothic horror, but also provided an enduring myth that shapes how we grapple with creativity, science, technology, and their consequences.
Two hundred years later, inspired by that classic dare, we’re challenging you to create new myths for the 21st century along with our partners National Novel Writing Month (NaNoWriMo), Chabot Space and Science Center, and Creative Nonfiction magazine.

FRANKENSTEIN 200

Presented by NaNoWriMo and the Chabot Space and Science Center

Frankenstein is a classic of Gothic literature – a gripping, tragic story about Victor Frankenstein’s failure to accept responsibility for the consequences of bringing new life into the world. In this dare, we’re challenging you to write a scary story that explores the relationship between creators and the “monsters” they create.

Almost anything that we create can become monstrous: a misinterpreted piece of architecture; a song whose meaning has been misappropriated; a big, but misunderstood idea; or, of course, an actual creature. And in Frankenstein, Shelley teaches us that monstrous does not always mean evil – in fact, creators can prove to be more destructive and inhuman than the things they bring into being.

Tell us your story in 1,000 – 1,800 words on Medium.com and use the hashtag #Frankenstein200. Read other #Frankenstein200 stories, and use the recommend button at the bottom of each post for the stories you like. Winners in the short fiction contest will receive personal feedback from Hugo and Sturgeon Award-winning science fiction and fantasy author Elizabeth Bear, as well as a curated selection of classic and contemporary science fiction books and  Frankenstein goodies, courtesy of the NaNoWriMo team.

Rules and Mechanics

  • There are no restrictions on content. Entry is limited to one submission per author. Submissions must be in English and between 1,000 and 1,800 words. You must follow all Medium Terms of Service, including the Rules.
  • All entries submitted and tagged as #Frankenstein200 and in compliance with the rules outlined here will be considered.
  • The deadline for submissions is 11:59 PM on July 31, 2016.
  • Three winners will be selected at random on August 1, 2016.
  • Each winner receives the prize package described above.
  • Additionally, one of the three winners, chosen at random, will receive written coaching/feedback from Elizabeth Bear on his or her entry.
  • Select stories will be featured on Frankenscape, a public geo-storytelling project hosted by ASU’s Frankenstein Bicentennial Project. Stories may also be featured in National Novel Writing Month communications and social media platforms.
  • U.S. residents only [emphasis mine]; void where prohibited by law. No purchase is necessary to enter or win.

Dangerous Creations: Real-life Frankenstein Stories

Presented by Creative Nonfiction magazine

Creative Nonfiction magazine is daring writers to write original and true stories that explore humans’ efforts to control and redirect nature, the evolving relationships between humanity and science/technology, and contemporary interpretations of monstrosity.

Essays must be vivid and dramatic; they should combine a strong and compelling narrative with an informative or reflective element and reach beyond a strictly personal experience for some universal or deeper meaning. We’re open to a broad range of interpretations of the “Frankenstein” theme, with the understanding that all works submitted must tell true stories and be factually accurate. Above all, we’re looking for well-written prose, rich with detail and a distinctive voice.

Creative Nonfiction editors and a judge (to be announced) will award $10,000 and publication for Best Essay and two $2,500 prizes and publication for runners-up. All essays submitted will be considered for publication in the winter 2018 issue of the magazine.

Deadline for submissions: March 20, 2017.
For complete guidelines: www.creativenonfiction.org/submissions

[Note: There is a submission fee for the nonfiction dare and no indication as to whether or not there are residency requirements.]

A July 27, 2016 email received from The Frankenstein Bicentennial Project (which is how I learned about the dares somewhat belatedly) has this about the first dare,

Planetary Design, Transhumanism, and Pork Products
Our #Frankenstein200 Contest Took Us in Some Unexpected Directions

Last month [June 2016], we partnered with National Novel Writing Month (NaNoWriMo) and The Chabot Space and Science Center to dare the world to create stories in the spirit of Mary Shelley’s Frankenstein, to celebrate the 200th anniversary of the novel’s conception.

We received a bevy of intriguing and sometimes frightening submissions that explore the complex relationships between creators and their “monsters.” Here are a few tales that caught our eye:

The Man Who Harnessed the Sun
By Sandra Knisely
Eliza has to choose between protecting the scientist who once gave her the world and punishing him for letting it all slip away. Read the story…

The Mortality Complex
By Brandon Miller
When the boogeyman of medical students reflects on life. Read the story…

Bacon Man
By Corey Pressman
A Frankenstein story in celebration of ASU’s Frankenstein Bicentennial Project. And bacon. Read the story… 

You can find the stories that have been submitted to date for the creative short story dare at Medium.com.

Good luck! And, don’t forget to tag your short story with #Frankenstein200 and submit it by July 31, 2016 (if you are a US resident). There’s still lots of time to enter a submission for a creative nonfiction piece.

Nanoparticles in baby formula

Needle-like particles of hydroxyapatite found in infant formula by ASU [Arizona State University] researchers. Westerhoff and Schoepf/ASU, CC BY-ND

Nanowerk is featuring an essay about hydroxyapatite nanoparticles in baby formula written by Dr. Andrew Maynard in a May 17, 2016 news item (Note: A link has been removed),

There’s a lot of stuff you’d expect to find in baby formula: proteins, carbs, vitamins, essential minerals. But parents probably wouldn’t anticipate finding extremely small, needle-like particles. Yet this is exactly what a team of scientists here at Arizona State University [ASU] recently discovered.

The research, commissioned and published by Friends of the Earth (FoE) – an environmental advocacy group – analyzed six commonly available off-the-shelf baby formulas (liquid and powder) and found nanometer-scale needle-like particles in three of them. The particles were made of hydroxyapatite – a poorly soluble calcium-rich mineral. Manufacturers use it to regulate acidity in some foods, and it’s also available as a dietary supplement.

Andrew’s May 17, 2016 essay first appeared on The Conversation website,

Looking at these particles at super-high magnification, it’s hard not to feel a little anxious about feeding them to a baby. They appear sharp and dangerous – not the sort of thing that has any place around infants. …

… questions like “should infants be ingesting them?” make a lot of sense. However, as is so often the case, the answers are not quite so straightforward.

Andrew begins by explaining about calcium and hydroxyapatite (from The Conversation),

Calcium is an essential part of a growing infant’s diet, and is a legally required component in formula. But not necessarily in the form of hydroxyapatite nanoparticles.

Hydroxyapatite is a tough, durable mineral. It’s naturally made in our bodies as an essential part of bones and teeth – it’s what makes them so strong. So it’s tempting to assume the substance is safe to eat. But just because our bones and teeth are made of the mineral doesn’t automatically make it safe to ingest outright.

The issue here is what the hydroxyapatite in formula might do before it’s digested, dissolved and reconstituted inside babies’ bodies. The size and shape of the particles ingested has a lot to do with how they behave within a living system.

He then discusses size and shape, which are important at the nanoscale,

Size and shape can make a difference between safe and unsafe when it comes to particles in our food. Small particles aren’t necessarily bad. But they can potentially get to parts of our body that larger ones can’t reach. Think through the gut wall, into the bloodstream, and into organs and cells. Ingested nanoscale particles may be able to interfere with cells – even beneficial gut microbes – in ways that larger particles don’t.

These possibilities don’t necessarily make nanoparticles harmful. Our bodies are pretty well adapted to handling naturally occurring nanoscale particles – you probably ate some last time you had burnt toast (carbon nanoparticles), or poorly washed vegetables (clay nanoparticles from the soil). And of course, how much of a material we’re exposed to is at least as important as how potentially hazardous it is.

Yet there’s a lot we still don’t know about the safety of intentionally engineered nanoparticles in food. Toxicologists have started paying close attention to such particles, just in case their tiny size makes them more harmful than otherwise expected.

Currently, hydroxyapatite is considered safe at the macroscale by the US Food and Drug Administration (FDA). However, the agency has indicated that nanoscale versions of safe materials such as hydroxyapatite may not be safe food additives. From Andrew’s May 17, 2016 essay,

Putting particle size to one side for a moment, hydroxyapatite is classified by the US Food and Drug Administration (FDA) as “Generally Regarded As Safe.” That means it considers the material safe for use in food products – at least in a non-nano form. However, the agency has raised concerns that nanoscale versions of food ingredients may not be as safe as their larger counterparts.

Some manufacturers may be interested in the potential benefits of “nanosizing” – such as increasing the uptake of vitamins and minerals, or altering the physical, textural and sensory properties of foods. But because decreasing particle size may also affect product safety, the FDA indicates that intentionally nanosizing already regulated food ingredients could require regulatory reevaluation.

In other words, even though non-nanoscale hydroxyapatite is “Generally Regarded As Safe,” according to the FDA, the safety of any nanoscale form of the substance would need to be reevaluated before being added to food products.

Despite this size-safety relationship, the FDA confirmed to me that the agency is unaware of any food substance intentionally engineered at the nanoscale that has enough generally available safety data to determine it should be “Generally Regarded As Safe.”

Casting further uncertainty on the use of nanoscale hydroxyapatite in food, a 2015 report from the European Scientific Committee on Consumer Safety (SCCS) suggests there may be some cause for concern when it comes to this particular nanomaterial.

Prompted by the use of nanoscale hydroxyapatite in dental products to strengthen teeth (which they consider “cosmetic products”), the SCCS reviewed published research on the material’s potential to cause harm. Their conclusion?

The available information indicates that nano-hydroxyapatite in needle-shaped form is of concern in relation to potential toxicity. Therefore, needle-shaped nano-hydroxyapatite should not be used in cosmetic products.

This recommendation was based on a handful of studies, none of which involved exposing people to the substance. Researchers injected hydroxyapatite needles directly into the bloodstream of rats. Others exposed cells outside the body to the material and observed the effects. In each case, there were tantalizing hints that the small particles interfered in some way with normal biological functions. But the results were insufficient to indicate whether the effects were meaningful in people.

As Andrew also notes in his essay, none of the studies examined by the SCCS (European Scientific Committee on Consumer Safety) looked at what happens to nano-hydroxyapatite once it enters your gut, and that is what the researchers at Arizona State University were considering (from the May 17, 2016 essay),

The good news is that, according to preliminary studies from ASU researchers, hydroxyapatite needles don’t last long in the digestive system.

This research is still being reviewed for publication. But early indications are that as soon as the needle-like nanoparticles hit the highly acidic fluid in the stomach, they begin to dissolve. So fast in fact, that by the time they leave the stomach – an exceedingly hostile environment – they are no longer the nanoparticles they started out as.

These findings make sense since we know hydroxyapatite dissolves in acids, and small particles typically dissolve faster than larger ones. So maybe nanoscale hydroxyapatite needles in food are safer than they sound.
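
[A quick aside from me on the size–dissolution point: treating the particles as spheres for simplicity, the surface-to-volume ratio is 3/r, so shrinking a particle from roughly 10 micrometres to roughly 10 nanometres gives it on the order of a thousand times more surface area per unit of material for the stomach acid to attack. The short Python sketch below just runs that arithmetic; the particle sizes are illustrative assumptions, not measurements from the ASU work.]

```python
# Back-of-the-envelope illustration (not data from the ASU study): dissolution
# speed scales roughly with surface area per unit volume, and for a sphere that
# ratio is 3/r, so smaller particles expose far more surface and dissolve faster.
def surface_to_volume_ratio(radius_m: float) -> float:
    """Surface area divided by volume for a sphere of the given radius, in 1/m."""
    return 3.0 / radius_m

nano_needle = surface_to_volume_ratio(10e-9)   # ~10 nm particle (assumed size)
micro_grain = surface_to_volume_ratio(10e-6)   # ~10 µm particle (assumed size)

print(f"10 nm particle: {nano_needle:.1e} per metre")
print(f"10 µm particle: {micro_grain:.1e} per metre")
print(f"Relative surface advantage: {nano_needle / micro_grain:.0f}x")
```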

This doesn’t mean that the nano-needles are completely off the hook, as some of them may get past the stomach intact and reach more vulnerable parts of the gut. But the findings do suggest these ultra-small needle-like particles could be an effective source of dietary calcium – possibly more so than larger or less needle-like particles that may not dissolve as quickly.

Intriguingly, recent research has indicated that calcium phosphate nanoparticles form naturally in our stomachs and go on to be an important part of our immune system. It’s possible that rapidly dissolving hydroxyapatite nano-needles are actually a boon, providing raw material for these natural and essential nanoparticles.

While it’s comforting to know that preliminary research suggests the hydroxyapatite nanoparticles are likely safe for use in food products, Andrew points out that more needs to be done to ensure safety (from the May 17, 2016 essay),

And yet, even if these needle-like hydroxyapatite nanoparticles in infant formula are ultimately a good thing, the FoE report raises a number of unresolved questions. Did the manufacturers knowingly add the nanoparticles to their products? How are they and the FDA ensuring the products’ safety? Do consumers have a right to know when they’re feeding their babies nanoparticles?

Whether the manufacturers knowingly added these particles to their formula is not clear. At this point, it’s not even clear why they might have been added, as hydroxyapatite does not appear to be a substantial source of calcium in most formula. …

And regardless of the benefits and risks of nanoparticles in infant formula, parents have a right to know what’s in the products they’re feeding their children. In Europe, food ingredients must be legally labeled if they are nanoscale. In the U.S., there is no such requirement, leaving American parents to feel somewhat left in the dark by producers, the FDA and policy makers.

As far as I’m aware, the Canadian situation is much the same as the US. If the material is considered safe at the macroscale, there is no requirement to indicate that a nanoscale version of the material is in the product.

I encourage you to read Andrew’s essay in its entirety. As for the FoE report (Nanoparticles in baby formula: Tiny new ingredients are a big concern), that is here.

Frankenstein and Switzerland in 2016

The Frankenstein Bicentennial celebration is in process as various events and projects are now being launched. In a Nov. 12, 2015 posting I made mention of the Frankenstein Bicentennial Project 1818-2018 at Arizona State University (ASU; scroll down about 15% of the way),

… the Transmedia Museum (Frankenstein Bicentennial Project 1818-2018).  This project is being hosted by Arizona State University. From the project homepage,

No work of literature has done more to shape the way people imagine science and its moral consequences than Frankenstein; or The Modern Prometheus, Mary Shelley’s enduring tale of creation and responsibility. The novel’s themes and tropes—such as the complex dynamic between creator and creation—continue to resonate with contemporary audiences. Frankenstein continues to influence the way we confront emerging technologies, conceptualize the process of scientific research, imagine the motivations and ethical struggles of scientists, and weigh the benefits of innovation with its unforeseen pitfalls.

The Frankenstein Bicentennial Project will infuse science and engineering endeavors with considerations of ethics. It will use the power of storytelling and art to shape processes of innovation and empower public appraisal of techno-scientific research and creation. It will offer humanists and artists a new set of concerns around research, public policy, and the ramifications of exploration and invention. And it will inspire new scientific and technological advances inspired by Shelley’s exploration of our inspiring and terrifying ability to bring new life into the world. Frankenstein represents a landmark fusion of science, ethics, and literary expression.

The bicentennial provides an opportunity for vivid reflection on how science is culturally framed and understood by the public, as well as our ethical limitations and responsibility for nurturing the products of our creativity. It is also a moment to unveil new scientific and technological marvels, especially in the areas of synthetic biology and artificial intelligence. Engaging with Frankenstein allows scholars and educators, artists and writers, and the public at large to consider the history of scientific invention, reflect on contemporary research, and question the future of our technological society. Acting as a network hub for the bicentennial celebration, ASU will encourage and coordinate collaboration across institutions and among diverse groups worldwide.

2016 Frankenstein events

Now, there’s an exhibition in Switzerland where Frankenstein was ‘born’ according to a May 12, 2016 news item on phys.org,

Frankenstein, the story of a scientist who brings to life a cadaver and causes his own downfall, has for two centuries given voice to anxiety surrounding the unrelenting advance of science.

To mark the 200 years since England’s Mary Shelley first imagined the ultimate horror story during a visit to a frigid, rain-drenched Switzerland, an exhibit opens in Geneva Friday called “Frankenstein, Creation of Darkness”.

In the dimly-lit, expansive basement at the Martin Bodmer Foundation, a long row of glass cases holds 15 hand-written, yellowed pages from a notebook where Shelley in 1816 wrote the first version of what is considered a masterpiece of romantic literature.

The idea for her “miserable monster” came when at just 18 she and her future husband, English poet Percy Bysshe Shelley, went to a summer home—the Villa Diodati—rented by literary great Lord Byron on the outskirts of Geneva.

The current private owners of the picturesque manor overlooking Lake Geneva will also open their lush gardens to guided tours during the nearby exhibit which runs to October 9 [May 13 – Oct. 9, 2016].

While the spot today is lovely, with pink and purple lilacs spilling from the terraces and gravel walkways winding through rose-covered arches, in the summer of 1816 the atmosphere was more somber.

A massive eruption from the Tambora volcano in Indonesia wreaked havoc with the global climate that year, and a weather report for Geneva in June on display at the exhibit mentions “not a single leaf” had yet appeared on the oak trees.

To pass the time, poet Lord Byron challenged the band of literary bohemians gathered at the villa to each invent a ghost story, resulting in several famous pieces of writing.

English doctor and author John Polidori came up with the idea for “The Vampyre”, which was published three years later and is considered to have pioneered the romantic vampyre genre, including works like Bram Stoker’s “Dracula”.

That book figures among a multitude of first editions at the Geneva exhibit, including three of Mary Shelley’s “Frankenstein, or the Modern Prometheus”—the most famous story to emerge from the competition.

Here’s a description of the exhibit, from the Martin Bodmer Foundation’s Frankenstein webpage,

To celebrate the 200th anniversary of the writing of this historically influential work of literature, the Martin Bodmer Foundation presents a major exhibition on the origins of Frankenstein, the perspectives it opens and the questions it raises.

A best seller since its first publication in 1818, Mary Shelley’s novel continues to demand attention. The questions it raises remain at the heart of literary and philosophical concerns: the ethics of science, climate change, the technologisation of the human body, the unconscious, human otherness, the plight of the homeless and the dispossessed.

The exhibition Frankenstein: Creation of Darkness recreates the beginnings of the novel in its first manuscript and printed forms, along with paintings and engravings that evoke the world of 1816. A variety of literary and scientific works are presented as sources of the novel’s ideas. While exploring the novel’s origins, the exhibition also evokes the social and scientific themes of the novel that remain important in our own day.

For what it’s worth, I have come across analyses which suggest science and technology may not have been the primary concerns at the time. There are interpretations which suggest issues around childbirth (very dangerous until modern times) and fear of disfigurement and disfigured individuals. What makes Frankenstein (both the creature and the book) so fascinating is how flexible interpretations can be. (For more about Frankenstein and flexibility, read Susan Tyler Hitchcock’s 2009 book, Frankenstein: a cultural history.)

There’s one more upcoming Frankenstein event, from The Frankenstein Bicentennial announcement webpage,

On June 14 and 15, 2016, the Brocher Foundation, Arizona State University, Duke University, and the University of Lausanne will host “Frankenstein’s Shadow,” a symposium in Geneva, Switzerland to commemorate the origin of Frankenstein and assess its influence in different times and cultures, particularly its resonance in debates about public policy governing biotechnology and medicine. These dates place the symposium almost exactly 200 years after Mary Shelley initially conceived the idea for Frankenstein on June 16, 1816, and in almost exactly the same geographical location on the shores of Lake Geneva.

If you’re interested in details such as the programme schedule, there’s this PDF,

Frankenstein’s_ShadowConference

Enjoy!

NISE Net, the acronym remains the same but the name changes

NISE Net, the US Nanoscale Informal Science Education Network, is winding down the nano and refocussing on STEM (science, technology, engineering, and mathematics). In short, NISE Net will now stand for National Informal STEM Education Network. Here’s more from the Jan. 7, 2016 NISE Net announcement in the January 2016 issue of the Nano Bite,

COMMUNITY NEWS

NISE Network is Transitioning to the National Informal STEM Education Network

Thank you for all the great work you have done over the past decade. It has opened up totally new possibilities for the decade ahead.

We are excited to let you know that with the completion of NSF funding for the Nanoscale Informal Science Education Network, and the soon-to-be-announced NASA [US National Aeronautics and Space Administration]-funded Space and Earth Informal STEM Education project, the NISE Network is transitioning to a new, ongoing identity as the National Informal STEM Education Network! While we’ll still be known as the NISE Net, network partners will now engage audiences across the United States in a range of STEM topics. Several new projects are already underway and others are in discussion for the future.

Current NISE Net projects include:

  • The original Nanoscale Informal Science Education Network (NISE Net), focusing on nanoscale science, engineering, and technology (funded by NSF and led by the Museum of Science, Boston)
  • Building with Biology, focusing on synthetic biology (funded by NSF and led by the Museum of Science with AAAS [American Association for the Advancement of Science], BioBuilder, and SynBerc [emphases mine])
  • Sustainability in Science Museums (funded by Walton Sustainability Solutions Initiatives and led by Arizona State University)
  • Transmedia Museum, focusing on science and society issues raised by Mary Shelley’s Frankenstein (funded by NSF and led by Arizona State University)
  • Space and Earth Informal STEM Education (funded by NASA and led by the Science Museum of Minnesota)

The “new” NISE Net will be led by the Science Museum of Minnesota in collaboration with the Museum of Science and Arizona State University. Network leadership, infrastructure, and participating organizations will include existing Network partners, and others attracted to the new topics. We will be in touch through the newsletter, blog, and website in the coming months to share more about our plans for the Network and its projects.

In the mean time, work is continuing with partners within the Nanoscale Informal Science Education Network throughout 2016, with an award end date of February 28, 2017. Although there will not be a new NanoDays 2016 kit, we encourage our partners to continue to engage audiences in nano by hosting NanoDays events in 2016 (March 26 – April 3) and in the years ahead using their existing kit materials. The Network will continue to host and update nisenet.org and the online catalog that includes 627 products of which 366 are NISE Net products (public and professional), 261 are Linked products, and 55 are Evaluation and Research reports. The Evaluation and Research team is continuing to work on final Network reports, and the Museum and Community Partnerships project has awarded 100 Explore Science physical kits to partners to create new or expanded collaborations with local community organizations to reach new underserved audiences not currently engaged in nano. These collaborative projects are taking place spring-summer 2016.

Thank you again for making this possible through your great work.

Best regards,

Larry Bell, Museum of Science
Paul Martin, Science Museum of Minnesota and
Rae Ostman, Arizona State University

As noted in previous posts, I’m quite interested in the synthetic biology focus the network has developed since late spring 2015, and the mention of two (new-to-me) organizations, BioBuilder and Synberc, piqued my curiosity.

I found this on the About the foundation page of the BioBuilder website,

What’s the best way to solve today’s health problems? Or hunger challenges? Address climate change concerns? Or keep the environment cleaner? These are big questions. And everyone can be part of the solutions. Everyone. Middle school students, teens, high school teachers.

At BioBuilder, we teach problem solving.
We bring current science to the classroom.
We engage our students to become real scientists — the problem solvers who will change the world.
At BioBuilder, we empower educators to be agents of educational reform by reconnecting teachers all across the country with their love of teaching and their own love of learning.

Synthetic biology programs living cells to tackle today’s challenges. Biofuels, safer foods, anti-malarial drugs, less toxic cancer treatment, biodegradable adhesives — all fuel young students’ imaginations. At BioBuilder, we empower students to tackle these big questions. BioBuilder’s curricula and teacher training capitalize on students’ need to know, to explore and to be part of solving real world problems. Developed by an award winning team out of MIT [Massachusetts Institute of Technology], BioBuilder is taught in schools across the country and supported by thought leaders in the STEM community.

BioBuilder proves that learning by doing works. And inspires.

As for Synberc, it is the Synthetic Biology Engineering Research Center, and it has this to say about itself on its About us page (Note: Links have been removed),

Synberc is a multi-university research center established in 2006 with a grant from the National Science Foundation (NSF) to help lay the foundation for synthetic biology. Our mission is threefold:

  • develop the foundational understanding and technologies to build biological components and assemble them into integrated systems to accomplish many particular tasks;
  • train a new cadre of engineers who will specialize in engineering biology; and
  • engage the public about the opportunities and challenges of engineering biology.

Just as electrical engineers have made it possible for us to assemble computers from standardized parts (hard drives, memory cards, motherboards, and so on), we envision a day when biological engineers will be able to systematically assemble biological components such as sensors, signals, pathways, and logic gates in order to build bio-based systems that solve real-world problems in health, energy, and the environment.

In our work, we apply engineering principles to biology to develop tools that improve how fast — and how well — we can go through the design-test-build cycle. These include smart fermentation organisms that can sense their environment and adjust accordingly, and multiplex automated genome engineering, or MAGE, designed for large-scale programming and evolution of cells. We also pursue the discovery of applications that can lead to significant public benefit, such as synthetic artemisinin [emphasis mine], an anti-malaria drug that costs less and is more effective than the current plant-derived treatment.

The reference to ‘synthetic artemisinin’ caught my eye, as I wrote an April 12, 2013 posting featuring this “… anti-malaria drug …”, and the claim that the synthetic version “… costs less and is more effective than the current plant-derived treatment” wasn’t quite the conclusion journalist Brendan Borrell arrived at. Perhaps there’s been new research since? If so, please let me know.

Managing risks in a world of converging technology (the fourth industrial revolution)

Finally, there’s an answer to the question: What (!!!) is the fourth industrial revolution? (I took a guess [wrongish] in my Nov. 20, 2015 post about a special presentation at the 2016 World Economic Forum’s IdeasLab.)

Andrew Maynard in a Dec. 3, 2015 think piece (also called a ‘thesis’) for Nature Nanotechnology answers the question,

… an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and … is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.)

In anticipation of the 2016 World Economic Forum (WEF), which has the fourth industrial revolution as its theme, Andrew explains how he sees the situation we are sliding into (from Andrew Maynard’s think piece),

As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties.

There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decision-making and responsive governance.

He also lists some recommendations,

Fostering effective multi-stakeholder dialogues.

Encouraging actionable empathy.

Providing educational opportunities for current and future stakeholders.

Developing next-generation foresight capabilities.

Transforming approaches to risk.

Investing in public–private partnerships.

Andrew concludes with this,

… The good news is that, in fields such as nanotechnology and synthetic biology, we have already begun to develop the skills to do this — albeit in a small way. We now need to learn how to scale up our efforts, so that our convergence in working together to build a better future mirrors the convergence of the technologies that will help achieve this.

It’s always a pleasure to read Andrew’s work as it’s thoughtful. I was surprised (since Andrew is a physicist by training) and happy to see the recommendation for “actionable empathy.”

Although I don’t always agree with him, on this occasion I don’t have any particular disagreements. That said, I think including a recommendation or two to cover the certainty that we will get something wrong and will have to work quickly to put things right would be a good idea. I’m thinking primarily of governments, which are notoriously slow to respond to new developments with legislation and equally slow to change that legislation when the situation changes.

The technological environment Andrew is describing is dynamic, that is, fast-moving and changing at a pace we have yet to properly conceptualize. Governments will need to change so they can respond in an agile fashion. My suggestion is:

Develop policy task forces that can be convened in hours and given the authority to respond to an immediate situation, with oversight after the fact.

Getting back to Andrew Maynard, you can find his think piece in its entirety via this link and citation,

Navigating the fourth industrial revolution by Andrew D. Maynard. Nature Nanotechnology 10, 1005–1006 (2015) doi:10.1038/nnano.2015.286 Published online 03 December 2015

This paper is behind a paywall.