Tag Archives: Arizona State University

2017 S.NET annual meeting early bird registration open until July 14, 2017

The Society for the Study of New and Emerging Technologies (S.NET), which at one time was known as the Society for the Study of Nano and other Emerging Technologies, is holding its 2017 annual meeting in Arizona, US. Here’s more from a July 4, 2017 S.NET notice (received via email),

We have an exciting schedule planned for our 2017 meeting in Phoenix, Arizona. Our confirmed plenary speakers (Professors Langdon Winner, Alfred Nordmann and Ulrike Felt) and a diverse host of researchers from across the planet promise to make this conference intellectually engaging, as well as exciting.

If you haven’t already, make sure to register for the conference and the dinner. THE DEADLINE HAS BEEN MOVED BACK TO JULY 14, 2017.

I tried to find more information about the meeting and discovered the meeting theme here in the February 2017 S.NET Newsletter,

October 9-11, 2017, Arizona State University, Tempe (USA)

Conference Theme: Engaging the Flux

Even the most seemingly stable entities fluctuate over time. Facts and artifacts, cultures and constitutions, people and planets. As the new and the old act, interact and intra-act within broader systems of time, space and meaning, we observe—and necessarily engage with—the constantly changing forms of socio-technological orders. As scholars and practitioners of new and emerging sciences and technologies, we are constantly tracking these moving targets, and often from within them. As technologists and researchers, we are also acutely aware that our research activities can influence the developmental trajectories of our objects of concern and study, as well as ourselves, our colleagues and the governance structures in which we live and work.

“Engaging the Flux” captures this sense that ubiquitous change is all about us, operative at all observable scales. “Flux” points to the perishability of apparently natural orders, as well as apparently stable technosocial orders. In embracing flux as its theme, the 2017 conference encourages participants to examine what the widely acknowledged acceleration of change reverberating across the planet means for the production of the technosciences, the social studies of knowledge production, art practices that engage technosciences and public deliberations about the societal significance of these practices in the contemporary moment.

This year’s conference theme aims to encourage us to examine the ways we—as scholars, scientists, artists, experts, citizens—have and have not taken into account the myriad modulations flowing and failing to flow from our engagements with our objects of study. The theme also invites us to anticipate how the conditions that partially structure these engagements may themselves be changing.

Our goal is to draw a rich range of examinations of flux and its implications for technoscientific and technocultural practices, broadly construed. Questions of specific interest include: Given the pervasiveness of political, ecological and technological fluctuations, what are the most socially responsible roles for experts, particularly in the context of policymaking? What would it mean to not merely accept perishability, but to lean into it, to positively embrace the going under of technological systems? What value can imaginaries offer in developing navigational capacities in periods of accelerated change? How can young and junior researchers (in social sciences, natural sciences, humanities or engineering) position themselves for meaningful, rewarding careers given the complementary uncertainties? How can the growing body of research straddling art and science communities help us make sense of flux and chart a course through it? What types of recalibrations are called for in order to speak effectively to diverse, and increasingly divergent, publics about the value of knowledge production and scientific rigor?

There are a few more details about the conference here on the S.NET 2017 meeting registration page,

The 2017 S.NET conference is held in Phoenix, Arizona (USA) and hosted by Arizona State University. This year’s meeting will provide a forum for scholarly engagement and reflection on the meaning of coupled socio-technical change as a contemporary political phenomenon, a recurrent historical theme, and an object of future anticipation.

HOTEL BLOCK - the new Marriott in downtown Phoenix has reserved rooms at $139 (single) or $159 (double bed). Please use the link on the S.Net home page to book your room.

REGISTRATION for non-students:
Early bird pricing is available until Saturday, July 14, 2017.
Registration increases to $220 starting Sunday, July 15, 2017.
Registrant types:

  • Faculty/Postdoc/private industry/gov employee ($175)
  • Student – submitting abstract or poster ($50)
  • Student – not submitting abstract or poster ($100)

There you have it.

Patent Politics: a June 23, 2017 book launch at the Wilson Center (Washington, DC)

I received a June 12, 2017 notice (via email) from the Wilson Center (also known as the Woodrow Wilson International Center for Scholars) about a book examining patents and policies in the United States and in Europe and its upcoming launch,

Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe

Over the past thirty years, the world’s patent systems have experienced pressure from civil society like never before. From farmers to patient advocates, new voices are arguing that patents impact public health, economic inequality, morality—and democracy. These challenges, to domains that we usually consider technical and legal, may seem surprising. But in Patent Politics, Shobita Parthasarathy argues that patent systems have always been deeply political and social.

To demonstrate this, Parthasarathy takes readers through a particularly fierce and prolonged set of controversies over patents on life forms linked to important advances in biology and agriculture and potentially life-saving medicines. Comparing battles over patents on animals, human embryonic stem cells, human genes, and plants in the United States and Europe, she shows how political culture, ideology, and history shape patent system politics. Clashes over whose voices and which values matter in the patent system, as well as what counts as knowledge and whose expertise is important, look quite different in these two places. And through these debates, the United States and Europe are developing very different approaches to patent and innovation governance. Not just the first comprehensive look at the controversies swirling around biotechnology patents, Patent Politics is also the first in-depth analysis of the political underpinnings and implications of modern patent systems, and provides a timely analysis of how we can reform these systems around the world to maximize the public interest.

Join us on June 23 [2017] from 4-6 pm [elsewhere the time is listed as 4-7 pm] for a discussion on the role of the patent system in governing emerging technologies, on the launch of Shobita Parthasarathy’s Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017).

You can find more information such as this on the Patent Politics event page,

Speakers

Keynote


  • Shobita Parthasarathy

    Fellow
    Associate Professor of Public Policy and Women’s Studies, and Director of the Science, Technology, and Public Policy Program, at University of Michigan

Moderator


  • Eleonore Pauwels

    Senior Program Associate and Director of Biology Collectives, Science and Technology Innovation Program
    Formerly European Commission, Directorate-General for Research and Technological Development, Directorate on Science, Economy and Society

Panelists


  • Daniel Sarewitz

Co-Director, Consortium for Science, Policy & Outcomes; Professor of Science and Society, School for the Future of Innovation in Society

  • Richard Harris

Award-Winning Journalist, National Public Radio; Author of “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions”

For those who cannot attend in person, there will be a live webcast. If you can be there in person, you can RSVP here. (Note: The time frame for the event is listed in some places as 4-7 pm.) I cannot find any reason for the time frame disparity. My best guess is that the discussion is scheduled for two hours with a one-hour reception afterwards for those who can attend in person.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNN creations could in theory produce an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.
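The error-correction loop the news release describes (compare actual outputs to expected ones, then adjust through repetition and optimization) can be sketched with a single artificial neuron. The toy Python example below is my illustration, not anything from Deltorn's paper; it trains one neuron to reproduce a simple AND pattern:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: inputs and expected outputs (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, to be learned
rate = 0.5                 # learning rate

for _ in range(5000):      # "repetition" ...
    for (x1, x2), expected in data:
        actual = sigmoid(w1 * x1 + w2 * x2 + b)
        error = expected - actual              # compare actual to expected
        delta = error * actual * (1 - actual)  # ... and "optimization"
        w1 += rate * delta * x1
        w2 += rate * delta * x2
        b += rate * delta

for (x1, x2), expected in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

Deep networks stack many such units in layers, so that (as the release puts it) deeper layers reach higher levels of abstraction, but the compare-and-correct cycle is the same.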

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

A DNA switch for new electronic applications

I little dreamed when reading “The Double Helix: A Personal Account of the Discovery of the Structure of DNA” by James Watson that DNA (deoxyribonucleic acid) would one day become just another material for scientists to manipulate. A Feb. 20, 2017 news item on ScienceDaily describes the use of DNA as a material in electronics applications,

DNA, the stuff of life, may very well also pack quite the jolt for engineers trying to advance the development of tiny, low-cost electronic devices.

Much like flipping your light switch at home — only on a scale 1,000 times smaller than a human hair — an ASU [Arizona State University]-led team has now developed the first controllable DNA switch to regulate the flow of electricity within a single, atomic-sized molecule. The new study, led by ASU Biodesign Institute researcher Nongjian Tao, was published online in the journal Nature Communications.

Caption: DNA, the stuff of life, may very well also pack quite the jolt for engineers trying to advance the development of tiny, low-cost electronic devices. Courtesy: ASU

A Feb. 20, 2017 ASU news release (also on EurekAlert), which originated the news item, provides more detail,

“It has been established that charge transport is possible in DNA, but for a useful device, one wants to be able to turn the charge transport on and off. We achieved this goal by chemically modifying DNA,” said Tao, who directs the Biodesign Center for Bioelectronics and Biosensors and is a professor in the Fulton Schools of Engineering. “Not only that, but we can also adapt the modified DNA as a probe to measure reactions at the single-molecule level. This provides a unique way for studying important reactions implicated in disease, or photosynthesis reactions for novel renewable energy applications.”

Engineers often think of electricity like water, and the research team’s new DNA switch acts to control the flow of electrons on and off, just like water coming out of a faucet.

Previously, Tao’s research group had made several discoveries to understand and manipulate DNA to more finely tune the flow of electricity through it. They found they could make DNA behave in different ways — and could cajole electrons to flow like waves according to quantum mechanics, or “hop” like rabbits in the way electricity in a copper wire works —creating an exciting new avenue for DNA-based, nano-electronic applications.

Tao assembled a multidisciplinary team for the project, including ASU postdoctoral student Limin Xiang and Yueqi Li performing bench experiments, Julio Palma working on the theoretical framework, with further help and oversight from collaborators Vladimiro Mujica (ASU) and Mark Ratner (Northwestern University).

To accomplish their engineering feat, Tao’s group modified just one of DNA’s iconic double helix chemical letters, abbreviated as A, C, T or G, with another chemical group, called anthraquinone (Aq). Anthraquinone is a three-ringed carbon structure that can be inserted in between DNA base pairs but contains what chemists call a redox group (redox is short for reduction-oxidation: gaining or losing electrons).

These chemical groups are also the foundation for how our bodies convert chemical energy through switches that send all of the electrical pulses in our brains and hearts, and that communicate signals within every cell, signals that may be implicated in the most prevalent diseases.

The modified Aq-DNA helix could now perform the switch, with the anthraquinone slipping comfortably in between the rungs that make up the ladder of the DNA helix and bestowing on it a newfound ability to reversibly gain or lose electrons.

Through their studies, when they sandwiched the DNA between a pair of electrodes, they carefully controlled the electrical field and measured the ability of the modified DNA to conduct electricity. This was performed using a staple of nano-electronics, a scanning tunneling microscope, which acts like the tip of an electrode to complete a connection, being repeatedly pulled in and out of contact with the DNA molecules in the solution like a finger touching a water droplet.

“We found the electron transport mechanism in the present anthraquinone-DNA system favors electron ‘hopping’ via anthraquinone and stacked DNA bases,” said Tao. In addition, they found they could reversibly control the conductance states to make the DNA switch on (high conductance) or off (low conductance). When anthraquinone has gained the most electrons (its most-reduced state), it is far more conductive, and the team finely mapped out a 3-D picture to account for how anthraquinone controlled the electrical state of the DNA.

For their next project, they hope to extend their studies to get one step closer toward making DNA nano-devices a reality.

“We are particularly excited that the engineered DNA provides a nice tool to examine redox reaction kinetics and thermodynamics at the single-molecule level,” said Tao.
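The on/off behavior described above can be caricatured as a two-state model: conductance interpolates between a low "off" value and a high "on" value depending on how much of the anthraquinone sits in its reduced (conductive) state at a given gate voltage. The Python sketch below is purely illustrative; the numbers and the Fermi-like occupancy function are my assumptions, not values from the paper:

```python
import math

def aq_dna_conductance(gate_v, e_redox=-0.4, kT=0.025,
                       g_on=1e-8, g_off=1e-10):
    """Toy gate-controlled conductance of an anthraquinone-modified DNA
    junction. All parameter values (volts, siemens) are hypothetical.

    Below e_redox the anthraquinone is mostly reduced (high-conductance
    'on' state); above it, mostly oxidized ('off' state).
    """
    # Fermi-like occupancy of the reduced (conductive) state.
    reduced_fraction = 1.0 / (1.0 + math.exp((gate_v - e_redox) / kT))
    return g_off + (g_on - g_off) * reduced_fraction

# Sweeping the gate toggles the switch between the two states.
for v in (-0.8, -0.4, 0.0):
    print(f"{v:+.1f} V -> {aq_dna_conductance(v):.2e} S")
```

In this caricature the gate voltage plays the role of the controlled electrical field in the experiment, flipping the molecule between its high- and low-conductance states.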

Here’s a link to and a citation for the paper,

Gate-controlled conductance switching in DNA by Limin Xiang, Julio L. Palma, Yueqi Li, Vladimiro Mujica, Mark A. Ratner, & Nongjian Tao. Nature Communications 8, Article number: 14471 (2017) doi:10.1038/ncomms14471 Published online: 20 February 2017

I last featured Tao’s work with DNA in an April 20, 2015 posting.

This paper is open access.

Essays on Frankenstein

Slate.com is dedicating a month (January 2017) to Frankenstein. This means there will be one or more essays each week on one aspect or another of Frankenstein and science. These essays are one of a series of initiatives jointly supported by Slate, Arizona State University, and an organization known as New America. It gets confusing since these essays are listed as part of two initiatives: Futurography and Future Tense.

The really odd part, as far as I’m concerned, is that there is no mention of Arizona State University’s (ASU) The Frankenstein Bicentennial Project (mentioned in my Oct. 26, 2016 posting). Perhaps they’re concerned that people will think ASU is advertising the project?

Introductions

Getting back to the essays, a Jan. 3, 2017 article by Jacob Brogan explains, by means of a ‘Question and Answer’ format, why the book and the monster maintain popular interest after two centuries (Note: We never do find out who or how many people are supplying the answers),

OK, fine. I get that this book is important, but why are we talking about it in a series about emerging technology?

Though people still tend to weaponize it as a simple anti-scientific screed, Frankenstein, which was first published in 1818, is much richer when we read it as a complex dialogue about our relationship to innovation—both our desire for it and our fear of the changes it brings. Mary Shelley was just a teenager when she began to compose Frankenstein, but she was already grappling with our complex relationship to new forces. Almost two centuries on, the book is just as propulsive and compelling as it was when it was first published. That’s partly because it’s so thick with ambiguity—and so resistant to easy interpretation.

Is it really ambiguous? I mean, when someone calls something frankenfood, they aren’t calling it “ethically ambiguous food.”

It’s a fair point. For decades, Frankenstein has been central to discussions in and about bioethics. Perhaps most notably, it frequently crops up as a reference point in discussions of genetically modified organisms, where the prefix Franken- functions as a sort of convenient shorthand for human attempts to meddle with the natural order. Today, the most prominent flashpoint for those anxieties is probably the clustered regularly interspaced short palindromic repeats, or CRISPR, gene-editing technique [emphasis mine]. But it’s really oversimplifying to suggest Frankenstein is a cautionary tale about monkeying with life.

As we’ll see throughout this month on Futurography, it’s become a lens for looking at the unintended consequences of things like synthetic biology, animal experimentation, artificial intelligence, and maybe even social networking. Facebook, for example, has arguably taken on a life of its own, as its algorithms seem to influence the course of elections. Mark Zuckerberg, who’s sometimes been known to disavow the power of his own platform, might well be understood as a Frankensteinian figure, amplifying his creation’s monstrosity by neglecting its practical needs.

But this book is almost 200 years old! Surely the actual science in it is bad.

Shelley herself would probably be the first to admit that the science in the novel isn’t all that accurate. Early in the novel, Victor Frankenstein meets with a professor who castigates him for having read the wrong works of “natural philosophy.” Shelley’s protagonist has mostly been studying alchemical tomes and otherwise fantastical works, the sort of things that were recognized as pseudoscience, even by the standards of the day. Near the start of the novel, Frankenstein attends a lecture in which the professor declaims on the promise of modern science. He observes that where the old masters “promised impossibilities and performed nothing,” the new scientists achieve far more in part because they “promise very little; they know that metals cannot be transmuted and that the elixir of life is a chimera.”

Is it actually about bad science, though?

Not exactly, but it has been read as a story about bad scientists.

Ultimately, Frankenstein outstrips his own teachers, of course, and pulls off the very feats they derided as mere fantasy. But Shelley never seems to confuse fact and fiction, and, in fact, she largely elides any explanation of how Frankenstein pulls off the miraculous feat of animating dead tissue. We never actually get a scene of the doctor awakening his creature. The novel spends far more time dwelling on the broader reverberations of that act, showing how his attempt to create one life destroys countless others. Read in this light, Frankenstein isn’t telling us that we shouldn’t try to accomplish new things, just that we should take care when we do.

This speaks to why the novel has stuck around for so long. It’s not about particular scientific accomplishments but the vagaries of scientific progress in general.

Does that make it into a warning against playing God?

It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.

Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.

I encourage you to read Brogan’s piece in its entirety and perhaps supplement the reading. Mary Shelley has a pretty interesting history. In 1814, at the age of seventeen, she ran off with Percy Bysshe Shelley, who was married to another woman at the time. Her parents, William Godwin and Mary Wollstonecraft, were both well known and respected intellectuals and philosophers. By the time Mary Shelley wrote her book, her first baby had died and she had given birth to a second child, a boy. Percy Shelley died a few years later, as did that son and a third child she’d given birth to. (Her fourth child, born in 1819, did survive.) I mention the births because one analysis I read suggests the novel is also a commentary on childbirth. In fact, the Frankenstein narrative has been examined from many perspectives (other than science), including feminism and LGBTQ studies.

Getting back to the science fiction end of things, the next part of the Futurography series is titled “A Cheat-Sheet Guide to Frankenstein” and that too is written by Jacob Brogan with a publication date of Jan. 3, 2017,

Key Players

Marilyn Butler: Butler, a literary critic and English professor at the University of Cambridge, authored the seminal essay “Frankenstein and Radical Science.”

Jennifer Doudna: A professor of chemistry and biology at the University of California, Berkeley, Doudna helped develop the CRISPR gene-editing technique [emphasis mine].

Stephen Jay Gould: Gould, the late evolutionary biologist, wrote in defense of Frankenstein’s scientific ambitions, arguing that hubris wasn’t the doctor’s true fault.

Seán Ó hÉigeartaigh: As executive director of the Centre for the Study of Existential Risk at the University of Cambridge, Ó hÉigeartaigh leads research into technologies that threaten the existence of our species.

Jim Hightower: This columnist and activist helped popularize the term frankenfood to describe genetically modified crops.

Mary Shelley: Shelley, the author of Frankenstein, helped create science fiction as we now know it.

J. Craig Venter: A leading genomic researcher, Venter has pursued a variety of human biotechnology projects.

Lingo

….

Debates

Popular Culture

Further Reading

….

‘Franken’ and CRISPR

The first essay is a Jan. 6, 2017 article by Katy Waldman focusing on the ‘franken’ prefix (Note: links have been removed),

In a letter to the New York Times on June 2, 1992, an English professor named Paul Lewis lopped off the top of Victor Frankenstein’s surname and sewed it onto a tomato. Railing against genetically modified crops, Lewis put a new generation of natural philosophers on notice: “If they want to sell us Frankenfood, perhaps it’s time to gather the villagers, light some torches and head to the castle,” he wrote.

William Safire, in a 2000 New York Times column, tracked the creation of the franken- prefix to this moment: an academic channeling popular distrust of science by invoking the man who tried to improve upon creation and ended up disfiguring it. “There’s no telling where or how it will end,” he wrote wryly, referring to the spread of the construction. “It has enhanced the sales of the metaphysical novel that Ms. Shelley’s husband, the poet Percy Bysshe Shelley, encouraged her to write, and has not harmed sales at ‘Frank’n’Stein,’ the fast-food chain whose hot dogs and beer I find delectably inorganic.” Safire went on to quote the American Dialect Society’s Laurence Horn, who lamented that despite the ’90s flowering of frankenfruits and frankenpigs, people hadn’t used Frankensense to describe “the opposite of common sense,” as in “politicians’ motivations for a creatively stupid piece of legislation.”

A year later, however, Safire returned to franken- in dead earnest. In an op-ed for the Times avowing the ethical value of embryonic stem cell research, the columnist suggested that a White House conference on bioethics would salve the fears of Americans concerned about “the real dangers of the slippery slope to Frankenscience.”

All of this is to say that franken-, the prefix we use to talk about human efforts to interfere with nature, flips between “funny” and “scary” with ease. Like Shelley’s monster himself, an ungainly patchwork of salvaged parts, it can seem goofy until it doesn’t—until it taps into an abiding anxiety that technology raises in us, a fear of overstepping.

Waldman’s piece hints at how language can shape discussions while retaining a rather playful quality.

This series looks to be a good introduction while being a bit problematic in spots, which roughly sums up my conclusion about their ‘nano’ series in my Oct. 7, 2016 posting titled: Futurography’s nanotechnology series: a digest.

By the way, I noted the mention of CRISPR as it brought up an issue that they don’t appear to be addressing in this series (perhaps they will do this elsewhere?): intellectual property.

There’s a patent dispute over CRISPR, as noted in this American Chemical Society Chemical & Engineering News Jan. 9, 2017 video,

Playing God

This series on Frankenstein is taking on other contentious issues. A perennial favourite is ‘playing God’ as noted in Bina Venkataraman’s Jan. 11, 2017 essay on the topic,

Since its publication nearly 200 years ago, Shelley’s gothic novel has been read as a cautionary tale of the dangers of creation and experimentation. James Whale’s 1931 film took the message further, assigning explicitly the hubris of playing God to the mad scientist. As his monster comes to life, Dr. Frankenstein, played by Colin Clive, triumphantly exclaims: “Now I know what it feels like to be God!”

The admonition against playing God has since been ceaselessly invoked as a rhetorical bogeyman. Secular and religious, critic and journalist alike have summoned the term to deride and outright dismiss entire areas of research and technology, including stem cells, genetically modified crops, recombinant DNA, geoengineering, and gene editing. As we near the two-century commemoration of Shelley’s captivating story, we would be wise to shed this shorthand lesson—and to put this part of the Frankenstein legacy to rest in its proverbial grave.

The trouble with the term arises first from its murkiness. What exactly does it mean to play God, and why should we find it objectionable on its face? All but zealots would likely agree that it’s fine to create new forms of life through selective breeding and grafting of fruit trees, or to use in-vitro fertilization to conceive life outside the womb to aid infertile couples. No one objects when people intervene in what some deem “acts of God,” such as earthquakes, to rescue victims and provide relief. People get fully behind treating patients dying of cancer with “unnatural” solutions like chemotherapy. Most people even find it morally justified for humans to mete out decisions as to who lives or dies in the form of organ transplant lists that prize certain people’s survival over others.

So what is it—if not the imitation of a deity or the creation of life—that inspires people to invoke the idea of “playing God” to warn against, or even stop, particular technologies? A presidential commission charged in the early 1980s with studying the ethics of genetic engineering of humans, in the wake of the recombinant DNA revolution, sheds some light on underlying motivations. The commission sought to understand the concerns expressed by leaders of three major religious groups in the United States—representing Protestants, Jews, and Catholics—who had used the phrase “playing God” in a 1980 letter to President Jimmy Carter urging government oversight. Scholars from the three faiths, the commission concluded, did not see a theological reason to flat-out prohibit genetic engineering. Their concerns, it turned out, weren’t exactly moral objections to scientists acting as God. Instead, they echoed those of the secular public; namely, they feared possible negative effects from creating new human traits or new species. In other words, the religious leaders who called recombinant DNA tools “playing God” wanted precautions taken against bad consequences but did not inherently oppose the use of the technology as an act of human hubris.

She presents an interesting argument and offers this as a solution,

The lesson for contemporary science, then, is not that we should cease creating and discovering at the boundaries of current human knowledge. It’s that scientists and technologists ought to steward their inventions into society, and to more rigorously participate in public debate about their work’s social and ethical consequences. Frankenstein’s proper legacy today would be to encourage researchers to address the unsavory implications of their technologies, whether it’s the cognitive and social effects of ubiquitous smartphone use or the long-term consequences of genetically engineered organisms on ecosystems and biodiversity.

Some will undoubtedly argue that this places an undue burden on innovators. Here, again, Shelley’s novel offers a lesson. Scientists who cloister themselves as Dr. Frankenstein did—those who do not fully contemplate the consequences of their work—risk later encounters with the horror of their own inventions.

At a guess, Venkataraman seems to be assuming that if scientists communicate and make their case, the public will cease to panic over moralistic and other concerns. My understanding is that social scientists have found this is not the case: someone may understand a technology quite well and still oppose it.

Frankenstein and anti-vaxxers

The Jan. 16, 2017 essay by Charles Kenny is the weakest of the lot, so far (Note: Links have been removed),

In 1780, University of Bologna physician Luigi Galvani found something peculiar: When he applied an electric current to the legs of a dead frog, they twitched. Thirty-seven years later, Mary Shelley had Galvani’s experiments in mind as she wrote her fable of Faustian overreach, wherein Dr. Victor Frankenstein plays God by reanimating flesh.

And a little less than halfway between those two dates, English physician Edward Jenner demonstrated the efficacy of a vaccine against smallpox—one of the greatest killers of the age. Given the suspicion with which Romantic thinkers like Shelley regarded scientific progress, it is no surprise that many at the time damned the procedure as against the natural order. But what is surprising is how that suspicion continues to endure, even after two centuries of spectacular successes for vaccination. This anti-vaccination stance—which now infects even the White House—demonstrates the immense harm that can be done by excessive distrust of technological advance.

Kenny employs history as a framing device. Crudely, Galvani’s experiments led to Mary Shelley’s Frankenstein, which is a fable about ‘playing God’. (Kenny seems unaware there are many other readings of and perspectives on the book.) As for his statement “… the suspicion with which Romantic thinkers like Shelley regarded scientific progress …,” I’m not sure how he arrived at his conclusion about Romantic thinkers. According to Richard Holmes (in his book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science), their relationship to science was more complex. Percy Bysshe Shelley ran ballooning experiments and wrote poetry about science, which included footnotes for the literature and concepts he was referencing; John Keats was a medical student prior to his establishment as a poet; and Samuel Taylor Coleridge (The Rime of the Ancient Mariner, etc.) maintained a healthy correspondence with scientists of the day, sometimes influencing their research. In fact, when you analyze the matter, you realize even scientists are, on occasion, suspicious of science.

As for the anti-vaccination wars, I wish this essay had been more thoughtful. Yes, Andrew Wakefield’s research showing a link between MMR (measles, mumps, and rubella) vaccinations and autism is a sham. However, having concerns and suspicions about technology does not render you a fool who hasn’t progressed past 18th/19th-century concerns and suspicions about science and technology. For example, vaccines are being touted for all kinds of things, the latest being a possible antidote to opiate addiction (see Susan Gaidos’ June 28, 2016 article for ScienceNews). Are we going to be vaccinated for everything? What happens when you keep piling vaccination on top of vaccination? Instead of a debate, the discussion has devolved to: “I’m right and you’re wrong.”

For the record, I’m grateful for the vaccinations I’ve had and the diminishment of diseases that were devastating and seem to be making a comeback with this current anti-vaccination fever. That said, I think there are some important questions about vaccines.

Kenny’s essay could have been a nuanced discussion of vaccines that have clearly raised the bar for public health and some of the concerns regarding the current pursuit of yet more vaccines. Instead, he’s been quite dismissive of anyone who questions vaccination orthodoxy.

The end of this piece

There will be more essays in Slate’s Frankenstein series but I don’t have time to digest and write commentary for all of them.

Please use this piece as a critical counterpoint to some of the series and, if I’ve done my job, you’ll critique this critique. Please do let me know if you find any errors, or add an opinion or a critique of your own in the comments of this blog.

ETA Jan. 25, 2017: Here’s the Frankenstein webspace on Slate’s Futurography which lists all the essays in this series. It’s well worth looking at the list. There are several that were not covered here.

News from Arizona State University’s The Frankenstein Bicentennial Project

I received a September 2016 newsletter (issued occasionally) from The Frankenstein Bicentennial Project at Arizona State University (ASU) which contained these two tidbits:

I, Artist

Bobby Zokaites converted a Roomba, a robotic vacuum, from a room cleaning device to an art-maker by removing the dust collector and vacuuming system and replacing it with a paint reservoir. Artists have been playing with robots to make art since the 1950s. This work is an extension of a genre, repurposing a readily available commercial robot.

With this project, Bobby set out to create a self-portrait of a generation, one that grew up with access to a vast amount of information and was constantly bombarded by advertisements. The Roomba paintings prove that a robot can paint a reasonably complex painting, and do it differently every time; thus this version of the Turing test was successful.

As in the story of Frankenstein, this work also interrogates questions of creativity and responsibility. Is this a truly creative work of art, and if so, who is the artist: man or machine?

Both the text description and the video are from: https://www.youtube.com/watch?v=0m5ihmwPWgY

Frankenstein at 200 Exhibit

From the September 2016 newsletter (Note: Links have been removed),

Just as the creature in Frankenstein [the monster is never named in the book; its creator, however, is Victor Frankenstein] was assembled from an assortment of materials, so too is the cultural understanding of the Frankenstein myth. Now a new, interdisciplinary exhibit at ASU Libraries examines how Mary Shelley’s 200-year-old science fiction story continues to inspire, educate, and frighten 21st century audiences.

Frankenstein at 200 is open now through December 10 on the first floor of ASU’s Hayden Library in Tempe, AZ.

Here’s more from the exhibit’s webpage on the ASU website,

No work of literature has done more to shape the way people imagine science and its moral consequences than “Frankenstein; or, The Modern Prometheus,” Mary Shelley’s enduring tale of creation and responsibility. The novel’s themes and tropes continue to resonate with contemporary audiences, influencing the way we confront emerging technologies, conceptualize the process of scientific research, and consider the ethical relationships between creators and their creations.

Two hundred years after Mary Shelley imagined the story that would become “Frankenstein,” ASU Libraries is exhibiting an interdisciplinary installation that contextualizes the conditions of the original tale while exploring its continued importance in our technological age. Featuring work by ASU faculty and students, this exhibition includes a variety of physical and digital artifacts, original art projects and interactive elements that examine “Frankenstein’s” colossal scientific, technological, cultural and social impacts.

About the Frankenstein Bicentennial Project: Launched by Drs. David Guston and Ed Finn in 2013, the Frankenstein Bicentennial Project is a global celebration of the bicentennial of the writing and publication of Mary Shelley’s Frankenstein, from 2016 to 2018. The project uses Frankenstein as a lens to examine the complex relationships between science, technology, ethics, and society. To learn more, visit frankenstein.asu.edu and follow @FrankensteinASU on Twitter.

There are more informational tidbits at The Frankenstein Bicentennial Project website.

Breathing nanoparticles into your brain

Thanks to Dexter Johnson and his Sept. 8, 2016 posting (on the Nanoclast blog on the IEEE [Institute for Electrical and Electronics Engineers]) for bringing this news about nanoparticles in the brain to my attention (Note: Links have been removed),

An international team of researchers, led by Barbara Maher, a professor at Lancaster University, in England, has found evidence that suggests that the nanoparticles that were first detected in the human brain over 20 years ago may have an external rather than an internal source.

These magnetite nanoparticles are an airborne particulate that are abundant in urban environments and formed by combustion or friction-derived heating. In other words, they have been part of the pollution in the air of our cities since the dawn of the Industrial Revolution.

However, according to Andrew Maynard, a professor at Arizona State University and a noted expert on the risks associated with nanomaterials, the research indicates that this finding extends beyond magnetite to any airborne nanoscale particles—including those deliberately manufactured.

“The findings further support the possibility of these particles entering the brain via the olfactory nerve if inhaled.  In this respect, they are certainly relevant to our understanding of the possible risks presented by engineered nanomaterials—especially those that are iron-based and have magnetic properties,” said Maynard in an e-mail interview with IEEE Spectrum. “However, ambient exposures to airborne nanoparticles will typically be much higher than those associated with engineered nanoparticles, simply because engineered nanoparticles will usually be manufactured and handled under conditions designed to avoid release and exposure.”

A Sept. 5, 2016 Lancaster University press release made the research announcement,

Researchers at Lancaster University found abundant magnetite nanoparticles in the brain tissue from 37 individuals aged three to 92-years-old who lived in Mexico City and Manchester. This strongly magnetic mineral is toxic and has been implicated in the production of reactive oxygen species (free radicals) in the human brain, which are associated with neurodegenerative diseases including Alzheimer’s disease.

Professor Barbara Maher, from Lancaster Environment Centre, and colleagues (from Oxford, Glasgow, Manchester and Mexico City) used spectroscopic analysis to identify the particles as magnetite. Unlike angular magnetite particles that are believed to form naturally within the brain, most of the observed particles were spherical, with diameters up to 150 nm, some with fused surfaces, all characteristic of high-temperature formation – such as from vehicle (particularly diesel) engines or open fires.

The spherical particles are often accompanied by nanoparticles containing other metals, such as platinum, nickel, and cobalt.

Professor Maher said: “The particles we found are strikingly similar to the magnetite nanospheres that are abundant in the airborne pollution found in urban settings, especially next to busy roads, and which are formed by combustion or frictional heating from vehicle engines or brakes.”

Other sources of magnetite nanoparticles include open fires and poorly sealed stoves within homes. Particles smaller than 200 nm are small enough to enter the brain directly through the olfactory nerve after breathing air pollution through the nose.

“Our results indicate that magnetite nanoparticles in the atmosphere can enter the human brain, where they might pose a risk to human health, including conditions such as Alzheimer’s disease,” added Professor Maher.

Leading Alzheimer’s researcher Professor David Allsop, of Lancaster University’s Faculty of Health and Medicine, said: “This finding opens up a whole new avenue for research into a possible environmental risk factor for a range of different brain diseases.”

Damian Carrington’s Sept. 5, 2016 article for the Guardian provides a few more details,

“They [the troubling magnetite particles] are abundant,” she [Maher] said. “For every one of [the crystal shaped particles] we saw about 100 of the pollution particles. The thing about magnetite is it is everywhere.” An analysis of roadside air in Lancaster found 200m magnetite particles per cubic metre.

Other scientists told the Guardian the new work provided strong evidence that most of the magnetite in the brain samples come from air pollution but that the link to Alzheimer’s disease remained speculative.

For anyone who might be concerned about health risks, there’s this from Andrew Maynard’s comments in Dexter Johnson’s Sept. 8, 2016 posting,

“In most workplaces, exposure to intentionally made nanoparticles is likely be small compared to ambient nanoparticles, and so it’s reasonable to assume—at least without further data—that this isn’t a priority concern for engineered nanomaterial production,” said Maynard.

While deliberate nanoscale manufacturing may not carry much risk, Maynard does believe that the research raises serious questions about other manufacturing processes where exposure to high concentrations of airborne nanoscale iron particles is common—such as welding, gouging, or working with molten ore and steel.

It seems everyone is agreed that the findings are concerning, but I think it might be good to remember that the number of people who develop Alzheimer’s disease is much smaller than the number of people who have magnetite particles in their brains. In other words, these particles might (they don’t know) be a factor, and likely one or more additional factors would be needed to create the conditions for developing Alzheimer’s.

Here’s a link to and a citation for the paper,

Magnetite pollution nanoparticles in the human brain by Barbara A. Maher, Imad A. M. Ahmed, Vassil Karloukovski, Donald A. MacLaren, Penelope G. Foulds, David Allsop, David M. A. Mann, Ricardo Torres-Jardón, and Lilian Calderón-Garcidueñas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1605941113

This paper is behind a paywall but Dexter’s posting offers more detail for those who are still curious.

A couple of Frankenstein dares from The Frankenstein Bicentennial project

Drat! I’ve gotten the information about the first Frankenstein dare (a short story challenge) a little late in the game since the deadline is 11:59 pm PDT on July 31, 2016. In any event, here’s more about the two dares,

And for those who like their information in written form, here are the details from Arizona State University’s (ASU) Frankenstein Bicentennial Dare (on The Frankenstein Bicentennial Project website),

Two centuries ago, on a dare to tell the best scary story, 19-year-old Mary Shelley imagined an idea that became the basis for Frankenstein. Mary’s original concept became the novel that arguably kick-started the genres of science fiction and Gothic horror, but also provided an enduring myth that shapes how we grapple with creativity, science, technology, and their consequences.
Two hundred years later, inspired by that classic dare, we’re challenging you to create new myths for the 21st century along with our partners National Novel Writing Month (NaNoWriMo), Chabot Space and Science Center, and Creative Nonfiction magazine.

FRANKENSTEIN 200

Presented by NaNoWriMo and the Chabot Space and Science Center

Frankenstein is a classic of Gothic literature – a gripping, tragic story about Victor Frankenstein’s failure to accept responsibility for the consequences of bringing new life into the world. In this dare, we’re challenging you to write a scary story that explores the relationship between creators and the “monsters” they create.

Almost anything that we create can become monstrous: a misinterpreted piece of architecture; a song whose meaning has been misappropriated; a big, but misunderstood idea; or, of course, an actual creature. And in Frankenstein, Shelley teaches us that monstrous does not always mean evil – in fact, creators can prove to be more destructive and inhuman than the things they bring into being.

Tell us your story in 1,000 – 1,800 words on Medium.com and use the hashtag #Frankenstein200. Read other #Frankenstein200 stories, and use the recommend button at the bottom of each post for the stories you like. Winners in the short fiction contest will receive personal feedback from Hugo and Sturgeon Award-winning science fiction and fantasy author Elizabeth Bear, as well as a curated selection of classic and contemporary science fiction books and Frankenstein goodies, courtesy of the NaNoWriMo team.

Rules and Mechanics

  • There are no restrictions on content. Entry is limited to one submission per author. Submissions must be in English and between 1,000 and 1,800 words. You must follow all Medium Terms of Service, including the Rules.
  • All entries submitted and tagged as #Frankenstein200 and in compliance with the rules outlined here will be considered.
  • The deadline for submissions is 11:59 PM on July 31, 2016.
  • Three winners will be selected at random on August 1, 2016.
  • Each winner receives the following prize package including:
  • Additionally, one of the three winners, chosen at random, will receive written coaching/feedback from Elizabeth Bear on his or her entry.
  • Select stories will be featured on Frankenscape, a public geo-storytelling project hosted by ASU’s Frankenstein Bicentennial Project. Stories may also be featured in National Novel Writing Month communications and social media platforms.
  • U.S. residents only [emphasis mine]; void where prohibited by law. No purchase is necessary to enter or win.

Dangerous Creations: Real-life Frankenstein Stories

Presented by Creative Nonfiction magazine

Creative Nonfiction magazine is daring writers to write original and true stories that explore humans’ efforts to control and redirect nature, the evolving relationships between humanity and science/technology, and contemporary interpretations of monstrosity.

Essays must be vivid and dramatic; they should combine a strong and compelling narrative with an informative or reflective element and reach beyond a strictly personal experience for some universal or deeper meaning. We’re open to a broad range of interpretations of the “Frankenstein” theme, with the understanding that all works submitted must tell true stories and be factually accurate. Above all, we’re looking for well-written prose, rich with detail and a distinctive voice.

Creative Nonfiction editors and a judge (to be announced) will award $10,000 and publication for Best Essay and two $2,500 prizes and publication for runners-up. All essays submitted will be considered for publication in the winter 2018 issue of the magazine.

Deadline for submissions: March 20, 2017.
For complete guidelines: www.creativenonfiction.org/submissions

[Note: There is a submission fee for the nonfiction dare and no indication as to whether or not there are residency requirements.]

A July 27, 2016 email received from The Frankenstein Bicentennial Project (which is how I learned about the dares somewhat belatedly) has this about the first dare,

Planetary Design, Transhumanism, and Pork Products
Our #Frankenstein200 Contest Took Us in Some Unexpected Directions

Last month [June 2016], we partnered with National Novel Writing Month (NaNoWriMo) and The Chabot Space and Science Center to dare the world to create stories in the spirit of Mary Shelley’s Frankenstein, to celebrate the 200th anniversary of the novel’s conception.

We received a bevy of intriguing and sometimes frightening submissions that explore the complex relationships between creators and their “monsters.” Here are a few tales that caught our eye:

The Man Who Harnessed the Sun
By Sandra Knisely
Eliza has to choose between protecting the scientist who once gave her the world and punishing him for letting it all slip away. Read the story…

The Mortality Complex
By Brandon Miller
When the boogeyman of medical students reflects on life. Read the story…

Bacon Man
By Corey Pressman
A Frankenstein story in celebration of ASU’s Frankenstein Bicentennial Project. And bacon. Read the story… 

You can find the stories that have been submitted to date for the creative short story dare at Medium.com.

Good luck! And, don’t forget to tag your short story with #Frankenstein200 and submit it by July 31, 2016 (if you are a US resident). There’s still lots of time to enter a submission for a creative nonfiction piece.