Clearly a lawyer wrote this June 26, 2017 essay on theconversation.com (Note: A link has been removed),
When a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt, it was something of a tease to the art world. It wasn’t a long lost painting but a new artwork generated by a computer that had analysed thousands of works by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn.
The computer used something called machine learning [emphasis mine] to analyse and reproduce technical and aesthetic elements in Rembrandt’s works, including lighting, colour, brush-strokes and geometric patterns. The result is a portrait produced based on the styles and motifs found in Rembrandt’s art but produced by algorithms.
But who owns creative works generated by artificial intelligence? This isn’t just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author.
This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn’t protected by law and can be used without payment by anyone in the world.
Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.
It could have been written by someone involved in the technology, but nobody with that background would describe it as “… something called machine learning ….” Andres Guadamuz, lecturer in Intellectual Property Law at the University of Sussex, goes on to say (Note: Links have been removed),
That doesn’t mean that copyright should be awarded to the computer, however. Machines don’t (yet) have the rights and status of people under the law. But that doesn’t necessarily mean there shouldn’t be any copyright either. Not all copyright is owned by individuals, after all.
Companies are recognised as legal people and are often awarded copyright for works they don’t directly create. This occurs, for example, when a film studio hires a team to make a movie, or a website commissions a journalist to write an article. So it’s possible copyright could be awarded to the person (company or human) that has effectively commissioned the AI to produce work for it.
Things are likely to become yet more complex as AI tools are more commonly used by artists and as the machines get better at reproducing creativity, making it harder to discern if an artwork is made by a human or a computer. Monumental advances in computing and the sheer amount of computational power becoming available may well make the distinction moot. At that point, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention.
The most sensible move seems to follow those countries that grant copyright to the person who made the AI’s operation possible, with the UK’s model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.
The team that developed a ‘new’ Rembrandt produced a video about the process,
Mark Brown’s April 5, 2016 article about this project (which was unveiled on April 5, 2016 in Amsterdam, Netherlands) for the Guardian newspaper provides more detail such as this,
It [Next Rembrandt project] is the result of an 18-month project which asks whether new technology and data can bring back to life one of the greatest, most innovative painters of all time.
Advertising executive [Bas] Korsten, whose brainchild the project was, admitted that there were many doubters. “The idea was greeted with a lot of disbelief and scepticism,” he said. “Also coming up with the idea is one thing, bringing it to life is another.”
The project has involved data scientists, developers, engineers and art historians from organisations including Microsoft, Delft University of Technology, the Mauritshuis in The Hague and the Rembrandt House Museum in Amsterdam.
The final 3D printed painting consists of more than 148 million pixels and is based on 168,263 Rembrandt painting fragments.
Some of the challenges have been in designing a software system that could understand Rembrandt based on his use of geometry, composition and painting materials. A facial recognition algorithm was then used to identify and classify the most typical geometric patterns used to paint human features.
It sounds like it was a fascinating project, but I don’t believe ‘The Next Rembrandt’ is an example of AI creativity or of the ‘creative spark’ Guadamuz discusses. This seems more like the kind of work that could be done by a talented forger or fraudster. As I understand it, even when a human creates this type of artwork (a newly discovered and unknown xxx masterpiece), the piece is not considered a creative work in its own right. Some pieces are outright fraudulent, while others are described as “in the manner of xxx.”
Taking a somewhat different approach to mine, Timothy Geigner at Techdirt has also commented on the question of copyright and AI in relation to Guadamuz’s essay in a July 7, 2017 posting,
Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.
Let’s get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been known as the need to create in the first place. This isn’t a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.
That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those that create art rather than what it actually was meant to be: a boon to an artist to compensate for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. Future actions of the artist, conversely, are the only item on the agenda for copyright’s purpose. If receiving a copyright wouldn’t spur AI to create more art beneficial to the public, then copyright ought not to be granted.
Geigner goes on (July 7, 2017 posting) to elucidate other issues with the ideas expressed in the general debates of AI and ‘rights’ and the EU’s solution.
A classicist, biologist and computer scientist all walk into a room — what comes next isn’t the punchline but a new method to analyze relationships among ancient Latin and Greek texts, developed in part by researchers from The University of Texas at Austin.
Their work, referred to as quantitative criticism, is highlighted in a study published in the Proceedings of the National Academy of Sciences. The paper identifies subtle literary patterns in order to map relationships between texts and more broadly to trace the cultural evolution of literature.
“As scholars of the humanities well know, literature is a system within which texts bear a multitude of relationships to one another. Understanding what is distinctive about one text entails knowing how it fits within that system,” said Pramit Chaudhuri, associate professor in the Department of Classics at UT Austin. “Our work seeks to harness the power of quantification and computation to describe those relationships at macro and micro levels not easily achieved by conventional reading alone.”
In the study, the researchers create literary profiles based on stylometric features, such as word usage, punctuation and sentence structure, and use techniques from machine learning to understand these complex datasets. Taking a computational approach enables the discovery of small but important characteristics that distinguish one work from another — a process that could require years using manual counting methods.
“One aspect of the technical novelty of our work lies in the unusual types of literary features studied,” Chaudhuri said. “Much computational text analysis focuses on words, but there are many other important hallmarks of style, such as sound, rhythm and syntax.”
Another component of their work builds on Matthew Jockers’ literary “macroanalysis,” which uses machine learning to identify stylistic signatures of particular genres within a large body of English literature. Implementing related approaches, Chaudhuri and his colleagues have begun to trace the evolution of Latin prose style, providing new, quantitative evidence for the sweeping impact of writers such as Caesar and Livy on the subsequent development of Roman prose literature.
“There is a growing appreciation that culture evolves and that language can be studied as a cultural artifact, but there has been less research focused specifically on the cultural evolution of literature,” said the study’s lead author Joseph Dexter, a Ph.D. candidate in systems biology at Harvard University. “Working in the area of classics offers two advantages: the literary tradition is a long and influential one well served by digital resources, and classical scholarship maintains a strong interest in close linguistic study of literature.”
Unusually for a publication in a science journal, the paper contains several examples of the types of more speculative literary reading enabled by the quantitative methods introduced. The authors discuss the poetic use of rhyming sounds for emphasis and of particular vocabulary to evoke mood, among other literary features.
“Computation has long been employed for attribution and dating of literary works, problems that are unambiguous in scope and invite binary or numerical answers,” Dexter said. “The recent explosion of interest in the digital humanities, however, has led to the key insight that similar computational methods can be repurposed to address questions of literary significance and style, which are often more ambiguous and open ended. For our group, this humanist work of criticism is just as important as quantitative methods and data.”
The paper is the work of the Quantitative Criticism Lab (www.qcrit.org), co-directed by Chaudhuri and Dexter in collaboration with researchers from several other institutions. It is funded in part by a 2016 National Endowment for the Humanities grant and the Andrew W. Mellon Foundation New Directions Fellowship, awarded in 2016 to Chaudhuri to further his education in statistics and biology. Chaudhuri was one of 12 scholars selected for the award, which provides humanities researchers the opportunity to train outside of their own area of special interest with a larger goal of bridging the humanities and social sciences.
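For readers who like concrete examples, here’s a minimal sketch (my own, not the researchers’ code) of what a stylometric profile of the kind the press release describes might look like. Every feature and sample passage here is invented for illustration; the actual study uses far richer features, including sound, rhythm, and syntax:

```python
import re
from math import sqrt

def stylometric_profile(text):
    """Reduce a text to a tiny feature vector: mean sentence length,
    punctuation rate, and type-token ratio (vocabulary richness)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text.lower())
    commas = re.findall(r"[,;:]", text)
    return (
        len(words) / max(len(sentences), 1),   # mean sentence length
        len(commas) / max(len(words), 1),      # punctuation per word
        len(set(words)) / max(len(words), 1),  # type-token ratio
    )

def distance(p, q):
    """Euclidean distance between profiles: smaller means closer in style."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Invented sample passages standing in for digitized text fragments.
terse = "He came. He saw. He conquered. The city fell."
ornate = ("The general, having crossed the river at dawn, and having "
          "surveyed, with some satisfaction, the walls of the city, "
          "resolved, at last, to begin the siege.")

p1, p2 = stylometric_profile(terse), stylometric_profile(ornate)
print(p1, p2, distance(p1, p2))
```

The point of the sketch is only that once texts are reduced to numbers like these, comparing styles becomes a matter of measuring distances, something computers do easily and at a scale no human counter could match.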
Here’s another link to the paper along with a citation,
Quantitative criticism of literary relationships by Joseph P. Dexter, Theodore Katz, Nilesh Tripuraneni, Tathagata Dasgupta, Ajay Kannan, James A. Brofos, Jorge A. Bonilla Lopez, Lea A. Schroeder, Adriana Casarez, Maxim Rabinovich, Ayelet Haimson Lushkov, and Pramit Chaudhuri. PNAS Published online before print April 3, 2017, doi: 10.1073/pnas.1611910114
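As for the “stylistic signatures” used to trace the influence of writers such as Caesar and Livy, the underlying idea can be sketched very simply (again, my own hypothetical illustration with invented numbers, not the paper’s method): average the feature vectors of an author’s known texts into a signature, then see which signature an unattributed text sits closest to.

```python
from math import sqrt

def centroid(vectors):
    """Average an author's known feature vectors into a stylistic 'signature'."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def nearest_author(signatures, vector):
    """Attribute a text to the author whose signature is closest."""
    def dist(p, q):
        return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(signatures, key=lambda author: dist(signatures[author], vector))

# Invented feature vectors, e.g. (mean sentence length, clauses per sentence).
known_texts = {
    "Caesar": [(9.0, 1.1), (8.5, 1.0), (9.4, 1.2)],    # short, plain periods
    "Livy":   [(22.0, 2.8), (25.5, 3.1), (23.7, 2.9)], # long, layered periods
}
signatures = {author: centroid(vecs) for author, vecs in known_texts.items()}

unattributed = (21.0, 2.7)  # stylistically much closer to the Livy signature
print(nearest_author(signatures, unattributed))  # prints "Livy"
```

This is the crudest possible version of the approach; the researchers’ machine-learning methods are far more sophisticated, but the intuition, distinctive numerical fingerprints per author or genre, is the same.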
Slate.com is dedicating a month (January 2017) to Frankenstein. This means there will be one or more essays each week on one aspect or another of Frankenstein and science. These essays are part of a series of initiatives jointly supported by Slate, Arizona State University, and an organization known as New America. It gets confusing, since these essays are listed as part of two initiatives: Futurography and Future Tense.
The really odd part, as far as I’m concerned, is that there is no mention of Arizona State University’s (ASU) The Frankenstein Bicentennial Project (mentioned in my Oct. 26, 2016 posting). Perhaps they’re concerned that people will think ASU is advertising the project?
Getting back to the essays, a Jan. 3, 2017 article by Jacob Brogan explains, by means of a ‘Question and Answer’ format article, why the book and the monster maintain popular interest after two centuries (Note: We never do find out who or how many people are supplying the answers),
OK, fine. I get that this book is important, but why are we talking about it in a series about emerging technology?
Though people still tend to weaponize it as a simple anti-scientific screed, Frankenstein, which was first published in 1818, is much richer when we read it as a complex dialogue about our relationship to innovation—both our desire for it and our fear of the changes it brings. Mary Shelley was just a teenager when she began to compose Frankenstein, but she was already grappling with our complex relationship to new forces. Almost two centuries on, the book is just as propulsive and compelling as it was when it was first published. That’s partly because it’s so thick with ambiguity—and so resistant to easy interpretation.
Is it really ambiguous? I mean, when someone calls something frankenfood, they aren’t calling it “ethically ambiguous food.”
It’s a fair point. For decades, Frankenstein has been central to discussions in and about bioethics. Perhaps most notably, it frequently crops up as a reference point in discussions of genetically modified organisms, where the prefix Franken- functions as a sort of convenient shorthand for human attempts to meddle with the natural order. Today, the most prominent flashpoint for those anxieties is probably the clustered regularly interspaced short palindromic repeats, or CRISPR, gene-editing technique [emphasis mine]. But it’s really oversimplifying to suggest Frankenstein is a cautionary tale about monkeying with life.
As we’ll see throughout this month on Futurography, it’s become a lens for looking at the unintended consequences of things like synthetic biology, animal experimentation, artificial intelligence, and maybe even social networking. Facebook, for example, has arguably taken on a life of its own, as its algorithms seem to influence the course of elections. Mark Zuckerberg, who’s sometimes been known to disavow the power of his own platform, might well be understood as a Frankensteinian figure, amplifying his creation’s monstrosity by neglecting its practical needs.
But this book is almost 200 years old! Surely the actual science in it is bad.
Shelley herself would probably be the first to admit that the science in the novel isn’t all that accurate. Early in the novel, Victor Frankenstein meets with a professor who castigates him for having read the wrong works of “natural philosophy.” Shelley’s protagonist has mostly been studying alchemical tomes and otherwise fantastical works, the sort of things that were recognized as pseudoscience, even by the standards of the day. Near the start of the novel, Frankenstein attends a lecture in which the professor declaims on the promise of modern science. He observes that where the old masters “promised impossibilities and performed nothing,” the new scientists achieve far more in part because they “promise very little; they know that metals cannot be transmuted and that the elixir of life is a chimera.”
Is it actually about bad science, though?
Not exactly, but it has been read as a story about bad scientists.
Ultimately, Frankenstein outstrips his own teachers, of course, and pulls off the very feats they derided as mere fantasy. But Shelley never seems to confuse fact and fiction, and, in fact, she largely elides any explanation of how Frankenstein pulls off the miraculous feat of animating dead tissue. We never actually get a scene of the doctor awakening his creature. The novel spends far more time dwelling on the broader reverberations of that act, showing how his attempt to create one life destroys countless others. Read in this light, Frankenstein isn’t telling us that we shouldn’t try to accomplish new things, just that we should take care when we do.
This speaks to why the novel has stuck around for so long. It’s not about particular scientific accomplishments but the vagaries of scientific progress in general.
Does that make it into a warning against playing God?
It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.
Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.
I encourage you to read Brogan’s piece in its entirety and perhaps supplement the reading. Mary Shelley has a pretty interesting history. In 1814, at the age of sixteen, she ran off with Percy Bysshe Shelley, who was married to another woman at the time. Her parents, William Godwin and Mary Wollstonecraft, were both well known and respected intellectuals and philosophers. By the time Mary Shelley wrote her book, her first baby had died and she had given birth to a second child, a boy. Percy Shelley was to die a few years later, as were her son and a third child she’d given birth to. (Her fourth child, born in 1819, did survive.) I mention the births because one analysis I read suggests the novel is also a commentary on childbirth. In fact, the Frankenstein narrative has been examined from many perspectives other than science, including feminism and LGBTQ studies.
Getting back to the science fiction end of things, the next part of the Futurography series is titled “A Cheat-Sheet Guide to Frankenstein” and that too is written by Jacob Brogan with a publication date of Jan. 3, 2017,
Marilyn Butler: Butler, a literary critic and English professor at the University of Cambridge, authored the seminal essay “Frankenstein and Radical Science.”
Jennifer Doudna: A professor of chemistry and biology at the University of California, Berkeley, Doudna helped develop the CRISPR gene-editing technique [emphasis mine].
Stephen Jay Gould: Gould is an evolutionary biologist and has written in defense of Frankenstein’s scientific ambitions, arguing that hubris wasn’t the doctor’s true fault.
Seán Ó hÉigeartaigh: As executive director of the Center for Existential Risk at the University of Cambridge, hÉigeartaigh leads research into technologies that threaten the existence of our species.
Jim Hightower: This columnist and activist helped popularize the term frankenfood to describe genetically modified crops.
Mary Shelley: Shelley, the author of Frankenstein, helped create science fiction as we now know it.
J. Craig Venter: A leading genomic researcher, Venter has pursued a variety of human biotechnology projects.
‘Franken’ and CRISPR
The first essay is in a Jan. 6, 2017 article by Katy Waldman focusing on the ‘franken’ prefix (Note: links have been removed),
In a letter to the New York Times on June 2, 1992, an English professor named Paul Lewis lopped off the top of Victor Frankenstein’s surname and sewed it onto a tomato. Railing against genetically modified crops, Lewis put a new generation of natural philosophers on notice: “If they want to sell us Frankenfood, perhaps it’s time to gather the villagers, light some torches and head to the castle,” he wrote.
William Safire, in a 2000 New York Times column, tracked the creation of the franken- prefix to this moment: an academic channeling popular distrust of science by invoking the man who tried to improve upon creation and ended up disfiguring it. “There’s no telling where or how it will end,” he wrote wryly, referring to the spread of the construction. “It has enhanced the sales of the metaphysical novel that Ms. Shelley’s husband, the poet Percy Bysshe Shelley, encouraged her to write, and has not harmed sales at ‘Frank’n’Stein,’ the fast-food chain whose hot dogs and beer I find delectably inorganic.” Safire went on to quote the American Dialect Society’s Laurence Horn, who lamented that despite the ’90s flowering of frankenfruits and frankenpigs, people hadn’t used Frankensense to describe “the opposite of common sense,” as in “politicians’ motivations for a creatively stupid piece of legislation.”
A year later, however, Safire returned to franken- in dead earnest. In an op-ed for the Times avowing the ethical value of embryonic stem cell research, the columnist suggested that a White House conference on bioethics would salve the fears of Americans concerned about “the real dangers of the slippery slope to Frankenscience.”
All of this is to say that franken-, the prefix we use to talk about human efforts to interfere with nature, flips between “funny” and “scary” with ease. Like Shelley’s monster himself, an ungainly patchwork of salvaged parts, it can seem goofy until it doesn’t—until it taps into an abiding anxiety that technology raises in us, a fear of overstepping.
Waldman’s piece hints at how language can shape discussions while retaining a rather playful quality.
The next essay in the series, by Venkataraman, takes on the trope of ‘playing God’,
Since its publication nearly 200 years ago, Shelley’s gothic novel has been read as a cautionary tale of the dangers of creation and experimentation. James Whale’s 1931 film took the message further, assigning explicitly the hubris of playing God to the mad scientist. As his monster comes to life, Dr. Frankenstein, played by Colin Clive, triumphantly exclaims: “Now I know what it feels like to be God!”
The admonition against playing God has since been ceaselessly invoked as a rhetorical bogeyman. Secular and religious, critic and journalist alike have summoned the term to deride and outright dismiss entire areas of research and technology, including stem cells, genetically modified crops, recombinant DNA, geoengineering, and gene editing. As we near the two-century commemoration of Shelley’s captivating story, we would be wise to shed this shorthand lesson—and to put this part of the Frankenstein legacy to rest in its proverbial grave.
The trouble with the term arises first from its murkiness. What exactly does it mean to play God, and why should we find it objectionable on its face? All but zealots would likely agree that it’s fine to create new forms of life through selective breeding and grafting of fruit trees, or to use in-vitro fertilization to conceive life outside the womb to aid infertile couples. No one objects when people intervene in what some deem “acts of God,” such as earthquakes, to rescue victims and provide relief. People get fully behind treating patients dying of cancer with “unnatural” solutions like chemotherapy. Most people even find it morally justified for humans to mete out decisions as to who lives or dies in the form of organ transplant lists that prize certain people’s survival over others.
So what is it—if not the imitation of a deity or the creation of life—that inspires people to invoke the idea of “playing God” to warn against, or even stop, particular technologies? A presidential commission charged in the early 1980s with studying the ethics of genetic engineering of humans, in the wake of the recombinant DNA revolution, sheds some light on underlying motivations. The commission sought to understand the concerns expressed by leaders of three major religious groups in the United States—representing Protestants, Jews, and Catholics—who had used the phrase “playing God” in a 1980 letter to President Jimmy Carter urging government oversight. Scholars from the three faiths, the commission concluded, did not see a theological reason to flat-out prohibit genetic engineering. Their concerns, it turned out, weren’t exactly moral objections to scientists acting as God. Instead, they echoed those of the secular public; namely, they feared possible negative effects from creating new human traits or new species. In other words, the religious leaders who called recombinant DNA tools “playing God” wanted precautions taken against bad consequences but did not inherently oppose the use of the technology as an act of human hubris.
She presents an interesting argument and offers this as a solution,
The lesson for contemporary science, then, is not that we should cease creating and discovering at the boundaries of current human knowledge. It’s that scientists and technologists ought to steward their inventions into society, and to more rigorously participate in public debate about their work’s social and ethical consequences. Frankenstein’s proper legacy today would be to encourage researchers to address the unsavory implications of their technologies, whether it’s the cognitive and social effects of ubiquitous smartphone use or the long-term consequences of genetically engineered organisms on ecosystems and biodiversity.
Some will undoubtedly argue that this places an undue burden on innovators. Here, again, Shelley’s novel offers a lesson. Scientists who cloister themselves as Dr. Frankenstein did—those who do not fully contemplate the consequences of their work—risk later encounters with the horror of their own inventions.
At a guess, Venkataraman seems to be assuming that if scientists communicate and make their case, the public will cease to panic with reference to moralistic and other concerns. My understanding is that social scientists have found this is not the case: someone may understand the technology quite well and still oppose it.
Frankenstein and anti-vaxxers
The Jan. 16, 2017 essay by Charles Kenny is the weakest of the lot, so far (Note: Links have been removed),
In 1780, University of Bologna physician Luigi Galvani found something peculiar: When he applied an electric current to the legs of a dead frog, they twitched. Thirty-seven years later, Mary Shelley had Galvani’s experiments in mind as she wrote her fable of Faustian overreach, wherein Dr. Victor Frankenstein plays God by reanimating flesh.
And a little less than halfway between those two dates, English physician Edward Jenner demonstrated the efficacy of a vaccine against smallpox—one of the greatest killers of the age. Given the suspicion with which Romantic thinkers like Shelley regarded scientific progress, it is no surprise that many at the time damned the procedure as against the natural order. But what is surprising is how that suspicion continues to endure, even after two centuries of spectacular successes for vaccination. This anti-vaccination stance—which now infects even the White House—demonstrates the immense harm that can be done by excessive distrust of technological advance.
Kenny employs history as a framing device. Crudely, Galvani’s experiments led to Mary Shelley’s Frankenstein, which is a fable about ‘playing God’. (Kenny seems unaware there are many other readings of and perspectives on the book.) As for his statement about “… the suspicion with which Romantic thinkers like Shelley regarded scientific progress …,” I’m not sure how he arrived at his conclusion about Romantic thinkers. According to Richard Holmes (in his book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science), their relationship to science was more complex. Percy Bysshe Shelley ran ballooning experiments and wrote poetry about science, complete with footnotes for the literature and concepts he was referencing; John Keats was a medical student before establishing himself as a poet; and Samuel Taylor Coleridge (The Rime of the Ancient Mariner, etc.) maintained a healthy correspondence with scientists of the day, sometimes influencing their research. In fact, when you analyze the matter, you realize even scientists are, on occasion, suspicious of science.
As for the anti-vaccination wars, I wish this essay had been more thoughtful. Yes, Andrew Wakefield’s research showing a link between MMR (measles, mumps, and rubella) vaccinations and autism is a sham. However, having concerns and suspicions about technology does not render you a fool who hasn’t progressed beyond 18th- and 19th-century concerns and suspicions about science and technology. For example, vaccines are being touted for all kinds of things, the latest being a possible antidote to opiate addiction (see Susan Gaidos’ June 28, 2016 article for ScienceNews). Are we going to be vaccinated for everything? What happens when you keep piling vaccination on top of vaccination? Instead of a debate, the discussion has devolved to: “I’m right and you’re wrong.”
For the record, I’m grateful for the vaccinations I’ve had and the diminishment of diseases that were devastating and seem to be making a comeback with this current anti-vaccination fever. That said, I think there are some important questions about vaccines.
Kenny’s essay could have been a nuanced discussion of vaccines that have clearly raised the bar for public health and some of the concerns regarding the current pursuit of yet more vaccines. Instead, he’s been quite dismissive of anyone who questions vaccination orthodoxy.
The end of this piece
There will be more essays in Slate’s Frankenstein series but I don’t have time to digest and write commentary for all of them.
Please use this piece as a critical counterpoint to some of the series and, if I’ve done my job, you’ll critique this critique too. Do let me know if you find any errors, or add your own opinions and critiques in the Comments section of this blog.
ETA Jan. 25, 2017: Here’s the Frankenstein webspace on Slate’s Futurography which lists all the essays in this series. It’s well worth looking at the list. There are several that were not covered here.
This news bit concerns a series of science fiction novels and short story anthologies written by scientists and experts, and a now-completed fundraising campaign. From a Nov. 14, 2016 Springer Books press release on EurekAlert,
Springer Nature and Humble Bundle have raised a charitable contribution of $22,000 through the science fiction book campaign “Science Fiction by Real Scientists.” One half of the proceeds, $11,000, goes to the Science Fiction & Fantasy Writers of America’s Givers Fund. The same amount goes to the U.S. Fund for UNICEF as part of the global children’s charity’s annual Halloween fundraising drive. Humble Bundle supports a number of charities by offering media packages to its customers on a pay-what-you-want basis.
During the campaign, Springer offered a specially priced eBook bundle from its Science and Fiction series, consisting of nine full novels, two books of short stories and five nonfiction books. Readers were able to choose how their purchase dollars were allocated between the publisher and charity. Starting at just one dollar, customers could name their price, increasing their contribution to upgrade their bundles or contribute more to charity.
The Science and Fiction series, launched in 2012 by Springer, is a unique publishing program for fiction written by actual scientists and experts in scientific fields. Each novel or anthology of short stories is accompanied by an extensive afterword that explains, in lay terms, the current scientific theory or findings that serve as the basis for the fictional work.
Mia Kravitz, Director Global eRetail at Springer Nature, said, “Springer was so pleased to work with Humble Bundle on this worthwhile effort to aid children globally as well as support writers and artists in the science fiction genre. Pushing the envelope for scientific inquiry is part of our mission, and this is a fun way to bring current research to a wider audience.”
The Springer series Science and Fiction was launched in 2012 and comprises entertaining and thought-provoking books which appeal equally to science buffs, scientists and science fiction fans. The idea was born out of the recognition that scientific discovery and the creation of plausible fictional scenarios are often two sides of the same coin. Each science fiction book, with an afterword on the science underlying the tale, relies on an understanding of the way the world works, coupled with the imaginative ability to invent new or alternative explanations and even other worlds.
Christian Caron, Executive Editor Physics at Springer, said the concept developed when a Springer author, astrobiologist Dirk Schulze-Makuch, published his first hard science fiction novel on Amazon. “Our very first thought was, why couldn’t we do this?” he said. “Our authors, all of them scientists and experts at some forefront of research, would of course have an interface with speculative science in their fields.”
The books in Springer’s Science and Fiction series explore and exploit the borderlands between accepted science and its fictional counterpart. Uncovering mutual influences, promoting fruitful interaction, and narrating and analyzing fictional scenarios, they serve as a reaction vessel for inspired new ideas in science, technology and beyond.
You can find a list of books in the series here. Note: I found forthcoming titles in 2017 and titles dating back to 2014. Springer made the announcement in 2012 but didn’t publish any books in the series until 2014.
Metaphors can be powerful in both good ways and bad. I once read that a ‘lighthouse’ metaphor used to explain a scientific concept to high school students later caused them problems when they studied the biological sciences at university. It seems there is now research to back up assertions about metaphors and their power. From an Oct. 7, 2016 news item on phys.org,
Whether ideas are “like a light bulb” or come forth as “nurtured seeds,” how we describe discovery shapes people’s perceptions of both inventions and inventors. Notably, Kristen Elmore (Bronfenbrenner Center for Translational Research at Cornell University) and Myra Luna-Lucero (Teachers College, Columbia University) have shown that discovery metaphors influence our perceptions of the quality of an idea and of the ability of the idea’s creator. The research appears in the journal Social Psychological and Personality Science.
While the metaphor that ideas appear “like light bulbs” is popular and appealing, new research shows that discovery metaphors influence our understanding of the scientific process and perceptions of the ability of inventors based on their gender. [downloaded from http://www.spsp.org/news-center/press-release/metaphors-bias-perception]
While those involved in research know there are many trials and errors and years of work before something is understood, discovered or invented, our use of words for inspiration may have an unintended and underappreciated effect of portraying good ideas as a sudden and exceptional occurrence.
In a series of experiments, Elmore and Luna-Lucero tested how people responded to ideas that were described as being “like a light bulb,” “nurtured like a seed,” or a neutral description.
According to the authors, the “light bulb metaphor implies that ‘brilliant’ ideas result from sudden and spontaneous inspiration, bestowed upon a chosen few (geniuses) while the seed metaphor implies that ideas are nurtured over time, ‘cultivated’ by anyone willing to invest effort.”
The first study looked at how people reacted to a description of Alan Turing’s invention of a precursor to the modern computer. It turns out light bulbs are more remarkable than seeds.
“We found that an idea was seen as more exceptional when described as appearing like a light bulb rather than nurtured like a seed,” said Elmore.
But this pattern changed when they used these metaphors to describe a female inventor’s ideas. When using the “like a light bulb” and “nurtured seed” metaphors, the researchers found “women were judged as better idea creators than men when ideas were described as nurtured over time like seeds.”
The results suggest gender stereotypes play a role in how people perceived the inventors.
In the third study, the researchers presented participants with descriptions of the work of either a female (Hedy Lamarr) or a male (George Antheil) inventor, who together created the idea for spread-spectrum technology (a precursor to modern wireless communications). Indeed, the seed metaphor “increased perceptions that a female inventor was a genius, while the light bulb metaphor was more consistent with stereotypical views of male genius,” stated Elmore.
Elmore plans to expand upon their research on metaphors by examining the interactions of teachers and students in real world classroom settings.
“The ways that teachers and students talk about ideas may impact students’ beliefs about how good ideas are created and who is likely to have them,” said Elmore. “Having good ideas is relevant across subjects—whether students are creating a hypothesis in science or generating a thesis for their English paper—and language that stresses the role of effort rather than inspiration in creating ideas may have real benefits for students’ motivation.”
While Elmore and Luna-Lucero are focused on a nuanced analysis of specific metaphors, Richard Holmes’s book, ‘The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science’, notes that the ‘Eureka’ (light bulb) moment for scientific discovery and the notion of a ‘single great man’ (a singular genius) as the discoverer has its roots in Romantic (Shelley, Keats, etc.) poetry.
Before getting to the latest information for applying, Matt Miller has written an exuberant and enticing description of his experiences as a 2016 American Association for the Advancement of Science (AAAS) Mass Media Fellow for Slate.com in his Oct. 17, 2016 article for them (Note: Links have been removed),
If you’ve ever wanted to write for Slate (or other major media organizations), now is your chance—provided you’re a graduate student or postdoc in science, math, engineering, or medicine [enrolled in a university and with US citizenship or a visa that allows you to receive payment for work].* The American Association for the Advancement of Science will soon be opening applications for its 2017 Mass Media Fellowship. Along with Slate, publications like Wired, Scientific American, NPR [National Public Radio], and the Los Angeles Times will be hosting fellows who will work as science writers for 10 weeks starting in June of next year.
While many of my classmates were drawing blood and administering vaccines [Miller is a student in a School of Veterinary Medicine], I flew up to New York and started learning how to be a journalist. In Slate’s Brooklyn office, I read the abstracts of newly released journal articles and pitched countless story ideas. I drank lots of coffee, sat in on editorial meetings, and interviewed scientists from almost every field imaginable (entomologists are the best). Perhaps the highlight of the whole summer was being among the first to cover the rising cost of EpiPens, a scandal that has recently led to a congressional hearing.
A large part of what I did this summer involved explaining the scientific fundamentals behind the research and making the findings more accessible and exciting to a general audience. Science writing involves a great deal of translation; scientists often get so tied up in the particulars of their research—exactly how an enzyme cleaves this protein, or whether a newly discovered bird is technically a new species—that they forget to talk about the wider societal implications their research might have on culture and civilization. But science writing also matters for the same reason all journalism matters. Science journalism can play the important role of watchdog, holding the powerful accountable and airing out things that don’t quite seem right.
You can find the application here. Don’t forget to read the eligibility rules (no students enrolled in English, journalism, science journalism, or other non-technical fields need apply).
*ETA Oct. 18, 2016 9:52 am PDT: The deadline for applications is midnight EST Jan. 15, 2017.
It’s nice to see writers using technology in their literary work to create new forms although I do admit to a pang at the thought that this might have a deleterious effect on book clubs as the headline (Ditch Your Book Club: This AI-Powered Memoir Wants To Chat With You) for Claire Zulkey’s Sept. 1, 2016 article for Fast Company suggests,
Instead of attempting to write a book that would defeat the distractions of a smartphone, author Amy Krouse Rosenthal decided to make the two kiss and make up with her new memoir.
“I have this habit of doing interactive stuff,” says the Chicago writer and filmmaker, whose previous projects have enticed readers to communicate via email, website, or in person, and before all that, a P.O. box. As she pondered a logical follow-up to her 2005 memoir Encyclopedia of an Ordinary Life (which, among other prompts, offered readers a sample of her favorite perfume if they got in touch via her website), Rosenthal hit upon the concept of a textbook. The idea appealed to her, for its bibliographical elements and as a new way of conversing with her readers. And also, of course, because of the double meaning of the title. Textbook, which went on sale August 9, is a book readers can send texts to, and the book will text them back. “When I realized the wordplay opportunity, and that nobody had done that before, I loved it,” Rosenthal says. “Most people would probably be reading with a phone in their hands anyway.”
Rosenthal may be best known for the dozens of children’s books she’s published, but Encyclopedia was listed in Amazon’s top 10 memoirs of the decade for its alphabetized musings gathered together under the premise, “I have not survived against all odds. I have not lived to tell. I have not witnessed the extraordinary. This is my story.” Her writing often celebrates the serendipitous moment, the smallness of our world, the misheard sentence that was better than the real one—always in praise of the flashes of magic in our mundane lives. Textbook, Rosenthal says, is not a prequel or a sequel but “an equal” to Encyclopedia. It is organized by subject, and Rosenthal shares her favorite anagrams, admits a bias against people who sign emails with just their initials, and exhorts readers, next time they are at a party, to attempt to write a “group biography.” …
… when she sent the book out to publishers, Rosenthal explains, “Pretty much everybody got it. Nobody said, ‘We want to do this book but we don’t want to do that texting thing.’”
Zulkey also covers some of the nitty gritty elements of getting this book published and developed,
After she signed with Dutton, Rosenthal’s editors got in touch with OneReach, a Denver company that specializes in providing multichannel, conversational bot experiences. “This book is a great illustration of what we’re going to see a lot more of in the future,” says OneReach cofounder Robb Wilson. “It’s conversational and has some basic AI components in it.”
Textbook has nearly 20 interactive elements to it, some of which involve email or going to the book’s website, but many are purely text-message-based. One example is a prompt to send in good thoughts, which Rosenthal will then print and send out in a bottle to sea. Another asks readers to text photos of a rainbow they are witnessing in real time. The rainbow and its location are then posted on the book’s website in a live rainbow feed. And yet another puts out a call for suggestions for matching tattoos that at least one reader and Rosenthal will eventually get. Three weeks after its publication date, the book has received texts from over 600 readers.
Nearly anyone who has received a text from Walgreens saying a prescription is ready, gotten an appointment confirmation from a dentist, or even voted on American Idol has interacted with the type of technology OneReach handles. But behind the scenes of that technology were artistic quandaries that Rosenthal and the team had to solve or work around.
For instance, the reader has the option to pick and choose which prompts to engage with and in what order, which is not typically how text chains work. “Normally, with an automated text message you’re in kind of a lineal format,” says Justin Biel, who built Textbook’s system and made sure that if you skipped the best-wishes text, for instance, and went right to the rainbow, you wouldn’t get an error message. At one point Rosenthal and her assistant manually tried every possible permutation of text to confirm that there were no hitches jumping from one prompt to another.
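The non-linear flow Biel describes can be sketched as a simple keyword router. This is a hypothetical illustration only (the prompt keywords and replies are invented, and OneReach’s actual system is certainly more sophisticated): instead of walking readers through a fixed sequence, the handler matches whatever prompt arrives, in any order, and never returns an error for a skipped step.

```python
# Hypothetical sketch of an order-independent text-prompt router,
# unlike a linear text chain that expects replies in a fixed sequence.

PROMPTS = {
    "wishes": "Thanks! Your good thought will go to sea in a bottle.",
    "rainbow": "Got it! Your rainbow photo will join the live rainbow feed.",
    "tattoo": "Noted! Your matching-tattoo idea is in the running.",
}

def handle_text(message: str) -> str:
    """Route an incoming text to whichever prompt it matches,
    regardless of which prompts (if any) the reader did earlier."""
    keyword = message.strip().lower()
    if keyword in PROMPTS:
        return PROMPTS[keyword]
    # Unknown or out-of-order input gets a gentle nudge, never an error.
    return "Hmm, try one of: " + ", ".join(sorted(PROMPTS))

# A reader can jump straight to 'rainbow' without ever sending 'wishes'.
print(handle_text("rainbow"))
print(handle_text("hello"))
```

The design point is that the router keys on content rather than position in a conversation, which is what lets readers engage with the book’s prompts in any order.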
Engineers also made lots of revisions so that the system felt like readers were having a realistic text conversation with a person, rather than a bot or someone who had obviously written out the messages ahead of time. “It’s a fine line between robotic and poetic,” Rosenthal says.
Unlike your Instacart shopper whom you hope doesn’t need to text to ask you about substitutions, Textbook readers will never receive a message alerting them to a new Rosenthal signing or a discount at Amazon. No promo or marketing messages, ever. “In a way, that’s a betrayal,” Wilson says. Texting, to him, is “a personal channel, and to try to use that channel for blatant reasons, I think, hurts you more than it helps you.”
Zulkey’s piece is a good read and includes images and an embedded video.
July 28, 2016 was the 150th anniversary of Beatrix Potter‘s birthday. Known to many through her children’s books, she has left an indelible mark on many of us. Hop-skip-jump.com has a description of an extraordinary woman, from their Beatrix Potter 150 years page,
An artist, storyteller, botanist, environmentalist, farmer and impeccable businesswoman, Potter was a visionary and a trailblazer. Single-mindedly determined and ambitious she overcame professional rejection, academic humiliation, and personal heartbreak, going on to earn her fortune and a formidable reputation.
A July 27, 2016 posting by Alex Jackson on the Guardian science blogs provides more information about Potter’s science (Note: Links have been removed),
Influenced by family holidays in Scotland, Potter was fascinated by the natural world from a young age. Encouraged to follow her interests, she explored the outdoors with sketchbook and camera, honing her skills as an artist, by drawing and sketching her school room pets: mice, rabbits and hedgehogs. Led first by her imagination, she developed a broad interest in the natural sciences: particularly archaeology, entomology and mycology, producing accurate watercolour drawings of unusual fossils, fungi, and archaeological artefacts.
Potter’s uncle, Sir Henry Enfield Roscoe FRS, an eminent nineteenth-century chemist, recognised her artistic talent and encouraged her scientific interests. By the 1890s, Potter’s skills in mycology drew Roscoe’s attention when he learned she had successfully germinated spores of a class of fungi, and had ideas on how they reproduced. He used his scientific connections with botanists at Kew’s Royal Botanic Gardens to gain a student card for his niece and to introduce her to Kew botanists interested in mycology.
Although Potter had good reason to think that her success might break some new ground, the botanists at Kew were sceptical. One Kew scientist, George Massee, however, was sufficiently interested in Potter’s drawings, encouraging her to continue experimenting. Although the director of Kew, William Thistleton-Dyer, refused to give Potter’s theories or her drawings much attention both because she was an amateur and a woman, Roscoe encouraged his niece to write up her investigations and offer her drawings in a paper to the Linnean Society.
In 1897, Potter put forward her paper, which Massee presented to the Linnean Society, since women could not be members or attend a meeting. Her paper, On the Germination of the Spores of the Agaricineae, was not given much notice and she quickly withdrew it, recognising that her samples were likely contaminated. Sadly, her paper has since been lost, so we can only speculate on what Potter actually concluded.
Until quite recently, Potter’s accomplishments and her experiments in natural science went unrecognised. Upon her death in 1943, Potter left hundreds of her mycological drawings and paintings to the Armitt Museum and Library in Ambleside, where she and her husband had been active members. Today, they are valued not only for their beauty and precision, but also for the assistance they provide modern mycologists in identifying a variety of fungi.
In 1997, the Linnean Society issued a posthumous apology to Potter, noting the sexism displayed in the handling of her research and its policy toward the contributions of women.
A rarely seen very early Beatrix Potter drawing, A Dream of Toasted Cheese was drawn to celebrate the publication of Henry Roscoe’s chemistry textbook in 1899. Illustration: Beatrix Potter/reproduced courtesy of the Lord Clwyd collection (image by way of The Guardian newspaper)
I’m sure you recognized the bunsen burner. From the Jackson posting (Note: A link has been removed),
London-born, Henry Roscoe, whose family roots were in Liverpool, studied at University College London, before moving to Heidelberg, Germany, where he worked under Robert Bunsen, inventor of the new-fangled apparatus that inspired Potter’s drawing. Together, using magnesium as a light source, Roscoe and Bunsen reputedly carried out the first flashlight photography in 1864. Their research laid the foundations of comparative photochemistry.
These excerpts do not do full justice to Jackson’s piece, which I encourage you to read in its entirety.
Drat! I’ve gotten the information about the first Frankenstein dare (a short story challenge) a little late in the game since the deadline is 11:59 pm PDT on July 31, 2016. In any event, here’s more about the two dares,
And for those who like their information in written form, here are the details from the Arizona State University’s (ASU) Frankenstein Bicentennial Dare (on The Frankenstein Bicentennial Project website),
Two centuries ago, on a dare to tell the best scary story, 19-year-old Mary Shelley imagined an idea that became the basis for Frankenstein. Mary’s original concept became the novel that arguably kick-started the genres of science fiction and Gothic horror, but also provided an enduring myth that shapes how we grapple with creativity, science, technology, and their consequences.
Two hundred years later, inspired by that classic dare, we’re challenging you to create new myths for the 21st century along with our partners National Novel Writing Month (NaNoWriMo), Chabot Space and Science Center, and Creative Nonfiction magazine.
Presented by NaNoWriMo and the Chabot Space and Science Center
Frankenstein is a classic of Gothic literature – a gripping, tragic story about Victor Frankenstein’s failure to accept responsibility for the consequences of bringing new life into the world. In this dare, we’re challenging you to write a scary story that explores the relationship between creators and the “monsters” they create.
Almost anything that we create can become monstrous: a misinterpreted piece of architecture; a song whose meaning has been misappropriated; a big, but misunderstood idea; or, of course, an actual creature. And in Frankenstein, Shelley teaches us that monstrous does not always mean evil – in fact, creators can prove to be more destructive and inhuman than the things they bring into being.
Tell us your story in 1,000 – 1,800 words on Medium.com and use the hashtag #Frankenstein200. Read other #Frankenstein200 stories, and use the recommend button at the bottom of each post for the stories you like. Winners in the short fiction contest will receive personal feedback from Hugo and Sturgeon Award-winning science fiction and fantasy author Elizabeth Bear, as well as a curated selection of classic and contemporary science fiction books and Frankenstein goodies, courtesy of the NaNoWriMo team.
Rules and Mechanics
There are no restrictions on content. Entry is limited to one submission per author. Submissions must be in English and between 1,000 to 1,800 words. You must follow all Medium Terms of Service, including the Rules.
All entries submitted and tagged as #Frankenstein200 and in compliance with the rules outlined here will be considered.
The deadline for submissions is 11:59 PM on July 31, 2016.
Three winners will be selected at random on August 1, 2016.
Each winner receives a prize package including:
Lynd Ward’s edition of Frankenstein with woodcut illustrations
Penguin Horror’s edition of Frankenstein featuring an introduction by Guillermo del Toro
Additionally, one of the three winners, chosen at random, will receive written coaching/feedback from Elizabeth Bear on his or her entry.
Select stories will be featured on Frankenscape, a public geo-storytelling project hosted by ASU’s Frankenstein Bicentennial Project. Stories may also be featured in National Novel Writing Month communications and social media platforms.
U.S. residents only [emphasis mine]; void where prohibited by law. No purchase is necessary to enter or win.
Creative Nonfiction magazine is daring writers to write original and true stories that explore humans’ efforts to control and redirect nature, the evolving relationships between humanity and science/technology, and contemporary interpretations of monstrosity.
Essays must be vivid and dramatic; they should combine a strong and compelling narrative with an informative or reflective element and reach beyond a strictly personal experience for some universal or deeper meaning. We’re open to a broad range of interpretations of the “Frankenstein” theme, with the understanding that all works submitted must tell true stories and be factually accurate. Above all, we’re looking for well-written prose, rich with detail and a distinctive voice.
Creative Nonfiction editors and a judge (to be announced) will award $10,000 and publication for Best Essay and two $2,500 prizes and publication for runners-up. All essays submitted will be considered for publication in the winter 2018 issue of the magazine.
[Note: There is a submission fee for the nonfiction dare and no indication as to whether or not there are residency requirements.]
A July 27, 2016 email received from The Frankenstein Bicentennial Project (which is how I learned about the dares somewhat belatedly) has this about the first dare,
Planetary Design, Transhumanism, and Pork Products
Our #Frankenstein200 Contest Took Us in Some Unexpected Directions
Last month [June 2016], we partnered with National Novel Writing Month (NaNoWriMo) and The Chabot Space and Science Center to dare the world to create stories in the spirit of Mary Shelley’s Frankenstein, to celebrate the 200th anniversary of the novel’s conception.
We received a bevy of intriguing and sometimes frightening submissions that explore the complex relationships between creators and their “monsters.” Here are a few tales that caught our eye:
The Man Who Harnessed the Sun
By Sandra Knisely
Eliza has to choose between protecting the scientist who once gave her the world and punishing him for letting it all slip away. Read the story…
You can find the stories that have been submitted to date for the creative short story dare at Medium.com.
Good luck! And, don’t forget to tag your short story with #Frankenstein200 and submit it by July 31, 2016 (if you are a US resident). There’s still lots of time to enter a submission for a creative nonfiction piece.
Violent metaphors in medicine are not unusual, although the reference is usually to war rather than to boxing, as it is in this news from the University of Waterloo (Canada). Still, it seems counter-intuitive to closely link violence with healing, but the practice is well entrenched and attempts to counteract it seem to be a ‘losing battle’ (pun intended).
Credit: Gabriel Picolo “2-in-1 punch.” Courtesy: University of Waterloo
Math, biology and nanotechnology are becoming strange yet effective bedfellows in the fight against cancer treatment resistance. Researchers at the University of Waterloo and Harvard Medical School have engineered a revolutionary new approach to cancer treatment that pits a lethal combination of drugs together into a single nanoparticle.
Their work, published online on June 3, 2016 in the leading nanotechnology journal ACS Nano, describes a new method that shrinks tumors and prevents resistance in aggressive cancers by activating two drugs within the same cell at the same time.
Every year thousands of patients die from recurrent cancers that have become resistant to therapy, resulting in one of the greatest unsolved challenges in cancer treatment. By tracking the fate of individual cancer cells under pressure of chemotherapy, biologists and bioengineers at Harvard Medical School studied a network of signals and molecular pathways that allow the cells to generate resistance over the course of treatment.
Using this information, a team of applied mathematicians led by Professor Mohammad Kohandel at the University of Waterloo developed a mathematical model incorporating algorithms that define the phenotypic cell-state transitions of cancer cells in real time while under attack by an anticancer agent. The mathematical simulations enabled them to define the exact molecular behavior and pathway of signals that allow cancer cells to survive treatment over time.
They discovered that the PI3K/AKT kinase, which is often over-activated in cancers, enables cells to undergo a resistance program when pressured with the cytotoxic chemotherapy known as Taxanes, which are conventionally used to treat aggressive breast cancers. This revolutionary window into the life of a cell reveals that vulnerabilities to small molecule PI3K/AKT kinase inhibitors exist, and can be targeted if they are applied in the right sequence with combinations of other drugs.
Previous theories of drug resistance have relied on the hypothesis that only certain, “privileged” cells can overcome therapy. The mathematical simulations demonstrate that, under the right conditions and signaling events, any cell can develop a resistance program.
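The claim that any cell can develop a resistance program can be illustrated with a toy two-state simulation. To be clear, this is a hedged sketch with invented parameters, not the researchers’ actual PI3K/AKT model: every cell starts drug-sensitive, and each surviving cell has some small per-cycle probability of switching into a resistant phenotype, so resistance emerges from ordinary cells rather than a privileged few.

```python
import random

# Toy two-state model (sensitive vs. resistant); all parameters are
# illustrative assumptions, not values from the published study.
def simulate(cells=1000, cycles=10, kill_p=0.5, switch_p=0.05, seed=1):
    """Return (sensitive, resistant) counts after repeated drug cycles."""
    random.seed(seed)
    sensitive, resistant = cells, 0
    for _ in range(cycles):
        survivors = 0
        for _ in range(sensitive):
            if random.random() < kill_p:    # drug kills this sensitive cell
                continue
            if random.random() < switch_p:  # survivor rewires to resistance
                resistant += 1
            else:
                survivors += 1
        sensitive = survivors
    return sensitive, resistant

s, r = simulate()
print(f"sensitive: {s}, resistant: {r}")
```

Even though no cell is special at the start, a resistant population reliably accumulates under treatment pressure, which is the qualitative point the simulations make: resistance is a program any cell can run, not a property of a rare subpopulation.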
“Only recently have we begun to appreciate how important mathematics and physics are to understanding the biology and evolution of cancer,” said Professor Kohandel. “In fact, there is now increasing synergy between these disciplines, and we are beginning to appreciate how critical this information can be to create the right recipes to treat cancer.”
Although previous studies explored the use of drug combinations to treat cancer, the one-two punch approach is not always successful. In the new study, led by Professor Aaron Goldman, a faculty member in the division of Engineering in Medicine at Brigham and Women’s Hospital, the scientists realized a major shortcoming of the combination therapy approach is that both drugs need to be active in the same cell, something that current delivery methods can’t guarantee.
“We were inspired by the mathematical understanding that a cancer cell rewires the mechanisms of resistance in a very specific order and time-sensitive manner,” said Professor Goldman. “By developing a 2-in-1 nanomedicine, we could ensure the cell that was acquiring this new resistance saw the lethal drug combination, shutting down the survival program and eliminating the evidence of resistance. This approach could redefine how clinicians deliver combinations of drugs in the clinic.”
The approach the bioengineers took was to build a single nanoparticle, inspired by computer models, that exploits a technique known as supramolecular chemistry. This nanotechnology enables scientists to build cholesterol-tethered drugs together from “tetris-like” building blocks that self-assemble, incorporating multiple drugs into stable, individual nano-vehicles that target tumors through the leaky vasculature. This 2-in-1 strategy ensures that resistance to therapy never has a chance to develop, bringing together the right recipe to destroy surviving cancer cells.
Using mouse models of aggressive breast cancer, the scientists confirmed the predictions from the mathematical model that both drugs must be deterministically delivered to the same cell.