Tag Archives: Frankenstein

Frankenstein and Switzerland in 2016

The Frankenstein Bicentennial celebration is underway as various events and projects are launched. In a Nov. 12, 2015 posting I mentioned the Frankenstein Bicentennial Project 1818-2018 at Arizona State University (ASU; scroll down about 15% of the way),

… the Transmedia Museum (Frankenstein Bicentennial Project 1818-2018).  This project is being hosted by Arizona State University. From the project homepage,

No work of literature has done more to shape the way people imagine science and its moral consequences than Frankenstein; or The Modern Prometheus, Mary Shelley’s enduring tale of creation and responsibility. The novel’s themes and tropes—such as the complex dynamic between creator and creation—continue to resonate with contemporary audiences. Frankenstein continues to influence the way we confront emerging technologies, conceptualize the process of scientific research, imagine the motivations and ethical struggles of scientists, and weigh the benefits of innovation with its unforeseen pitfalls.

The Frankenstein Bicentennial Project will infuse science and engineering endeavors with considerations of ethics. It will use the power of storytelling and art to shape processes of innovation and empower public appraisal of techno-scientific research and creation. It will offer humanists and artists a new set of concerns around research, public policy, and the ramifications of exploration and invention. And it will inspire new scientific and technological advances inspired by Shelley’s exploration of our inspiring and terrifying ability to bring new life into the world. Frankenstein represents a landmark fusion of science, ethics, and literary expression.

The bicentennial provides an opportunity for vivid reflection on how science is culturally framed and understood by the public, as well as our ethical limitations and responsibility for nurturing the products of our creativity. It is also a moment to unveil new scientific and technological marvels, especially in the areas of synthetic biology and artificial intelligence. Engaging with Frankenstein allows scholars and educators, artists and writers, and the public at large to consider the history of scientific invention, reflect on contemporary research, and question the future of our technological society. Acting as a network hub for the bicentennial celebration, ASU will encourage and coordinate collaboration across institutions and among diverse groups worldwide.

2016 Frankenstein events

Now, there’s an exhibition in Switzerland, where Frankenstein was ‘born’, according to a May 12, 2016 news item on phys.org,

Frankenstein, the story of a scientist who brings to life a cadaver and causes his own downfall, has for two centuries given voice to anxiety surrounding the unrelenting advance of science.

To mark the 200 years since England’s Mary Shelley first imagined the ultimate horror story during a visit to a frigid, rain-drenched Switzerland, an exhibit opens in Geneva Friday called “Frankenstein, Creation of Darkness”.

In the dimly-lit, expansive basement at the Martin Bodmer Foundation, a long row of glass cases holds 15 hand-written, yellowed pages from a notebook where Shelley in 1816 wrote the first version of what is considered a masterpiece of romantic literature.

The idea for her “miserable monster” came when at just 18 she and her future husband, English poet Percy Bysshe Shelley, went to a summer home—the Villa Diodati—rented by literary great Lord Byron on the outskirts of Geneva.

The current private owners of the picturesque manor overlooking Lake Geneva will also open their lush gardens to guided tours during the nearby exhibit which runs to October 9 [May 13 – Oct. 9, 2016].

While the spot today is lovely, with pink and purple lilacs spilling from the terraces and gravel walkways winding through rose-covered arches, in the summer of 1816 the atmosphere was more somber.

A massive eruption from the Tambora volcano in Indonesia wreaked havoc with the global climate that year, and a weather report for Geneva in June on display at the exhibit mentions “not a single leaf” had yet appeared on the oak trees.

To pass the time, poet Lord Byron challenged the band of literary bohemians gathered at the villa to each invent a ghost story, resulting in several famous pieces of writing.

English doctor and author John Polidori came up with the idea for “The Vampyre”, which was published three years later and is considered to have pioneered the romantic vampyre genre, including works like Bram Stoker’s “Dracula”.

That book figures among a multitude of first editions at the Geneva exhibit, including three of Mary Shelley’s “Frankenstein, or the Modern Prometheus”—the most famous story to emerge from the competition.

Here’s a description of the exhibit, from the Martin Bodmer Foundation’s Frankenstein webpage,

To celebrate the 200th anniversary of the writing of this historically influential work of literature, the Martin Bodmer Foundation presents a major exhibition on the origins of Frankenstein, the perspectives it opens and the questions it raises.

A best seller since its first publication in 1818, Mary Shelley’s novel continues to demand attention. The questions it raises remain at the heart of literary and philosophical concerns: the ethics of science, climate change, the technologisation of the human body, the unconscious, human otherness, the plight of the homeless and the dispossessed.

The exposition Frankenstein: Creation of Darkness recreates the beginnings of the novel in its first manuscript and printed forms, along with paintings and engravings that evoke the world of 1816. A variety of literary and scientific works are presented as sources of the novel’s ideas. While exploring the novel’s origins, the exhibition also evokes the social and scientific themes of the novel that remain important in our own day.

For what it’s worth, I have come across analyses suggesting science and technology may not have been the primary concerns at the time. Some interpretations centre on issues around childbirth (very dangerous until modern times) and on fear of disfigurement and disfigured individuals. What makes Frankenstein, both the character and the novel, so fascinating is how flexible the interpretations can be. (For more about Frankenstein and that flexibility, read Susan Tyler Hitchcock’s 2007 book, Frankenstein: A Cultural History.)

There’s one more upcoming Frankenstein event, from The Frankenstein Bicentennial announcement webpage,

On June 14 and 15, 2016, the Brocher Foundation, Arizona State University, Duke University, and the University of Lausanne will host “Frankenstein’s Shadow,” a symposium in Geneva, Switzerland to commemorate the origin of Frankenstein and assess its influence in different times and cultures, particularly its resonance in debates about public policy governing biotechnology and medicine. These dates place the symposium almost exactly 200 years after Mary Shelley initially conceived the idea for Frankenstein on June 16, 1816, and in almost exactly the same geographical location on the shores of Lake Geneva.

If you’re interested in details such as the programme schedule, there’s this PDF,

Frankenstein’s_Shadow Conference

Enjoy!

Intelligence, computers, and robots

Starting tonight, Feb. 14, 2011, you’ll be able to watch a computer compete against two former champions on the US television quiz programme, Jeopardy. The match between the IBM computer, named Watson, and the most accomplished champions ever to play on Jeopardy, Ken Jennings and Brad Rutter, has been four years in the making. From the article by Julie Beswald on physorg.com,

“Let’s finish, ‘Chicks Dig Me’,” intones the somewhat monotone, but not unpleasant, voice of Watson, IBM’s new supercomputer built to compete on the game show Jeopardy!

The audience chuckles in response to the machine-like voice and its all-too-human assertion. But fellow contestant Ken Jennings gets the last laugh as he buzzes in and garners $1,000.

This exchange is part of a January 13 practice round for the world’s first man vs. machine game show. Scheduled to air February 14-16, the match pits Watson against the two best Jeopardy! players of all time. Jennings holds the record for the most consecutive games won, at 74. The other contestant, Brad Rutter, has winnings totaling over $3.2 million.

On Feb. 9, 2011, PBS’s NOVA science program broadcast a documentary about Watson, whose name derives from IBM founder Thomas J. Watson, not Sherlock Holmes’s companion and biographer, Dr. Watson. Titled Smartest Machine on Earth, the show highlighted Watson’s learning process and some of the principles behind artificial intelligence. PBS’s website is featuring a live blogging event for tonight’s match and the Feb. 15 and 16 matches. From the website,

On Monday [Feb. 14, 2011], our bloggers will be Nico Schlaefer and Hideki Shima, two Ph.D. students at Carnegie Mellon University’s Language Technologies Institute who worked on the Watson project.

At the same time that the ‘Watson’ event was being publicized last week, another news item on artificial intelligence and learning was making the rounds. From a Feb. 9, 2011 article by Mark Ward on BBC News ,

Robots could soon have an equivalent of the internet and Wikipedia.

European scientists have embarked on a project to let robots share and store what they discover about the world.

Called RoboEarth it will be a place that robots can upload data to when they master a task, and ask for help in carrying out new ones.

Researchers behind it hope it will allow robots to come into service more quickly, armed with a growing library of knowledge about their human masters. [emphasis mine]

You can read a first person account of the RoboEarth project on the IEEE (Institute of Electrical and Electronics Engineers) Spectrum’s Automaton Robotics blog in a posting by Markus Waibel,

As part of the European project RoboEarth, I am currently one of about 30 people working towards building an Internet for robots: a worldwide, open-source platform that allows any robot with a network connection to generate, share, and reuse data. The project is set up to deliver a proof of concept to show two things:

* RoboEarth greatly speeds up robot learning and adaptation in complex tasks.

* Robots using RoboEarth can execute tasks that were not explicitly planned for at design time.

The vision behind RoboEarth is much larger: Allow robots to encode, exchange, and reuse knowledge to help each other accomplish complex tasks. This goes beyond merely allowing robots to communicate via the Internet, outsourcing computation to the cloud, or linked data.

But before you yell “Skynet!,” think again. While the most similar things science fiction writers have imagined may well be the artificial intelligences in Terminator, the Space Odyssey series, or the Ender saga, I think those analogies are flawed. [emphasis mine] RoboEarth is about building a knowledge base, and while it may include intelligent web services or a robot app store, it will probably be about as self-aware as Wikipedia.

That said, my colleagues and I believe that if robots are to move out of the factories and work alongside humans, they will need to systematically share data and build on each other’s experience.

Unfortunately, Markus Waibel doesn’t explain why he thinks the analogies are flawed but he does lay out the reasoning for why robots should share information. For a more approachable and much briefer account, you can check out Ariel Schwartz’s Feb. 10, 2011 article on the Fast Company website,

The EU-funded [European Union] RoboEarth project is bringing together European scientists to build a network and database repository for robots to share information about the world. They will, if all goes as planned, use the network to store and retrieve information about objects, locations (including maps), and instructions about completing activities. Robots will be both the contributors and the editors of the repository.

With RoboEarth, one robot’s learning experiences are never lost–the data is passed on for other robots to mine. As RedOrbit explains, that means one robot’s experiences with, say, setting a dining room table could be passed on to others, so the butler robot of the future might know how to prepare for dinner guests without any prior programming.

There is a RoboEarth website, so we humans can get more information and hopefully keep up with the robots.
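The basic RoboEarth idea described above (a robot uploads what it learns, and another robot queries the shared repository for tasks it was never programmed to do) can be sketched as a toy shared knowledge base. Everything below, including the class and method names, is my own illustrative assumption and not the actual RoboEarth API:

```python
# Toy sketch of a RoboEarth-style shared knowledge base: robots upload
# "action recipes" they have mastered and query for recipes they lack.
# All names here are illustrative; this is not the real RoboEarth API.

class KnowledgeBase:
    """A shared repository mapping task names to action recipes."""
    def __init__(self):
        self._recipes = {}

    def upload(self, task, recipe):
        # A robot contributes the steps it learned for a task.
        self._recipes[task] = recipe

    def query(self, task):
        # Another robot asks for help with a task it doesn't know.
        return self._recipes.get(task)


class Robot:
    def __init__(self, name, kb):
        self.name = name
        self.kb = kb
        self.skills = {}

    def learn(self, task, recipe):
        self.skills[task] = recipe
        self.kb.upload(task, recipe)  # share the new skill with everyone

    def perform(self, task):
        # Fall back to the shared knowledge base for unknown tasks.
        recipe = self.skills.get(task) or self.kb.query(task)
        if recipe is None:
            return f"{self.name}: no knowledge of '{task}'"
        return f"{self.name}: " + " -> ".join(recipe)


kb = KnowledgeBase()
butler = Robot("butler", kb)
helper = Robot("helper", kb)

# One robot learns to set a table; the other reuses that knowledge,
# echoing the dinner-guest example in the Fast Company excerpt.
butler.learn("set table", ["fetch plates", "fetch cutlery", "arrange settings"])
print(helper.perform("set table"))
# helper: fetch plates -> fetch cutlery -> arrange settings
```

The point of the sketch is the fallback in `perform`: the second robot executes a task "not explicitly planned for at design time" because the first robot's experience was never lost.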

Happily, as is the case with increasing frequency, there’s a YouTube video. This one features a robot downloading information from RoboEarth and using that information in a quasi-hospital setting,

I find this use of popular entertainment to communicate scientific advances, particularly obvious with Watson, quite interesting. On this same theme of popular culture as a means of science communication, I featured a Lady Gaga parody by a lab working on Alzheimer’s in my Jan. 28, 2011 posting. I also find the reference to “human masters” in the BBC article, along with Waibel’s flat assertion that some science fiction analogies about artificial intelligence are flawed, indicative of some very old anxieties as expressed in Mary Shelley’s Frankenstein.

ETA Feb. 14, 2011: The latest posting on the Pasco Phronesis blog, I, For One, Welcome Our Robot Game Show Overlords, features another opinion about the Watson appearances on Jeopardy. From the posting,

What will this mean? Given that a cursory search suggests opinion is divided on whether Watson will win this week, I have no idea. While it will likely be entertaining, and does represent a significant step forward in computing capabilities, I can’t help but think about the supercomputing race that makes waves only when a new computational record is made. It’s nice, and might prompt government action should they lose the number one standing. But what does it mean? What new outcomes do we have because of this? The conversation is rarely about what, to me, seems more important.

Pop culture, science communication, and nanotechnology

A few years back I wrote a paper for the Cascadia Nanotech Symposium (March 2007, held in Vancouver) called Engaging Nanotechnology: pop culture, media, and public awareness. I was reminded of it a few days ago when I saw a mention on Andrew Maynard’s 2020 Science blog of a seminar titled Biopolitics of Popular Culture, being held in Irvine, California on Dec. 4, 2009 by the Institute for Ethics and Emerging Technologies. (You can read more of Andrew’s comments here or check out the meeting details here.) From the meeting website,

Popular culture is full of tropes and cliches that shape our debates about emerging technologies. Our most transcendent expectations for technology come from pop culture, and the most common objections to emerging technologies come from science fiction and horror, from Frankenstein and Brave New World to Gattaca and the Terminator.

Why is it that almost every person in fiction who wants to live a longer than normal life is evil or pays some terrible price? What does it say about attitudes towards posthuman possibilities when mutants in Heroes or the X-Men, or cyborgs in Battlestar Galactica or Iron Man, or vampires in True Blood or Twilight are depicted as capable of responsible citizenship?

Is Hollywood reflecting a transhuman turn in popular culture, helping us imagine a day when magical and muggle can live together in a peaceful Star Trek federation? Will the merging of pop culture, social networking and virtual reality into a heightened augmented reality encourage us all to make our lives a form of participative fiction?

During this day long seminar we will engage with culture critics, artists, writers, and filmmakers to explore the biopolitics that are implicit in depictions of emerging technology in literature, film and television.

I’m not sure what they mean by biopolitics, especially after the lecture I attended at Simon Fraser University’s downtown campus last night (Nov. 12, 2009), Liminal Livestock. Last night’s lecture by Susan Squier highlighted (this is oversimplified) the relationship between women and chickens in the light of reproductive technologies.  From the lecture description,

Adapting SubRosa Art Collective’s memorable question, this talk asks: “What does it mean, to feminism and to agriculture, that women are like chickens and chickens are like women?” As liminal livestock, chickens play a central role in our gendered agricultural imaginary: the zone where we find the “speculative, propositional fabric of agricultural thought.” Analyzing several children’s stories, a novel, and a documentary film, the talk seeks to discover some of the factors that help to shape the role of women in agriculture, and the role of agriculture in women’s lives.

Squier also discussed reproductive technologies at some length, although it’s not obvious from the description that the topic would arise. She discussed the transition of chicken raising from a woman’s job to a man’s job, which coincided with the rise of chicken factory farms. Squier also noted the current interest in raising chickens in city and suburban areas without speculating on possible cultural impacts.

The lecture covered selective breeding and the shift of university poultry science departments from the study of science to the study of increasing chicken productivity, which led to tampering with genes and other reproductive technologies. One thing I didn’t realize is that chicken eggs are used in studies of human reproduction. Disturbingly, Squier spoke with an American scientist working on human reproduction who moved to Britain because the chicken eggs available in the US are of such poor quality.

The relationship between women and chickens was metaphorical and was illustrated through popular children’s stories and pop culture artifacts (e.g., poultry beauty pageants featuring women, not chickens) in a way that would require reproducing far more of the lecture than I can here. So if you are interested, I understand that Squier has a book about women and chickens being published, although I can’t find a publication date.

Squier’s lecture and the meeting of the Institute for Ethics and Emerging Technologies present different ways of integrating pop culture elements into the discussion about science and emerging technologies. Since I’m tooting my own horn, I’m going to finish with my thoughts on the matter as written in my Cascadia Nanotechnology Symposium paper,

The process of accepting, rejecting, or changing new sciences and new technologies seems more akin to a freewheeling, creative conversation with competing narratives than a transfer of information from experts to nonexperts as per the science literacy model.

The focus on establishing how much awareness the public has about nanotechnology by measuring the number of articles in the newspaper or items in the broadcast media or even tracking the topic in the blogosphere is useful as one of a set of tools.

Disturbing as it is to think that it could be used for purely manipulative purposes, finding out how people develop their attitudes towards new technologies and the interplay between cognition, affect, and values has the potential to help us better understand ourselves and our relationship to the sciences. (In this paper, the terms science and technology are being used interchangeably, as is often the case with nanotechnology.)

Pop culture provides a valuable view into how nonexperts learn about science (books, television, etc.) and accept technological innovations (e.g. rejecting the phonograph as a talking book technology but accepting it for music listening).

There is a collaborative and interactive process at the heart of the nanotechnology ‘discussion’. For example, Drexler appears to be responding to some of his critics by revising some of his earlier suppositions about how nanotechnology would work. Interestingly, he also appears to be downplaying his earlier concerns about nanoassemblers running amok and unleashing the ‘goo’ scenario on us all. (BBC News, June 9, 2004)

In reviewing all of the material about communicating science, public attitudes, and values, one thing stands out: time. Electricity was seen by some as deeply disturbing to the cosmic forces of the universe. There was resistance to the idea for decades and, in some cases (the Amish), that resistance lives on. Despite all this, there is not a country in the world today that doesn’t have electricity.

One final note: I didn’t mean to suggest the inexorable adoption of any and all technologies, my intent was to point out the impossibility of determining a technology’s future adoption or rejection by measuring contemporary attitudes, hostile or otherwise.

’nuff said for today. Happy weekend!

Art conservation and nanotechnology; the science of social networks; carbon nanotubes and possible mesothelioma; Eric Drexler has a few words

It looks like nanotechnology innovations in the field of art conservation may help preserve priceless works for longer and with less damage. The problem as articulated in Michael Berger’s article on Nanowerk is,

“Nowadays, one of the most important problems faced during the cleaning of works of art is the removal of organic materials, mainly acrylic polymers, applied in the past as consolidants or protective coatings,” explains Piero Baglioni, a professor of Physical Chemistry at the University of Florence. “Unfortunately, their application induces a drastic alteration of the interfacial properties of the artwork and leads to increased degradation. These organic materials must therefore be removed.”

Baglioni and his colleagues at the University of Florence have developed “… a micro-emulsion cleaning agent that is designed to dissolve only the organic molecules on the surface of a painting …”

This is a little off Azonano’s usual beat (and mine too), but the US Army Research Laboratory is funding an interdisciplinary research center at Rensselaer Polytechnic Institute for the study of social and cognitive networks. From the news item,

“Rensselaer offers a unique research environment to lead this important new network science center,” said Rensselaer President Shirley Ann Jackson. “We have assembled an outstanding team of researchers, and built powerful new research platforms. The team will work with one of the largest academic supercomputing centers in the world – the Rensselaer Computational Center for Nanotechnology Innovations – and the leading visualization and simulation capabilities within our new Experimental Media and Performing Arts Center. The Center for Social and Cognitive Networks will bring together our world-class scientists in the areas of computer science, cognitive science, physics, Web science, and mathematics in an unprecedented collaboration to investigate all aspects of the ever-changing and global social climate of today.”

The center will study the fundamentals of social and cognitive networks and their roles in today’s society and organizations, including the U.S. Army. The goal will be to gain a deeper understanding of these networks and build a firm scientific basis in the field of network science. The work will include research on large social networks, with a focus on networks with mobile agents. An example of a mobile agent is someone who is interacting (e.g., communicating, observing, helping, distracting, interrupting, etc.) with others while moving around the environment.

My suspicion is that the real goal of the work is to exploit the data for military advantage, if possible. Any other benefits would be incidental. Of course, a fair chunk of the technology we enjoy today (the internet, for example) was investigated by the military first.

I’ve mentioned carbon nanotubes and possible toxicology before. Specifically, some carbon nanotubes resemble asbestos fibers, and pilot studies have suggested they may behave the same way when taken into the body by one means or another. There is new confirmation of this hypothesis from a study in which mice inhaled carbon nanotubes. From the news item on Nanowerk,

Using mice in an animal model study, the researchers set out to determine what happens when multi-walled carbon nanotubes are inhaled. Specifically, researchers wanted to determine whether the nanotubes would be able to reach the pleura, which is the tissue that lines the outside of the lungs and is affected by exposure to certain types of asbestos fibers which cause the cancer mesothelioma. The researchers used inhalation exposure and found that inhaled nanotubes do reach the pleura and cause health effects.

This was a single exposure, and the mice recovered after three months. More studies will be needed to determine the effects of repeated exposure. The study (Inhaled Carbon Nanotubes Reach the Sub-Pleural Tissue in Mice by Dr. James Bonner, Dr. Jessica Ryman-Rasmussen, Dr. Arnold Brody, et al.) can be found in the Oct. 25, 2009 issue of Nature Nanotechnology.

On Friday (Oct. 23, 2009) I mentioned an essay by Chris Toumey on the forthcoming 50th anniversary of Richard Feynman’s seminal talk, There’s plenty of room at the bottom. Today I found a response to the essay by Eric Drexler.  From Drexler’s essay on Nanowerk,

Unfortunately, yesterday’s backward-looking guest article in Nanowerk reinforces the widespread but quite mistaken idea that my views are essentially the opposite of what I’ve stated above, and that those perverse ideas are also those of the Foresight Institute. I cannot speak for that organization, or vice versa, because I left it years ago. Contrary to what the article may suggest, I have no affiliation with the organization whatsoever. Regarding terminology, it is of course entirely appropriate to use the term “nanotechnology” to describe nanoscale technologies. The idea that there is a conflict between progress in the field and future applications of that progress is puzzling. This idea appears to stem from a strange episode that came to a head during the political push for the bill that established and funded the U.S. National Nanotechnology Initiative, an episode in which some leading science spokesmen quite properly rejected a collection of popular fantasies, but quite improperly attributed those fantasies to me. Reading claims by confused enthusiasts and the press that “Drexler says this” or “Drexler says that” is no substitute for reading my journal articles, or the technical analysis in my book, Nanosystems, and in my MIT dissertation. The failure of these leaders to do their homework has had substantial and lingering toxic effects.

(My own focus was on the ‘origin’ story for nanotechnology, not on Drexler’s theories.) If I understand the situation rightly, much of the controversy has its roots in Drexler’s popular book, Engines of Creation. It was written over 20 years ago and struck a note which reverberates to this day. The irony is that there are writers who’d trade places with Drexler in a nano second. Imagine having that kind of impact on society and culture (primarily in the US). The downside, as Drexler has discovered, is that the idea or story has taken on a life of its own. For a similar example, take Mary Shelley’s book, where Frankenstein is the scientist’s name, not the monster’s. However, the character took on a life of its own, and the name along with it.

The availability heuristic and the perception of risk

It’s taking a lot longer to go through the Risk Management Principles for Nanotechnology article than I expected. But let’s move onwards. ‘Availability’ is the other main heuristic used when trying to understand how people perceive risk. This one concerns how we assess the likelihood of one or more risks.

According to researchers, individuals who can easily recall a memory specific to a given harm are predisposed to overestimating the probability of its recurrence, compared to other more likely harms to which no memory is attached. (Nanoethics, vol. 2, 2008, p. 49)

This memory extends beyond your personal experience (although that remains the most powerful) all the way to reading or hearing about an incident. The effect can also be exacerbated by imagery and social reinforcement. Probably the most powerful recent example would be ‘frankenfoods’. We read about the cloning of Dolly the sheep, who died prematurely; there was the ‘stem cell’ debate; and there was ‘mad cow disease’; all of which somehow got mixed together in a debate on genetically modified food that evolved into a discussion about biotechnology in general. The whole thing was summed up as ‘frankenfood’, a term which fused a very popular icon of science gone mad, Frankenstein, with the food we put in our mouths. (Note: It is a little more complicated than that but I’m not in the mood to write a long paper or dissertation where every nuance and development is discussed.) The term was propelled by the media, and activists had one of their most successful campaigns.

Getting back to ‘availability’, it is a very powerful heuristic to use when trying to understand how people perceive risk.
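As a toy illustration (my own, not from the risk management paper), the availability bias can be modelled as weighting an event’s estimated probability by how easily instances come to mind rather than by its actual frequency. The numbers and the weighting rule below are illustrative assumptions:

```python
# Toy model of the availability heuristic: perceived risk is driven by
# ease of recall (vividness, media coverage), not by actual frequency.
# The figures and the multiplicative weighting rule are assumptions.

def perceived_probability(risks):
    """risks: {name: (actual_freq, recall_weight)} -> normalized perceived probabilities."""
    salience = {name: freq * weight for name, (freq, weight) in risks.items()}
    total = sum(salience.values())
    return {name: s / total for name, s in salience.items()}

risks = {
    # (actual frequency, ease of recall -- vivid, well-publicized harms score high)
    "vivid, well-publicized harm": (0.01, 50.0),
    "common but unmemorable harm": (0.10, 1.0),
}

for name, p in perceived_probability(risks).items():
    print(f"{name}: perceived {p:.2f}")
# The rare but vivid harm dominates perception despite being 10x less frequent.
```

This is the cascade the heuristic describes: each memorable retelling raises the recall weight, which inflates the perceived probability, which in turn generates more retellings.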

The thing with ‘frankenfoods’ is that it wasn’t planned. Susan Tyler Hitchcock, in her book Frankenstein: A Cultural History (2007), traces the birth of the term to a 1992 letter written by Paul Lewis to the New York Times and follows its use as a clarion cry for activists, the media, and a newly worried public. Lewis coined the phrase and, one infers from the book, did so casually. The phrase was picked up by other media outlets and other activists (Lewis is both a professor and an activist). For the full story, check out Hitchcock’s book, pp. 288-294.

I have heard the ETC Group credited with starting the ‘frankenfoods’ debate and pushing the activist agenda. While they may have been active in the debate, I have not been able to find any documentation to support the contention that the ETC Group made it happen. (Please let me know if you have found something.)

The authors (Marchant, Sylvester, and Abbott) of this risk management paper feel that nanotechnology is vulnerable to the same sort of cascading effects that the ‘availability’ heuristic provides a framework for understanding. Coming next, a ‘new’ risk management model.