Tag Archives: Centre for the Study of Existential Risk

Nanotechnology at the movies: Transcendence opens April 18, 2014 in the US & Canada

Screenwriter Jack Paglen has an intriguing interpretation of nanotechnology, one he (along with the director) shares in an April 13, 2014 article by Larry Getlen for the NY Post and in his movie, Transcendence, which opens in the US and Canada on April 18, 2014. First, here are a few of the more general ideas underlying his screenplay,

In “Transcendence” — out Friday [April 18, 2014] and directed by Oscar-winning cinematographer Wally Pfister (“Inception,” “The Dark Knight”) — Johnny Depp plays Dr. Will Caster, an artificial-intelligence researcher who has spent his career trying to design a sentient computer that can hold, and even exceed, the world’s collective intelligence.

After he’s shot by antitechnology activists, his consciousness is uploaded to a computer network just before his body dies.

“The theories associated with the film say that when a strong artificial intelligence wakes up, it will quickly become more intelligent than a human being,” screenwriter Jack Paglen says, referring to a concept known as “the singularity.”

It should be noted that there are real anti-technology terrorists. I don't think I've covered that topic in a while; my most recent piece is an Aug. 31, 2012 posting which, despite the title, "In depth and one year later—the nanotechnology bombings in Mexico," provides an overview of sorts. For a more up-to-date view, you can read Eric Markowitz's April 9, 2014 article for Vocativ.com. I do have one observation about the article, where Markowitz links some recent protests in San Francisco to the bombings in Mexico. Those protests in San Francisco seem more like a 'poor vs. the rich' situation where the rich happen to come from the technology sector.

Getting back to “Transcendence” and singularity, there’s a good Wikipedia entry describing the ideas and some of the thinkers behind the notion of a singularity or technological singularity, as it’s sometimes called (Note: Links have been removed),

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term “singularity” in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[2] The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[3] Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

Proponents of the singularity typically postulate an “intelligence explosion”,[4][5] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass that of any human.

Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there is an 80% probability that the singularity will occur between 2017 and 2112.[8]
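As an aside, the arithmetic behind that "intelligence explosion" intuition is easy to sketch for yourself. Here's a toy model (my own illustration in Python; it comes from neither the Wikipedia entry nor the movie) in which each generation of machine designs a successor that improves on it in proportion to its own current capability,

```python
# A toy caricature of an "intelligence explosion" -- illustrative only,
# with entirely made-up numbers. If each generation improves in
# proportion to its own capability, growth is faster than exponential
# and runs away after a finite number of steps.
c = 1.0   # capability of the first human-level AI (arbitrary units)
k = 0.1   # assumed improvement factor per generation

for generation in range(1, 200):
    c = c * (1 + k * c)   # each mind designs a somewhat better successor
    if c > 1e12:
        print(f"capability passes 1e12 after {generation} generations")
        break
```

With these made-up numbers the runaway happens in under 20 generations; whether real AI development would behave anything like this simple recurrence is, of course, exactly what's contested.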

The 'technological singularity' is controversial and contested (from the Wikipedia entry),

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[104] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[105]
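That log-log point is also easy to demonstrate. Here's a rough sketch (again, my own toy illustration, not Myers's or Kurzweil's data) showing that even randomly chosen 'milestones' will line up neatly on log-log axes, provided they're spread across many orders of magnitude in time,

```python
import numpy as np

rng = np.random.default_rng(0)
# 40 random "milestone" ages between ~10 years and ~4 billion years
# before present, sampled uniformly in log-time -- mimicking a
# chronicler who records a few events per order of magnitude.
ages = np.sort(10 ** rng.uniform(1, 9.6, size=40))[::-1]
gaps = ages[:-1] - ages[1:]   # time from each event to the next

# Regress log(gap) on log(age): even random data yield a convincing line.
log_age, log_gap = np.log10(ages[:-1]), np.log10(gaps)
slope = np.polyfit(log_age, log_gap, 1)[0]
r = np.corrcoef(log_age, log_gap)[0, 1]
print(f"log-log slope = {slope:.2f}, correlation = {r:.2f}")
```

On a typical random seed the correlation comes out around 0.9 or better: a tidy straight line conjured from noise, which is the essence of the criticism.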

By the way, this movie is mentioned briefly in the pop culture portion of the Wikipedia entry.

Getting back to Paglen and his screenplay, here’s more from Getlen’s article,

… as Will’s powers grow, he begins to pull off fantastic achievements, including giving a blind man sight, regenerating his own body and spreading his power to the water and the air.

This conjecture was influenced by nanotechnology, the field of manipulating matter at the scale of a nanometer, or one-billionth of a meter. (By comparison, a human hair is around 70,000-100,000 nanometers wide.)

“In some circles, nanotechnology is the holy grail,” says Paglen, “where we could have microscopic, networked machines [emphasis mine] that would be capable of miracles.”

The potential uses of, and implications for, nanotechnology are vast and widely debated, but many believe the effects could be life-changing.

“When I visited MIT,” says Pfister, “I visited a cancer research institute. They’re talking about the ability of nanotechnology to be injected inside a human body, travel immediately to a cancer cell, and deliver a payload of medicine directly to that cell, eliminating [the need to] poison the whole body with chemo.”

“Nanotechnology could help us live longer, move faster and be stronger. It can possibly cure cancer, and help with all human ailments.”

I find the 'golly gee whiz-ness' of Paglen's and Pfister's take on nanotechnology disconcerting but they can't be dismissed. There are projects testing retinal implants that allow people to see again. There is a lot of work in medicine aimed at gentler therapeutic procedures whose actions are specific to diseased tissue while ignoring healthy tissue (sadly, this is still not possible). As for human enhancement, I have so many pieces that it has its own category on this blog. I first wrote about it in a four-part series starting with this one: Nanotechnology enables robots and human enhancement: part 1. (You can read the series by scrolling past the end of the posting and clicking on the next part, or search the category and pick through the more recent pieces.)

I'm not sure if this error is Paglen's or Getlen's, but nanotechnology is not "microscopic, networked machines," as Paglen's quote strongly suggests. Some nanoscale devices could be described as machines (often called nanobots) but there are also nanoparticles, nanotubes, nanowires, and more that cannot be described as machines or, for that matter, devices. More importantly, it seems Paglen's main concern is this,

“One of [science-fiction author] Arthur C. Clarke’s laws is that any sufficiently advanced technology is indistinguishable from magic. That very quickly would become the case if this happened, because this artificial intelligence would be evolving technologies that we do not understand, and it would be capable of miracles by that definition,” says Paglen. [emphasis mine]

This notion of "evolving technologies that we do not understand" brings to mind a project announced at the University of Cambridge (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn't especially new in works of fiction. It's not always mentioned directly but the underlying anxiety often has to do with intelligence and concerns over an 'explosion of intelligence'. The question it raises, 'what if our machines/creations become more intelligent than humans?', has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

While I do have some reservations about how Paglen and Pfister describe the science, I appreciate their interest in communicating the scientific ideas, particularly those underlying Paglen’s screenplay.

For anyone who may be concerned about the likelihood of emulating a human brain and uploading it to a computer, there's an April 13, 2014 article by Luke Muehlhauser and Stuart Armstrong for Slate discussing that very possibility (Note 1: Links have been removed; Note 2: Armstrong is mentioned in this posting's excerpt from the Wikipedia entry on Technological Singularity),

Today scientists can’t even emulate the brain of a tiny worm called C. elegans, which has 302 neurons, compared with the human brain’s 86 billion neurons. Using models of expected technological progress on the three key problems, we’d estimate that we wouldn’t be able to emulate human brains until at least 2070 (though this estimate is very uncertain).

But would an emulation of your brain be you, and would it be conscious? Such questions quickly get us into thorny philosophical territory, so we’ll sidestep them for now. For many purposes—estimating the economic impact of brain emulations, for instance—it suffices to know that the brain emulations would have humanlike functionality, regardless of whether the brain emulation would also be conscious.

Paglen/Pfister seem to be equating intelligence (brain power) with consciousness while Muehlhauser/Armstrong simply sidestep the issue. As they (Muehlhauser/Armstrong) note, it’s “thorny.”

If you consider thinkers like David Chalmers, who suggest everything has consciousness, then it follows that computers/robots/etc. may not appreciate having a human brain emulation, which takes us back into Battlestar Galactica territory. From my March 19, 2014 posting (one of the postings where I recounted various TED 2014 talks in Vancouver), here's more about David Chalmers,

Finally, I wasn't expecting to write about David Chalmers so my notes aren't very good. He's a philosopher; here's an excerpt from Chalmers' TED biography,

In his work, David Chalmers explores the “hard problem of consciousness” — the idea that science can’t ever explain our subjective experience.

David Chalmers is a philosopher at the Australian National University and New York University. He works in philosophy of mind and in related areas of philosophy and cognitive science. While he’s especially known for his theories on consciousness, he’s also interested (and has extensively published) in all sorts of other issues in the foundations of cognitive science, the philosophy of language, metaphysics and epistemology.

Chalmers provided an interesting bookend to a session that opened with a brain researcher (Nancy Kanwisher) who breaks the brain down into various processing regions (a vast oversimplification but the easiest way to summarize her work in this context). Chalmers reviewed the 'science of consciousness' and noted that current work in science tends to be reductionist, i.e., examining parts of things such as brains, and that the same reductionism has been brought to the question of consciousness.

Rather than trying to prove consciousness, Chalmers proposes that we consider it a fundamental in the same way that we consider time, space, and mass to be fundamental. He noted that there's precedent for such additions and gave the example of James Clerk Maxwell and his proposal to consider electricity and magnetism as fundamental.

Chalmers' next suggestion is a little more outré and based on some thinking (sorry, I didn't catch the theorist's name) that suggests everything, including photons, has a type of consciousness (but not intelligence).

Have a great time at the movie!

Almost Human (TV series), smartphones, and anxieties about life/nonlife

The US-based Fox Broadcasting Company is set to premiere a new futuristic television series, Almost Human, over two nights, Nov. 17 and 18, 2013, for US and Canadian viewers. Here's a description of the premise from its Wikipedia essay (Note: Links have been removed),

The series is set thirty-five years in the future when humans in the Los Angeles Police Department are paired up with lifelike androids; a detective who has a dislike for robots partners with an android capable of emotion.

One of the showrunners, Naren Shankar, seems to have also been functioning as both a science consultant and a crime-writing consultant, in addition to his other duties. From a Sept. 4, 2013 article by Lisa Tsering for Indiawest.com,

FOX is the latest television network to utilize the formidable talents of Naren Shankar, an Indian American writer and producer best known to fans for his work on “Star Trek: Deep Space Nine,” “Star Trek: Voyager” and “Star Trek: The Next Generation” as well as “Farscape,” the recently cancelled ABC series “Zero Hour” and “The Outer Limits.”

Set 35 years in the future, “Almost Human” stars Karl Urban and Michael Ealy as a crimefighting duo of a cop who is part-machine and a robot who is part-human. [emphasis mine]

“We are extrapolating the things we see today into the near future,” he explained. For example, the show will comment on the pervasiveness of location software, he said. “There will also be issues of technology such as medical ethics, or privacy; or how technology enables the rich but not the poor, who can’t afford it.”

Speaking at Comic-Con July 20 [2013], Shankar told media there, “Joel [J.H. Wyman] was looking for a collaboration with someone who had come from the crime world, and I had worked on ‘CSI’ for eight years.

“This is like coming back to my first love, since for many years I had done science fiction. It’s a great opportunity to get away from dismembered corpses and autopsy scenes.”

There’s plenty of drama — in the new series, the year is 2048, and police officer John Kennex (Karl Urban, “Dr. Bones” from the new “Star Trek” films) is trying to bounce back from one of the most catastrophic attacks ever made against the police department. Kennex wakes up from a 17-month coma and can’t remember much, except that his partner was killed; his girlfriend left him and one of his legs has been amputated and is now outfitted with a high-tech synthetic appendage. According to police department policy, every cop must partner with a robot, so Kennex is paired with Dorian (Ealy), an android with an unusual glitch that makes it have human emotions.

Shankar took an unusual path into television. He started college at age 16 and attended Cornell University, where he earned a B.Sc., an M.S. and a Ph.D. in engineering physics and electrical engineering, and was a member of the elite Kappa Alpha Society. Deciding he didn't want to work as a scientist, he moved to Los Angeles to try to become a writer.

Shankar is eager to move in a new direction with “Almost Human,” which he says comes at the right time. “People are so technologically sophisticated now that maybe the audience is ready for a show like this,” he told India-West.

I am particularly intrigued by the 'man who's part machine and the machine that's part human' concept (something I've called machine/flesh in previous postings, such as this May 9, 2012 posting titled 'Everything becomes part machine') and was looking forward to seeing how they would integrate this concept, along with some of the more recent scientific work on prosthetics and robots, into the stories, given they had an engineer (albeit one with lots of crime-writing experience) on the team. Sadly, only days after Tsering's article was published, Shankar parted ways with Almost Human, according to the Sept. 10, 2013 posting on the Almost Human blog,

So this was supposed to be the week that I posted a profile of Naren Shankar, for whom I have developed a full-on crush–I mean, he has a PhD in Electrical Engineering from Cornell, he was hired by Gene Roddenberry to be science consultant on TNG, he was saying all sorts of great things about how he wanted to present the future in AH…aaaand he quit as co-showrunner yesterday, citing “creative differences.” That leaves Wyman as sole showrunner, with no plans to replace Shankar.

I'd like to base some of my comments on the previews; unfortunately, Fox Broadcasting, in its infinite wisdom, has decided to block Canadians from watching Almost Human previews online. (Could someone please explain why? I mean, Canadians will be tuning in to watch, or record for future viewing, the series premiere on the 17th & 18th of November 2013 just like our US neighbours, so why can't we watch the previews online?)

Getting back to machine/flesh (humans with prostheses) and life/nonlife (androids with feelings), it seems that Almost Human (as did the latest version of Battlestar Galactica, from 2004-2009) may be giving a popular culture voice to some contemporary anxieties about the boundary, or lack thereof, between humans and machines and between life and nonlife. I've touched on this topic many times, both within and outside the popular culture context. Probably one of my more comprehensive essays on machine/flesh is Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism from August 30, 2011, which includes this quote from a still earlier posting on this topic,

Here’s an excerpt from my Feb. 2, 2010 posting which reinforces what Gregor [Gregor Wolbring, University of Calgary] is saying,

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman's Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You'll Envy for Fast Company]

Here’s something else from the Hochman article,

But Bailey is most surprised by his own reaction. "When I'm wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It's a very powerful thing."

Bailey isn't 'almost human', he's 'above human'. As Hochman points out repeatedly throughout his article, this sentiment is not confined to Bailey. My guess is that Kennex (Karl Urban's character) in Almost Human doesn't echo Bailey's sentiments and, instead, feels he's not quite human, while the android, Dorian (Michael Ealy's character), struggles with his feelings in a human way that clashes with Kennex's perspective on what is human and what is not (or what might be called the boundary between life and nonlife).

Into this mix, one could add the rising anxiety around 'intelligent' machines present in real life as well as in fiction, as per this November 12 (?), 2013 article by Ian Barker for Beta News,

The rise of intelligent machines has long been fertile ground for science fiction writers, but a new report by technology research specialists Gartner suggests that the future is closer than we think.

“Smartphones are becoming smarter, and will be smarter than you by 2017,” says Carolina Milanesi, research vice president at Gartner. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data”.

Your smartphone will be able to predict your next move or your next purchase based on what it knows about you. This will be made possible by gathering data using a technique called “cognizant computing”.

Gartner analysts will be discussing the future of smart devices at the Gartner Symposium/ITxpo 2013 in Barcelona from November 10-14 [2013].

The Gartner Symposium/ITxpo in Barcelona is ending today (Nov. 14, 2013) but should you be curious about it, you can go here to learn more.

This notion that machines might (or will) get smarter or more powerful than humans (or wizards) is explored by Will.i.am (of the Black Eyed Peas) and futurist Brian David Johnson in their upcoming comic book, Wizards and Robots (mentioned in my Oct. 6, 2013 posting). This notion of machines or technology overtaking human life is also being discussed at the University of Cambridge, where there's talk of founding a Centre for the Study of Existential Risk (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn't especially new in works of fiction. It's not always mentioned directly but the underlying anxiety often has to do with intelligence and concerns over an 'explosion of intelligence'. The question it raises, 'what if our machines/creations become more intelligent than humans?', has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Our emerging technologies give rise to questions about what constitutes life and where humans might fit in. For example,

  • are sufficiently advanced machines a new form of life?
  • what does it mean when human bodies are partially integrated at the neural level with machinery?
  • what happens when machines have feelings?
  • etc.

While this doesn’t exactly fit into my theme of life/nonlife or machine/flesh, this does highlight how some popular culture efforts are attempting to integrate real science into the storytelling. Here’s an excerpt from an interview with Cosima Herter, the science consultant and namesake/model for one of the characters on Orphan Black (from the March 29, 2013 posting on the space.ca blog),

Cosima Herter is Orphan Black's Science Consultant, and the inspiration for her namesake character in the series. In real life, Real Cosima is a PhD student in the History of Science, Technology, and Medicine Program at the University of Minnesota, working on the History and Philosophy of Biology. Hive interns Billi Knight & Peter Rowley spoke with her about her role on the show and the science behind it…

Q: Describe your role in the making of Orphan Black.

A: I'm a resource for the biology, particularly insofar as evolutionary biology is concerned. I study the history and the philosophy of biology, so I do offer some suggestions and some creative ideas, but also help correct some of the misconceptions about science. I offer different angles and alternatives to look at the way biological science is represented, so (it's) not reduced to your stereotypical tropes about evolutionary biology and cloning, but also to provide some accuracy for the scripts.

– See more at: http://www.space.ca/article/Orphan-Black-science-consultant

For anyone not familiar with the series, from the Wikipedia essay (Note: Links have been removed),

Orphan Black is a Canadian science fiction television series starring Tatiana Maslany as several identical women who are revealed to be clones.

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn't especially new in works of fiction. It's not always mentioned directly but the underlying anxiety often has to do with intelligence and concerns over an 'explosion of intelligence'. The question it raises, 'what if our machines/creations become more intelligent than humans?', has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price, Martin Rees (Emeritus Professor of Cosmology and Astrophysics), and Jaan Tallinn (co-founder of Skype) are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article about their concerns for The Conversation, an Australia-based website,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone who's looking for more information about the 'intelligence explosion' or 'singularity,' as it's also known, there's a Wikipedia essay on the topic. Also, you may want to stay tuned to this channel (blog) as I expect to have some news, later this week, about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university's Centre for Theoretical Neuroscience.