Nanotechnology at the movies: Transcendence opens April 18, 2014 in the US & Canada

Screenwriter Jack Paglen has an intriguing interpretation of nanotechnology, one he (along with director Wally Pfister) shares in an April 13, 2014 article by Larry Getlen for the NY Post and in his movie, Transcendence, which opens in the US and Canada on April 18, 2014. First, here are a few of the more general ideas underlying his screenplay,

In “Transcendence” — out Friday [April 18, 2014] and directed by Oscar-winning cinematographer Wally Pfister (“Inception,” “The Dark Knight”) — Johnny Depp plays Dr. Will Caster, an artificial-intelligence researcher who has spent his career trying to design a sentient computer that can hold, and even exceed, the world’s collective intelligence.

After he’s shot by antitechnology activists, his consciousness is uploaded to a computer network just before his body dies.

“The theories associated with the film say that when a strong artificial intelligence wakes up, it will quickly become more intelligent than a human being,” screenwriter Jack Paglen says, referring to a concept known as “the singularity.”

It should be noted that there are real anti-technology terrorists. I haven’t covered the topic in a while; my most recent piece is an Aug. 31, 2012 posting which, despite the title, “In depth and one year later—the nanotechnology bombings in Mexico,” provides an overview of sorts. For a more up-to-date view, you can read Eric Markowitz’s April 9, 2014 article for Vocativ.com. I do have one observation about the article: Markowitz links some recent protests in San Francisco to the bombings in Mexico, but those protests seem more like a ‘poor vs. the rich’ situation where the rich happen to come from the technology sector.

Getting back to “Transcendence” and singularity, there’s a good Wikipedia entry describing the ideas and some of the thinkers behind the notion of a singularity or technological singularity, as it’s sometimes called (Note: Links have been removed),

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term “singularity” in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[2] The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[3] Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

Proponents of the singularity typically postulate an “intelligence explosion”,[4][5] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass that of any human.

Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial generalized intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there is an 80% probability that the singularity will occur between 2017 and 2112.[8]
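Armstrong’s summary figures are easy to reproduce. Here’s a minimal sketch in Python using made-up prediction years (his actual survey dataset isn’t reproduced in the excerpt), just to show what a median and an 80% interval mean in practice:

```python
import statistics

# Hypothetical expert predictions for AGI arrival -- illustrative only;
# Armstrong's real survey data is not reproduced in the excerpt above.
predictions = sorted([2025, 2030, 2035, 2040, 2040, 2045, 2060, 2080, 2100, 2110])

median_year = statistics.median(predictions)

# An 80% interval discards the lowest and highest 10% of predictions.
n = len(predictions)
low = predictions[int(n * 0.10)]
high = predictions[int(n * 0.90) - 1]

print(f"median: {median_year}, 80% interval: {low}-{high}")
```

With these toy numbers the median lands at 2042.5; Armstrong’s point is that the spread (here 2030–2100) matters far more than the point estimate.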

The ‘technological singularity’ is controversial and contested (from the Wikipedia entry).

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[104] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[105]
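The straight-line criticism is worth unpacking: if you pick “events” whose dates are spaced at a roughly constant multiplicative ratio, a log-log plot of time-before-present against gap-to-next-event is guaranteed to look linear. A quick numerical sketch (my own toy data, not Kurzweil’s chart):

```python
import math

# "Events" spaced at a constant multiplicative ratio of time before
# present: 10^9 years ago down to 10^2 years ago.
times = [10 ** (9 - 0.5 * i) for i in range(15)]
gaps = [times[i] - times[i + 1] for i in range(len(times) - 1)]

# On log-log axes (time before present vs. gap to the next event),
# these points fall exactly on a straight line of slope 1.
xs = [math.log10(t) for t in times[:-1]]
ys = [math.log10(g) for g in gaps]
slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
print(min(slopes), max(slopes))  # both ~1.0: a perfect straight line
```

Any selection of dates with this multiplicative spacing produces the same straight line, which is why critics argue the chart’s linearity says more about how the points were chosen than about accelerating change.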

By the way, this movie is mentioned briefly in the pop culture portion of the Wikipedia entry.

Getting back to Paglen and his screenplay, here’s more from Getlen’s article,

… as Will’s powers grow, he begins to pull off fantastic achievements, including giving a blind man sight, regenerating his own body and spreading his power to the water and the air.

This conjecture was influenced by nanotechnology, the field of manipulating matter at the scale of a nanometer, or one-billionth of a meter. (By comparison, a human hair is around 70,000-100,000 nanometers wide.)

“In some circles, nanotechnology is the holy grail,” says Paglen, “where we could have microscopic, networked machines [emphasis mine] that would be capable of miracles.”

The potential uses of, and implications for, nanotechnology are vast and widely debated, but many believe the effects could be life-changing.

“When I visited MIT,” says Pfister, “I visited a cancer research institute. They’re talking about the ability of nanotechnology to be injected inside a human body, travel immediately to a cancer cell, and deliver a payload of medicine directly to that cell, eliminating [the need to] poison the whole body with chemo.”

“Nanotechnology could help us live longer, move faster and be stronger. It can possibly cure cancer, and help with all human ailments.”
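The scale comparison in the excerpt is easy to check with a little arithmetic; a nanometre is one-billionth of a metre, so a human hair tens of micrometres wide spans tens of thousands of nanometres:

```python
NM_PER_M = 1_000_000_000  # one nanometre = one-billionth of a metre

hair_width_nm = 80_000  # a typical human hair, within the 70,000-100,000 nm range quoted
hair_width_m = hair_width_nm / NM_PER_M
print(hair_width_m)  # 8e-05 m, i.e. 80 micrometres

# How many 10 nm nanoparticles would it take to span one hair?
print(hair_width_nm // 10)  # 8000
```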

I find the ‘golly gee wizness’ of Paglen’s and Pfister’s take on nanotechnology disconcerting, but they can’t be dismissed. There are projects testing retinal implants that allow people to see again. There is a lot of work in medicine aimed at making therapeutic procedures gentler on the body by targeting diseased tissue while ignoring healthy tissue (sadly, this is still not possible). As for human enhancement, I have so many pieces that it has its own category on this blog. I first wrote about it in a four-part series starting with “Nanotechnology enables robots and human enhancement: part 1.” (You can read the series by scrolling past the end of the posting and clicking on the next part, or search the category and pick through the more recent pieces.)

I’m not sure whether this error is Paglen’s or Getlen’s, but nanotechnology is not “microscopic, networked machines,” as Paglen’s quote strongly suggests. Some nanoscale devices could be described as machines (often called nanobots), but there are also nanoparticles, nanotubes, nanowires, and more that cannot be described as machines, or even as devices. More importantly, it seems Paglen’s main concern is this,

“One of [science-fiction author] Arthur C. Clarke’s laws is that any sufficiently advanced technology is indistinguishable from magic. That very quickly would become the case if this happened, because this artificial intelligence would be evolving technologies that we do not understand, and it would be capable of miracles by that definition,” says Paglen. [emphasis mine]

This notion of “evolving technologies that we do not understand” brings to mind a project that was announced at the University of Cambridge (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

While I do have some reservations about how Paglen and Pfister describe the science, I appreciate their interest in communicating the scientific ideas, particularly those underlying Paglen’s screenplay.

For anyone who may be concerned about the likelihood of emulating a human brain and uploading it to a computer, there’s an April 13, 2014 article by Luke Muehlhauser and Stuart Armstrong for Slate discussing that very possibility (Note 1: Links have been removed; Note 2: Armstrong is mentioned in this posting’s excerpt from the Wikipedia entry on the technological singularity),

Today scientists can’t even emulate the brain of a tiny worm called C. elegans, which has 302 neurons, compared with the human brain’s 86 billion neurons. Using models of expected technological progress on the three key problems, we’d estimate that we wouldn’t be able to emulate human brains until at least 2070 (though this estimate is very uncertain).

But would an emulation of your brain be you, and would it be conscious? Such questions quickly get us into thorny philosophical territory, so we’ll sidestep them for now. For many purposes—estimating the economic impact of brain emulations, for instance—it suffices to know that the brain emulations would have humanlike functionality, regardless of whether the brain emulation would also be conscious.
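To put the Slate authors’ numbers in perspective, the gap between C. elegans and a human brain is almost nine orders of magnitude:

```python
import math

c_elegans_neurons = 302             # per the Slate article
human_neurons = 86_000_000_000      # ~86 billion, per the Slate article

ratio = human_neurons / c_elegans_neurons
print(f"{ratio:,.0f}")              # roughly 285 million times as many neurons
print(round(math.log10(ratio), 2))  # ~8.45 orders of magnitude
```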

Paglen/Pfister seem to be equating intelligence (brain power) with consciousness while Muehlhauser/Armstrong simply sidestep the issue. As they (Muehlhauser/Armstrong) note, it’s “thorny.”

If you consider thinkers like David Chalmers, who suggest everything has consciousness, then it follows that computers/robots/etc. may not appreciate hosting a human brain emulation, which takes us back into Battlestar Galactica territory. From my March 19, 2014 posting (one of the postings where I recounted various TED 2014 talks in Vancouver), here’s more about David Chalmers,

Finally, I wasn’t expecting to write about David Chalmers, so my notes aren’t very good. Chalmers is a philosopher; here’s an excerpt from his TED biography,

In his work, David Chalmers explores the “hard problem of consciousness” — the idea that science can’t ever explain our subjective experience.

David Chalmers is a philosopher at the Australian National University and New York University. He works in philosophy of mind and in related areas of philosophy and cognitive science. While he’s especially known for his theories on consciousness, he’s also interested (and has extensively published) in all sorts of other issues in the foundations of cognitive science, the philosophy of language, metaphysics and epistemology.

Chalmers provided an interesting bookend to a session that started with brain researcher Nancy Kanwisher, who breaks the brain down into various processing regions (vastly oversimplified, but the easiest way to summarize her work in this context). Chalmers reviewed the ‘science of consciousness’ and noted that current work in science tends to be reductionist, i.e., examining parts of things such as brains, and that the same reductionism has been brought to the question of consciousness.

Rather than trying to prove consciousness, Chalmers proposes that we consider it a fundamental in the same way that we consider time, space, and mass to be fundamental. He noted that there’s precedent for such additions and gave the example of James Clerk Maxwell and his proposal to consider electricity and magnetism as fundamental.

Chalmers’ next suggestion is a little more outré and is based on some thinking (sorry, I didn’t catch the theorist’s name) that suggests everything, including photons, has a type of consciousness (but not intelligence).

Have a great time at the movie!

Human, Soul & Machine: The Coming Singularity! exhibition Oct. 5, 2013 – August 31, 2014 at Baltimore’s Visionary Art Museum

Doug Rule’s Oct. 4, 2013 article for the Baltimore (Maryland, US) edition of the Metro Weekly highlights a rather unusual art/science exhibition (Note: Links have been removed),

Maybe the weirdest, wildest museum you’ll ever visit, Baltimore’s American Visionary Art Museum opens its 19th original thematic yearlong exhibition this weekend. Human, Soul & Machine: The Coming Singularity! is what the quirky museum, focused on presenting self-taught artists, bills as its most complex subject yet, a playful examination of the serious impact of technology — in all its forms, from artificial intelligence to nanotechnology to Big Data — on our lives, as seen through the eyes of more than 40 artists, futurists and inventors in a hot-wired blend of art, science, humor and imagination.

The show opened Oct. 5, 2013 and runs until August 31, 2014. The exhibition webpage offers a description of the show and curation,

Curated by AVAM founder and director Rebecca Alban Hoffberger, this stirring show harnesses the enchanting visual delights of remarkable visionary artists and their masterworks. Among them: Kenny Irwin’s Robotmas—a special installation from his Palm Springs Robo-Lights display, glowing inside of a central black box theater at the heart of this exhibition; a selection of Alex Grey’s Sacred Mirrors; O.L. Samuels’ 7-ft tall Godzilla—a creation first imagined in response to the devastating use of the A-bomb on Hiroshima and Nagasaki; Rigo 23’s delicate anti-drone drawings; Allen Christian’s life-sized Piano Family—a love song to string theory; Fred Carter’s massive wooden carvings—created as a warning of destruction from industry’s manipulation of nature; and much more!

The exhibition media kit features a striking (imo) graphic image representing the show,

American Visionary Art Museum graphic for the Human, Soul & Machine exhibition [downloaded from http://www.avam.org/news-and-events/pdf/press-kits/Singularity/HSM-MediaKit-Web.pdf]

The list of artists includes one person familiar to anyone following the ‘singularity’ story even occasionally, Ray Kurzweil.

Global Futures (GF) 2045 International Congress and transhumanism at the June 2013 meeting

Stuart Mason Dambrot has written a special article (part 1 only, part 2 has yet to be published) about the recent Global Futures 2045 Congress held June 15-16, 2013 (program) in New York City. Dambrot’s piece draws together contemporary research and frames it within the context of transhumanism. From the Aug. 1, 2013 feature on phys.org (Note: Links have been removed),

Futurists, visionaries, scientists, technologists, philosophers, and others who take this view to heart convened on June 15-16, 2013 in New York City at Global Futures 2045 International Congress: Towards a New Strategy for Human Evolution. GF2045 was organized by the 2045 Strategic Social Initiative founded by Russian entrepreneur Dmitry Itskov in February 2011 with the main goals of creating and realizing a new strategy for the development of humanity – one based upon our unique emerging capability to effect self-directed evolution. The initiative’s two main science projects are focused largely on Transhumanism – a multidisciplinary approach to analyzing the dynamic interplay between humanity and the acceleration of technology. Specifically, the 2045 Initiative’s projects seek to (1) enable an individual’s personality to be transferred to a more advanced non-biological substrate, and (2) extend life to the point of immortality …

Attendees were given a very dire view of the future followed by glimpses of another possible future provided we put our faith in science and technology. From Dambrot’s article (Note: Link has been removed),

… the late Dr. James Martin, who tragically passed away on June 24, 2013, gave a sweeping, engaging talk on The Transformation of Humankind—Extreme Paradigm Shifts Are Ahead of Us. An incredibly prolific author of books on computing and related technology, Dr. Martin founded the Oxford Martin School at Oxford University – an interdisciplinary research community comprising over 30 institutes and projects addressing the most pressing global challenges and opportunities of the 21st century. Dr. Martin – in the highly engaging manner for which he was renowned – presented a remarkably accessible survey of the interdependent trends that will increasingly threaten humanity over the coming decades. Dr. Martin made it disturbingly clear that population growth, resource consumption, water depletion, desertification, deforestation, ocean pollution and fish depopulation, atmospheric carbon dioxide, what he termed gigafamine (the death of more than a billion people as a consequence of food shortage by mid-century), and other factors are ominously close to their tipping points – after which their effects will be irreversible. (For example, he points out that in 20 years we’ll be consuming an obviously unsustainable 200 percent of then-available resources.) Taken together, he cautioned, these developments will constitute a “perfect storm” that will cause a Darwinian survival of the fittest in which “the Earth could be like a lifeboat that’s too small to save everyone.”

However, Dr. Martin also emphasized that there are solutions, discussing the trends and technologies that – even as he acknowledged the resistance to implementing or even understanding them – could have a positive impact on our future:

The Singularity and an emerging technocracy

Genetic engineering and Transhumanism, in particular, a synthetic 24th human chromosome that would contain non-inheritable genetic modifications and synthetic DNA sequences

Artificial Intelligence and nanorobotics

Yottascale computers capable of executing 10^24 operations per second

Quantum computing

Graphene – a one-atom thick layer of graphite with an ever-expanding portfolio of electronic, optical, excitonic, thermal, mechanical, and quantum properties, and an even longer list of potential applications

Autonomous automobiles

Nuclear batteries in the form of small, ultra-safe and maintenance-free underground Tokamak nuclear fusion reactors

Photovoltaics that make electricity more cheaply than coal

Capturing rainwater and floodwater to increase water supply

Eco-influence – Dr. Martin’s term for a rich, enjoyable and sometimes complex way of life that does no ecological harm

Dambrot goes on to cover day one of the event at length (I think that’s how he has it organized) and provides a number of video panels and discussions. I was hoping he’d have part 2 posted by now but, given how much work he’s put into part 1, it’s understandable that part 2 might take a while. I’ll keep an eye open for it and add a link here when it’s posted.

I did check Dambrot’s website and found this on the ‘Critical Thought’ bio webpage,

Stuart Mason Dambrot is an interdisciplinary science synthesist and communicator. He analyzes deep-structure conceptual and neural connections between multiple areas of knowledge and creativity, and monitors and extrapolates convergent and emergent trends in a wide range of research activities. Stuart is also the creator and host of Critical Thought | TV, an online discussion channel examining convergent and emergent trends in the sciences, arts and humanities. As an invited speaker, he has given talks on Exocortical Cognition, Emergent Technologies, Synthetic Biology, Transhumanism, Philosophy of Mind, Sociopolitical Futures, and other topics at New York Academy of Sciences, Cooper-Union, Science House, New York Future Salon, and other venues.

Stuart has a diverse background in Physiological Psychology, integrating Neuroscience, Cognitive Psychology, Artificial Intelligence, Neural Networks, Complexity Theory, Epistemology, Ethics, and Philosophy of Science. His memberships and affiliations include the American Association for the Advancement of Science, New York Academy of Sciences, Lifeboat Foundation Advisory Board, Center for Inquiry, New York Futurist Society, Linnaean Society, National Association of Science Writers, Science Writers in New York, and the Foreign Correspondents Club of Japan.

I have yet to find any written material by Dambrot which challenges transhumanism in any way, despite the fact that his website is called Critical Thought. This reservation aside, his pieces cover an interesting range of topics and I will try to get back to read more.

As for the GF 2045 initiative, I found this on their About us webpage,

The main goals of the 2045 Initiative: the creation and realization of a new strategy for the development of humanity which meets global civilization challenges; the creation of optimal conditions promoting the spiritual enlightenment of humanity; and the realization of a new futuristic reality based on 5 principles: high spirituality, high culture, high ethics, high science and high technologies.

The main science mega-project of the 2045 Initiative aims to create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world’s major spiritual traditions, science and society.

A large-scale transformation of humanity, comparable to some of the major spiritual and sci-tech revolutions in history, will require a new strategy. We believe this to be necessary to overcome existing crises, which threaten our planetary habitat and the continued existence of humanity as a species. With the 2045 Initiative, we hope to realize a new strategy for humanity’s development, and in so doing, create a more productive, fulfilling, and satisfying future.

The “2045” team is working towards creating an international research center where leading scientists will be engaged in research and development in the fields of anthropomorphic robotics, living systems modeling, and brain and consciousness modeling, with the goal of transferring one’s individual consciousness to an artificial carrier and achieving cybernetic immortality.

An annual congress, “The Global Future 2045,” is organized by the Initiative to give a platform for discussing mankind’s evolutionary strategy based on technologies of cybernetic immortality, as well as the possible impact of such technologies on global society, politics and economies of the future.

Future prospects of the “2045” Initiative for society

The emergence and widespread use of affordable android “avatars” controlled by a “brain-computer” interface. Coupled with related technologies, “avatars” will give people a number of new features: the ability to work in dangerous environments, perform rescue operations, travel in extreme situations, etc.

Avatar components will be used in medicine for the rehabilitation of fully or partially disabled patients, giving them prosthetic limbs or recovering lost senses.

Creation of an autonomous life-support system for the human brain linked to a robot, or ‘avatar’, will save people whose bodies are completely worn out or irreversibly damaged. Any patient with an intact brain will be able to return to a fully functioning bodily life. Such technologies will greatly enlarge the possibility of hybrid bio-electronic devices, thus creating a new IT revolution and making all kinds of superimpositions of electronic and biological systems possible.

Creation of a computer model of the brain and human consciousness, with the subsequent development of means to transfer individual consciousness onto an artificial carrier. This development will profoundly change the world; it will not only give everyone the possibility of cybernetic immortality but will also create a friendly artificial intelligence, expand human capabilities, and provide opportunities for ordinary people to restore or modify their own brains multiple times. The final result at this stage can be a real revolution in the understanding of human nature that will completely change the human and technical prospects for humanity.

This is the time when substance-independent minds will receive new bodies with capacities far exceeding those of ordinary humans. A new era for humanity will arrive! Changes will occur in all spheres of human activity – energy generation, transportation, politics, medicine, psychology, sciences, and so on.

Today it is hard to imagine a future when bodies consisting of nanorobots will become affordable and capable of taking any form. It is also hard to imagine body holograms featuring controlled matter. One thing is clear, however: humanity, for the first time in its history, will make a fully managed evolutionary transition and eventually become a new species. Moreover, prerequisites for a large-scale expansion into outer space will be created as well.

It all seems a bit grandiose to me and, frankly, I’ve never found the prospect of being downloaded onto a nonbiological substrate particularly appealing. As well, how are they going to tackle the incredibly complex process of downloading (or is it duplicating?) a brain? There’s still a lot of debate as to how any brain works (a rat brain, a dog brain, etc.).

It all gets more complicated the more you think about it. Is a duplicate/downloaded brain exactly the same as the original? Digitized print materials are relatively simple compared to a brain and yet archivists are still trying to determine how one establishes authenticity with print materials that have been digitized and downloaded/uploaded.

As well, I wonder if these grand dreamers have ever come across ‘the law of unintended consequences’: cane toads in Australia, for example, or DDT and other pesticides, which were intended as solutions and are now problems themselves.

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price, Rees (Emeritus Professor of Cosmology and Astrophysics), and Tallinn (co-founder of Skype) are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article for the Australia-based, The Conversation website, about their concerns,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone who’s looking for more information about the ‘intelligence explosion’ or ‘singularity’, as it’s also known, there’s a Wikipedia essay on the topic. Also, you may want to stay tuned to this channel (blog) as I expect to have some news later this week about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university’s Centre for Theoretical Neuroscience.

Waiting for Martha

Last April (2008), Canada’s National Institute of Nanotechnology (NINT) announced a new chairperson for their board, Martha Cook Piper. I was particularly interested in the news since she was the president of the University of British Columbia (UBC) for a number of years during which she maintained a pretty high profile locally and, I gather, nationally. She really turned things around at UBC and helped it gain more national prominence.

I contacted NINT and sent some interview questions in May or June last year. After some months (as I recall it was Sept. or Oct. 2008), I got an email address for Martha and redirected my queries to her. She was having a busy time during the fall and through Christmas into 2009 with the consequence that my questions have only recently been answered. At this point, someone at NINT is reviewing the answers and I’m hopeful that I will finally have the interview in the near future.

There is a documentary about Ray Kurzweil (‘Mr. Singularity’) making the rounds. You can see a trailer and a preview article here at Fast Company.

As you may have guessed, there’s not a lot of news today.