Category Archives: human enhancement

Nanotechnology at the movies: Transcendence opens April 18, 2014 in the US & Canada

Screenwriter Jack Paglen has an intriguing interpretation of nanotechnology, one he (along with the director) shares in an April 13, 2014 article by Larry Getlen for the NY Post and in his movie, Transcendence, which opens in the US and Canada on April 18, 2014. First, here are a few of the more general ideas underlying his screenplay,

In “Transcendence” — out Friday [April 18, 2014] and directed by Oscar-winning cinematographer Wally Pfister (“Inception,” “The Dark Knight”) — Johnny Depp plays Dr. Will Caster, an artificial-intelligence researcher who has spent his career trying to design a sentient computer that can hold, and even exceed, the world’s collective intelligence.

After he’s shot by antitechnology activists, his consciousness is uploaded to a computer network just before his body dies.

“The theories associated with the film say that when a strong artificial intelligence wakes up, it will quickly become more intelligent than a human being,” screenwriter Jack Paglen says, referring to a concept known as “the singularity.”

It should be noted that there are real anti-technology terrorists. I haven’t covered that topic in a while; my most recent piece is an Aug. 31, 2012 posting which, despite the title, “In depth and one year later—the nanotechnology bombings in Mexico,” provides an overview of sorts. For a more up-to-date view, you can read Eric Markowitz’s April 9, 2014 article for Vocative.com. I do have one observation about the article: Markowitz links some recent protests in San Francisco to the bombings in Mexico, but those protests seem more like a ‘poor vs. the rich’ situation where the rich happen to come from the technology sector.

Getting back to “Transcendence” and singularity, there’s a good Wikipedia entry describing the ideas and some of the thinkers behind the notion of a singularity or technological singularity, as it’s sometimes called (Note: Links have been removed),

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term “singularity” in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[2] The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[3] Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

Proponents of the singularity typically postulate an “intelligence explosion”,[4][5] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass that of any human.

Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial generalized intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there is an 80% probability that the singularity will occur between 2017 and 2112.[8]

The ‘technological singularity’ is controversial and contested (from the Wikipedia entry).

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[104] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[105]

By the way, this movie is mentioned briefly in the pop culture portion of the Wikipedia entry.

Getting back to Paglen and his screenplay, here’s more from Getlen’s article,

… as Will’s powers grow, he begins to pull off fantastic achievements, including giving a blind man sight, regenerating his own body and spreading his power to the water and the air.

This conjecture was influenced by nanotechnology, the field of manipulating matter at the scale of a nanometer, or one-billionth of a meter. (By comparison, a human hair is around 70,000-100,000 nanometers wide.)

“In some circles, nanotechnology is the holy grail,” says Paglen, “where we could have microscopic, networked machines [emphasis mine] that would be capable of miracles.”

The potential uses of, and implications for, nanotechnology are vast and widely debated, but many believe the effects could be life-changing.

“When I visited MIT,” says Pfister, “I visited a cancer research institute. They’re talking about the ability of nanotechnology to be injected inside a human body, travel immediately to a cancer cell, and deliver a payload of medicine directly to that cell, eliminating [the need to] poison the whole body with chemo.”

“Nanotechnology could help us live longer, move faster and be stronger. It can possibly cure cancer, and help with all human ailments.”
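As an aside, the scale comparison quoted above is easy to check with a little arithmetic. Here’s a quick sketch (the 10 nm particle size is my own illustrative figure for a drug-delivery nanoparticle, not something from the article):

```python
# Sanity-checking the scale figures quoted from Getlen's article:
# a nanometer is one-billionth of a meter, and a human hair is
# roughly 70,000-100,000 nm wide.
NM_PER_M = 1e9

hair_width_nm = (70_000, 100_000)                  # quoted range
hair_width_m = tuple(w / NM_PER_M for w in hair_width_nm)

# Compare the thinner hair to a hypothetical 10 nm nanoparticle
# (an assumed, illustrative size for a medical payload).
particle_nm = 10
ratio = hair_width_nm[0] / particle_nm

print(f"hair width in meters: {hair_width_m[0]:.0e} to {hair_width_m[1]:.0e}")
print(f"a 10 nm particle is {ratio:,.0f} times narrower than the thinnest hair")
```

In other words, the scale these machines would operate at is thousands of times smaller than anything visible to the naked eye, which is part of what gives the “indistinguishable from magic” framing its force.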

I find the ‘golly gee whizness’ of Paglen’s and Pfister’s take on nanotechnology disconcerting but they can’t be dismissed. There are projects testing retinal implants that allow people to see again. There is also a lot of work in medicine aimed at making therapeutic procedures gentler on the body by targeting diseased tissue while ignoring healthy tissue (sadly, this is still not possible). As for human enhancement, I have written so many pieces that it has its own category on this blog. I first wrote about it in a four-part series starting with this one: Nanotechnology enables robots and human enhancement: part 1. (You can read the series by scrolling past the end of the posting and clicking on the next part, or search the category and pick through the more recent pieces.)

I’m not sure if this error is Paglen’s or Getlen’s but nanotechnology is not “microscopic, networked machines” as Paglen’s quote strongly suggests. Some nanoscale devices could be described as machines (often called nanobots) but there are also nanoparticles, nanotubes, nanowires, and more that cannot be described as machines or devices, for that matter. More importantly, it seems Paglen’s main concern is this,

“One of [science-fiction author] Arthur C. Clarke’s laws is that any sufficiently advanced technology is indistinguishable from magic. That very quickly would become the case if this happened, because this artificial intelligence would be evolving technologies that we do not understand, and it would be capable of miracles by that definition,” says Paglen. [emphasis mine]

This notion of “evolving technologies that we do not understand” brings to mind a project that was announced at the University of Cambridge (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

While I do have some reservations about how Paglen and Pfister describe the science, I appreciate their interest in communicating the scientific ideas, particularly those underlying Paglen’s screenplay.

For anyone who may be concerned about the likelihood of emulating a human brain and uploading it to a computer, there’s an April 13, 2014 article by Luke Muehlhauser and Stuart Armstrong for Slate discussing that very possibility (Note 1: Links have been removed; Note 2: Armstrong is mentioned in this posting’s excerpt from the Wikipedia entry on Technological Singularity),

Today scientists can’t even emulate the brain of a tiny worm called C. elegans, which has 302 neurons, compared with the human brain’s 86 billion neurons. Using models of expected technological progress on the three key problems, we’d estimate that we wouldn’t be able to emulate human brains until at least 2070 (though this estimate is very uncertain).

But would an emulation of your brain be you, and would it be conscious? Such questions quickly get us into thorny philosophical territory, so we’ll sidestep them for now. For many purposes—estimating the economic impact of brain emulations, for instance—it suffices to know that the brain emulations would have humanlike functionality, regardless of whether the brain emulation would also be conscious.
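To put the gap described in that excerpt into perspective, here’s a one-line calculation of my own, using the neuron counts quoted above:

```python
# Neuron counts quoted in the Slate article.
c_elegans_neurons = 302            # the roundworm C. elegans
human_neurons = 86_000_000_000     # ~86 billion in a human brain

ratio = human_neurons / c_elegans_neurons
print(f"a human brain has roughly {ratio:,.0f} times as many neurons as C. elegans")
```

The human brain has on the order of 285 million times as many neurons as the worm scientists cannot yet emulate, which helps explain the authors’ cautious 2070-at-the-earliest estimate.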

Paglen/Pfister seem to be equating intelligence (brain power) with consciousness while Muehlhauser/Armstrong simply sidestep the issue. As they (Muehlhauser/Armstrong) note, it’s “thorny.”

If you consider thinkers like David Chalmers who suggest everything has consciousness, then it follows that computers/robots/etc. may not appreciate having a human brain emulation, which takes us back into Battlestar Galactica territory. From my March 19, 2014 posting (one of the postings where I recounted various TED 2014 talks in Vancouver), here’s more about David Chalmers,

Finally, I wasn’t expecting to write about David Chalmers so my notes aren’t very good. Chalmers is a philosopher; here’s an excerpt from his TED biography,

In his work, David Chalmers explores the “hard problem of consciousness” — the idea that science can’t ever explain our subjective experience.

David Chalmers is a philosopher at the Australian National University and New York University. He works in philosophy of mind and in related areas of philosophy and cognitive science. While he’s especially known for his theories on consciousness, he’s also interested (and has extensively published) in all sorts of other issues in the foundations of cognitive science, the philosophy of language, metaphysics and epistemology.

Chalmers provided an interesting bookend to a session started with a brain researcher (Nancy Kanwisher) who breaks the brain down into various processing regions (vastly oversimplified but the easiest way to summarize her work in this context). Chalmers reviewed the ‘science of consciousness’ and noted that current work in science tends to be reductionist, i.e., examining parts of things such as brains and that same reductionism has been brought to the question of consciousness.

Rather than trying to prove consciousness, Chalmers proposes that we consider it fundamental in the same way that we consider time, space, and mass to be fundamental. He noted that there’s precedent for such additions and gave the example of James Clerk Maxwell and his proposal to consider electricity and magnetism as fundamental.

Chalmers’ next suggestion is a little more outré and is based on some thinking (sorry, I didn’t catch the theorist’s name) that suggests everything, including photons, has a type of consciousness (but not intelligence).

Have a great time at the movie!

Printing food, changing prostheses, and talking with Google (Larry Page) at TED 2014′s Session 6: Wired

I’m covering two speakers and an interview from this session. First, Avi Reichental, CEO (Chief Executive Officer) of 3D Systems; from his TED biography (Note: A link has been removed),

At 3D Systems, Avi Reichental is helping to imagine a future where 3D scanning-and-printing is an everyday act, and food, clothing, objects are routinely output at home.

Lately, he’s been demo-ing the Cube, a tabletop 3D printer that can print a basketball-sized object, and the ChefJet, a food-grade machine that prints in sugar and chocolate. His company is also rolling out consumer-grade 3D scanning cameras that clip to a tablet to capture three-dimensional objects for printing out later. He’s an instructor at Singularity University (watch his 4-minute intro to 3D printing).

Reichental started by talking about his grandfather, a cobbler who died in the Holocaust and whom he’d never met. Nonetheless, his grandfather had inspired him to be a maker of things in a society where craftsmanship and crafting atrophied until recently with the rise of ‘maker’ culture and 3D printing.

There were a number of items on the stage, shoes, a cake, a guitar and more, all of which had been 3D printed. Reichental’s shoes had also been produced on a 3D printer. If I understand his dream properly, it is to enable everyone to make what they need more cheaply and better.

Next, Hugh Herr, bionics designer, from his TED biography,

Hugh Herr directs the Biomechatronics research group at the MIT Media Lab, where he is pioneering a new class of biohybrid smart prostheses and exoskeletons to improve the quality of life for thousands of people with physical challenges. A computer-controlled prosthesis called the Rheo Knee, for instance, is outfitted with a microprocessor that continually senses the joint’s position and the loads applied to the limb. A powered ankle-foot prosthesis called the BiOM emulates the action of a biological leg to create a natural gait, allowing amputees to walk with normal levels of speed and metabolism as if their legs were biological.

Herr is the founder and chief technology officer of BiOM Inc., which markets the BiOM as the first in a series of products that will emulate or even augment physiological function through electromechanical replacement. You can call it (as they do) “personal bionics.”

Herr walked on his two bionic limbs onto the TED stage. He not only researches and works in the field of bionics, he lives it. His name was mentioned in a previous presentation by David Sengeh (can be found in my March 17, 2014 posting), a 2014 TED Fellow.

Herr talked about biomimicry, i.e., following nature’s lead in design, but he also suggested that design is driving (affecting) nature. If I understand him rightly, he was referencing some of the work with proteins, ligands, etc. and the creation of devices that are not what we would consider biological or natural as we have tended to use those terms.

His talk contrasted somewhat with Reichental’s, as Herr wants to remove the artisanal approach to developing prosthetics and replace it with data-driven strategies. Herr covered the mechanical, the dynamic, and the electrical as applied to bionic limbs. I think the term prosthetic is being applied to the older, artisanal limbs as opposed to these mechanical, electrical, dynamic marvels known as bionic limbs.

The mechanical aspect has to do with figuring out how your specific limbs are formed and used and getting precise measurements (with robotic tools) because everyone is a little bit different. The dynamic aspect, also highly individual, is how your muscles work. For example, standing still, walking, etc. all require dynamic responses from your muscles. Finally, there’s the integration with the nervous system so you can feel your limb.

Herr showed a few videos, including one of a woman who lost part of her leg in the Boston Marathon bombing (April 15, 2013). A ballroom dancer, she was invited by Herr to the stage so she could perform in front of the TED 2014 audience. She got a standing ovation.

In the midst of session 6, there was an interview conducted by Charlie Rose (US television presenter) with Larry Page, a co-founder of Google.

Very briefly, I was mildly relieved (although I’m not convinced) to hear that Page is devoted to the notion that search is important. I’ve been concerned about the Google search results I get; they seem less rich and interesting than they were a few years ago. I attribute the situation to the chase for advertising dollars and a decreasing interest in ‘search’ as the company expands with initiatives such as ‘Google Glass’ and artificial intelligence, and pursues other interests distinct from what had been its core focus.

I didn’t find much else of interest. Larry Page wants to help people and he’s interested in artificial intelligence and transportation. His perspective seemed a bit simplistic (technology will solve our problems) but perhaps that was for the benefit of people like me. I suspect one of a speaker’s challenges at TED is finding the right level. Certainly, I’ve experienced difficulties with some of the more technical presentations.

One more observation: there was no mention of a current scandal at Google profiled in the April 2014 issue of Vanity Fair (by Vanessa Grigoriadis),

 O.K., Glass: Make Google Eyes

The story behind Google co-founder Sergey Brin’s liaison with Google Glass marketing manager Amanda Rosenberg—and his split from his wife, genetic-testing entrepreneur Anne Wojcicki— has a decidedly futuristic edge. But, as Vanessa Grigoriadis reports, the drama leaves Silicon Valley debating emotional issues, from office romance to fear of mortality.

Given that Page agreed only within the last 10 days to appear on the TED stage, this appearance seems like an attempt at damage control, especially with the mention of Brin, who had his picture taken with the telepresent Ed Snowden on Tuesday, March 18, 2014 at TED 2014.

Chemistry of Cyborgs: review of the state of the art by German researchers

Communication between man and machine – a fascinating area at the interface of chemistry, biomedicine, and engineering. (Figure: KIT/S. Giselbrecht, R. Meyer, B. Rapp)

German researchers from the Karlsruhe Institute of Technology (KIT), Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1) and Dr. Bastian E. Rapp, Institute of Microstructure Technology (IMT) have written a good overview of the current state of cyborgs while pointing out some of the ethical issues associated with this field. From the Jan. 10, 2014 news item on ScienceDaily,

Medical implants, complex interfaces between brain and machine or remotely controlled insects: Recent developments combining machines and organisms have great potentials, but also give rise to major ethical concerns. In a new review, KIT scientists discuss the state of the art of research, opportunities, and risks.

The Jan. ?, 2014 KIT press release (also on EurekAlert with a release date of Jan. 10, 2014), which originated the news item, describes the innovations and the work at KIT in more detail,

They are known from science fiction novels and films – technically modified organisms with extraordinary skills, so-called cyborgs. This name originates from the English term “cybernetic organism”. In fact, cyborgs that combine technical systems with living organisms are already reality. The KIT researchers Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1) and Dr. Bastian E. Rapp, Institute of Microstructure Technology (IMT), point out that this especially applies to medical implants.

In recent years, medical implants based on smart materials that automatically react to changing conditions, computer-supported design and fabrication based on magnetic resonance tomography datasets or surface modifications for improved tissue integration allowed major progress to be achieved. For successful tissue integration and the prevention of inflammation reactions, special surface coatings were developed also by the KIT under e.g. the multidisciplinary Helmholtz program “BioInterfaces”.

Progress in microelectronics and semiconductor technology has been the basis of electronic implants controlling, restoring or improving the functions of the human body, such as cardiac pacemakers, retina implants, hearing implants, or implants for deep brain stimulation in pain or Parkinson therapies. Currently, bioelectronic developments are being combined with robotics systems to design highly complex neuroprostheses. Scientists are working on brain-machine interfaces (BMI) for the direct physical contacting of the brain. BMI are used among others to control prostheses and complex movements, such as gripping. Moreover, they are important tools in neurosciences, as they provide insight into the functioning of the brain. Apart from electric signals, substances released by implanted micro- and nanofluidic systems in a spatially or temporally controlled manner can be used for communication between technical devices and organisms.

BMI are often considered data suppliers. However, they can also be used to feed signals into the brain, which is a highly controversial issue from the ethical point of view. “Implanted BMI that feed signals into nerves, muscles or directly into the brain are already used on a routine basis, e.g. in cardiac pacemakers or implants for deep brain stimulation,” Professor Christof M. Niemeyer, KIT, explains. “But these signals are neither planned to be used nor suited to control the entire organism – brains of most living organisms are far too complex.”

Brains of lower organisms, such as insects, are less complex. As soon as a signal is coupled in, a certain movement program, such as running or flying, is started. So-called biobots, i.e. large insects with implanted electronic and microfluidic control units, are used in a new generation of tools, such as small flying objects for monitoring and rescue missions. In addition, they are applied as model systems in neurosciences in order to understand basic relationships.

Electrically active medical implants that are used for longer terms depend on reliable power supply. Presently, scientists are working on methods to use the patient body’s own thermal, kinetic, electric or chemical energy.

In their review the KIT researchers sum up that developments combining technical devices with organisms have a fascinating potential. They may considerably improve the quality of life of many people in the medical sector in particular. However, ethical and social aspects always have to be taken into account.

After briefly reading the paper, I can say the researchers are most interested in the science and technology aspects but they do have this to say about ethical and social issues in the paper’s conclusion (Note: Links have been removed),

The research and development activities summarized here clearly raise significant social and ethical concerns, in particular, when it comes to the use of BMIs for signal injection into humans, which may lead to modulation or even control of behavior. The ethical issues of this new technology have been discussed in the excellent commentary of Jens Clausen,33 which we highly recommend for further reading. The recently described engineering of a synthetic polymer construct, which is capable of propulsion in water through a collection of adhered rat cardiomyocytes,77 a “medusoid” also described as a “cyborg jellyfish with a rat heart”, brings up an additional ethical aspect. The motivation of the work was to reverse-engineer muscular pumps, and it thus represents fundamental research in tissue engineering for biomedical applications. However, it is also an impressive, early demonstration that autonomous control of technical devices can be achieved through small populations of cells or microtissues. It seems reasonable that future developments along this line will strive, for example, to control complex robots through the use of brain tissue. Given the fact that the robots of today are already capable of autonomously performing complex missions, even in unknown territories,78 this approach might indeed pave the way for yet another entirely new generation of cybernetic organisms.

Here’s a link to and a citation for the English language version of the paper, which is open access (as of Jan. 10, 2014),

The Chemistry of Cyborgs—Interfacing Technical Devices with Organisms by Stefan Giselbrecht, Bastian E. Rapp, & Christof M. Niemeyer. Angewandte Chemie International Edition, Volume 52, Issue 52, pages 13942–13957, December 23, 2013. Article first published online: Nov. 29, 2013. DOI: 10.1002/anie.201307495

Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

For those with German language skills,

Chemie der Cyborgs – zur Verknüpfung technischer Systeme mit Lebewesen by Stefan Giselbrecht, Bastian E. Rapp, & Christof M. Niemeyer. Angewandte Chemie, Volume 125, Issue 52, page 14190, December 23, 2013. DOI: 10.1002/ange.201307495

I have written many times about cyborgs and neuroprosthetics including this Aug. 30, 2011 posting titled:  Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism, where I mention Gregor Wolbring, a Canadian academic (University of Calgary) who has written extensively on the social and ethical issues of human enhancement technologies. You can find out more on his blog, Nano and Nano- Bio, Info, Cogno, Neuro, Synbio, Geo, Chem…

For anyone wanting to search this blog for these pieces, try using the term machine/flesh as a tag, as well as human enhancement, neuroprostheses, cyborgs …

Almost Human (tv series), smartphones, and anxieties about life/nonlife

The US-based Fox Broadcasting Company is set to premiere a new futuristic television series, Almost Human, over two nights, Nov. 17, and 18, 2013 for US and Canadian viewers. Here’s a description of the premise from its Wikipedia essay (Note: Links have been removed),

The series is set thirty-five years in the future when humans in the Los Angeles Police Department are paired up with lifelike androids; a detective who has a dislike for robots partners with an android capable of emotion.

One of the showrunners, Naren Shankar, seems to have also been functioning as a science consultant and a crime writing consultant, in addition to his other duties. From a Sept. 4, 2013 article by Lisa Tsering for Indiawest.com,

FOX is the latest television network to utilize the formidable talents of Naren Shankar, an Indian American writer and producer best known to fans for his work on “Star Trek: Deep Space Nine,” “Star Trek: Voyager” and “Star Trek: The Next Generation” as well as “Farscape,” the recently cancelled ABC series “Zero Hour” and “The Outer Limits.”

Set 35 years in the future, “Almost Human” stars Karl Urban and Michael Ealy as a crimefighting duo of a cop who is part-machine and a robot who is part-human. [emphasis mine]

“We are extrapolating the things we see today into the near future,” he explained. For example, the show will comment on the pervasiveness of location software, he said. “There will also be issues of technology such as medical ethics, or privacy; or how technology enables the rich but not the poor, who can’t afford it.”

Speaking at Comic-Con July 20 [2013], Shankar told media there, “Joel [J.H. Wyman] was looking for a collaboration with someone who had come from the crime world, and I had worked on ‘CSI’ for eight years.

“This is like coming back to my first love, since for many years I had done science fiction. It’s a great opportunity to get away from dismembered corpses and autopsy scenes.”

There’s plenty of drama — in the new series, the year is 2048, and police officer John Kennex (Karl Urban, “Dr. Bones” from the new “Star Trek” films) is trying to bounce back from one of the most catastrophic attacks ever made against the police department. Kennex wakes up from a 17-month coma and can’t remember much, except that his partner was killed; his girlfriend left him and one of his legs has been amputated and is now outfitted with a high-tech synthetic appendage. According to police department policy, every cop must partner with a robot, so Kennex is paired with Dorian (Ealy), an android with an unusual glitch that makes it have human emotions.

Shankar took an unusual path into television. He started college at age 16 and attended Cornell University, where he earned a B.Sc., an M.S. and a Ph.D. in engineering physics and electrical engineering, and was a member of the elite Kappa Alpha Society. Deciding he didn’t want to work as a scientist, he moved to Los Angeles to try to become a writer.

Shankar is eager to move in a new direction with “Almost Human,” which he says comes at the right time. “People are so technologically sophisticated now that maybe the audience is ready for a show like this,” he told India-West.

I am particularly intrigued by the ‘man who’s part machine and the machine that’s part human’ concept (something I’ve called machine/flesh in previous postings such as this May 9, 2012 posting titled ‘Everything becomes part machine’). Given that the team included an engineer (albeit one with lots of crime writing experience), I was looking forward to seeing how they would integrate this concept, along with some of the more recent scientific work being done on prosthetics and robots, into the stories. Sadly, only days after Tsering’s article was published, Shankar parted ways with Almost Human, according to the Sept. 10, 2013 posting on the Almost Human blog,

So this was supposed to be the week that I posted a profile of Naren Shankar, for whom I have developed a full-on crush–I mean, he has a PhD in Electrical Engineering from Cornell, he was hired by Gene Roddenberry to be science consultant on TNG, he was saying all sorts of great things about how he wanted to present the future in AH…aaaand he quit as co-showrunner yesterday, citing “creative differences.” That leaves Wyman as sole showrunner, with no plans to replace Shankar.

I’d like to base some of my comments on the previews; unfortunately, Fox Broadcasting, in its infinite wisdom, has decided to block Canadians from watching Almost Human previews online. (Could someone please explain why? Canadians will be tuning in to watch or record the series premiere on the 17th and 18th of November 2013 just like our US neighbours, so why can’t we watch the previews online?)

Getting back to machine/flesh (humans with prosthetics) and life/nonlife (an android with feelings), it seems that Almost Human (as did the latest version of Battlestar Galactica, from 2004-2009) may be giving a popular culture voice to some contemporary anxieties about the boundary, or lack thereof, between humans and machines and between life and nonlife. I’ve touched on this topic many times both within and without the popular culture context. Probably one of my more comprehensive essays on machine/flesh is Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism from August 30, 2011, which includes this quote from a still earlier posting on this topic,

Here’s an excerpt from my Feb. 2, 2010 posting which reinforces what Gregor [Gregor Wolbring, University of Calgary] is saying,

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman's Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You'll Envy for Fast Company]

Here’s something else from the Hochman article,

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It’s a very powerful thing.”

Bailey isn’t ‘almost human’; he’s ‘above human’. As Hochman points out repeatedly throughout his article, this sentiment is not confined to Bailey. My guess is that Kennex (Karl Urban’s character) in Almost Human doesn’t echo Bailey’s sentiments and, instead, feels he’s not quite human, while the android, Dorian (Michael Ealy’s character), struggles with his feelings in a human way that clashes with Kennex’s perspective on what is human and what is not (or what might be called the boundary between life and nonlife).

Into this mix, one could add the rising anxiety around ‘intelligent’ machines present in real life, as well as in fiction, as per this November 12 (?), 2013 article by Ian Barker for Beta News,

The rise of intelligent machines has long been fertile ground for science fiction writers, but a new report by technology research specialists Gartner suggests that the future is closer than we think.

“Smartphones are becoming smarter, and will be smarter than you by 2017,” says Carolina Milanesi, research vice president at Gartner. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data”.

Your smartphone will be able to predict your next move or your next purchase based on what it knows about you. This will be made possible by gathering data using a technique called “cognizant computing”.

Gartner analysts will be discussing the future of smart devices at the Gartner Symposium/ITxpo 2013 in Barcelona from November 10-14 [2013].

The Gartner Symposium/ITxpo in Barcelona is ending today (Nov. 14, 2013) but should you be curious about it, you can go here to learn more.

This notion that machines might (or will) get smarter or more powerful than humans (or wizards) is explored by Will.i.am (of the Black Eyed Peas) and futurist Brian David Johnson in their upcoming comic book, Wizards and Robots (mentioned in my Oct. 6, 2013 posting). This notion of machines or technology overtaking human life is also being discussed at the University of Cambridge, where there’s talk of founding a Centre for the Study of Existential Risk (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Our emerging technologies give rise to questions about what constitutes life and where humans might fit in. For example,

  • are sufficiently advanced machines a new form of life?
  • what does it mean when human bodies are partially integrated at the neural level with machinery?
  • what happens when machines have feelings?
  • etc.

While this doesn’t exactly fit into my theme of life/nonlife or machine/flesh, this does highlight how some popular culture efforts are attempting to integrate real science into the storytelling. Here’s an excerpt from an interview with Cosima Herter, the science consultant and namesake/model for one of the characters on Orphan Black (from the March 29, 2013 posting on the space.ca blog),

Cosima Herter is Orphan Black’s Science Consultant, and the inspiration for her namesake character in the series. In real-life, Real Cosima is a PhD. student in the History of Science, Technology, and Medicine Program at the University of Minnesota, working on the History and Philosophy of Biology. Hive interns Billi Knight & Peter Rowley spoke with her about her role on the show and the science behind it…

Q: Describe your role in the making of Orphan Black.

A: I’m a resource for the biology, particularly insofar as evolutionary biology is concerned. I study the history and the philosophy of biology, so I do offer some suggestions and some creative ideas, but also help correct some of the misconceptions about science.  I offer different angles and alternatives to look at the way biological science is represented, so (it’s) not reduced to your stereotypical tropes about evolutionary biology and cloning, but also to provide some accuracy for the scripts.

- See more at: http://www.space.ca/article/Orphan-Black-science-consultant

For anyone not familiar with the series, from the Wikipedia essay (Note: Links have been removed),

Orphan Black is a Canadian science fiction television series starring Tatiana Maslany as several identical women who are revealed to be clones.

Cyborgian dance at McGill University (Canada)

As noted in the Canadian Council of Academies report (State of Science and Technology in Canada, 2012), which was mentioned in my Dec. 28, 2012 posting, the field of visual and performing arts is an area of Canadian strength, and that is due to one province, Québec. Mark Wilson’s Aug. 13, 2013 article for Fast Company and Paul Ridden’s Aug. 7, 2013 article for gizmag.com about McGill University’s Instrumented Bodies: Digital Prostheses for Music and Dance Performance seem to confirm Québec’s leadership.

From Wilson’s Aug. 13, 2013 article (Note: A link has been removed),

One is a glowing exoskeleton spine, while another looks like a pair of cyborg butterfly wings. But these aren’t just costumes; they’re wearable, functional art.

In fact, the team of researchers from the IDML (Input Devices and Music Interaction Laboratory [at McGill University]) who are responsible for the designs go so far as to call their creations “prosthetic instruments.”

Ridden’s Aug. 7, 2013 article offers more about the project’s history and technology,

For the last three years, a small research team at McGill University has been working with a choreographer, a composer, dancers and musicians on a project named Instrumented Bodies. Three groups of sensor-packed, internally-lit digital music controllers that attach to a dancer’s costume have been developed, each capable of wirelessly triggering synthesized music as the performer moves around the stage. Sounds are produced by tapping or stroking transparent Ribs or Visors, or by twisting, turning or moving Spines. Though work on the project continues, the instruments have already been used in a performance piece called Les Gestes which toured Canada and Europe during March and April.

Both articles are interesting, but Wilson’s is the faster read while Ridden’s gives you information you can’t find on the Instrumented Bodies: Digital Prostheses for Music and Dance Performance project webpage itself. From that webpage,

These instruments are the culmination of a three-year long project in which the designers worked closely with dancers, musicians, composers and a choreographer. The goal of the project was to develop instruments that are visually striking, utilize advanced sensing technologies, and are rugged enough for extensive use in performance.

The complex, transparent shapes are lit from within, and include articulated spines, curved visors and ribcages. Unlike most computer music control interfaces, they function both as hand-held, manipulable controllers and as wearable, movement-tracking extensions to the body. Further, since the performers can smoothly attach and detach the objects, these new instruments deliberately blur the line between the performers’ bodies and the instrument being played.

The prosthetic instruments were designed and developed by Ph.D. researchers Joseph Malloch and Ian Hattwick [and Marlon Schumacher] under the supervision of IDMIL director Marcelo Wanderley. Starting with sketches and rough foam prototypes for exploring shape and movement, they progressed through many iterations of the design before arriving at the current versions. The researchers made heavy use of digital fabrication technologies such as laser-cutters and 3D printers, which they accessed through the McGill University School of Architecture and the Centre for Interdisciplinary Research in Music Media and Technology, also hosted by McGill.

Each of the nearly thirty working instruments produced for the project has embedded sensors, power supplies and wireless data transceivers, allowing a performer to control the parameters of music synthesis and processing in real time through touch, movement, and orientation. The signals produced by the instruments are routed through an open-source peer-to-peer software system the IDMIL team has developed for designing the connections between sensor signals and sound synthesis parameters.
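The project webpage doesn’t name the routing software or its parameters, but the idea it describes (connecting incoming sensor signals to sound-synthesis parameters) can be illustrated with a minimal sketch. All names, signal ranges, and parameters below are invented for illustration; they are not taken from the IDMIL system:

```python
# Hypothetical sketch: routing a wireless sensor signal (e.g., a 'Spine' bend
# value) to a sound-synthesis parameter (e.g., a filter cutoff). Names and
# ranges are made up, not drawn from the actual IDMIL software.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))  # clamp so sensor noise can't push past the range
    return out_lo + t * (out_hi - out_lo)

# A 'mapping' is a source signal, a destination parameter, and a scaling range.
mappings = [
    ("spine.bend",  "synth.filter_cutoff_hz", (0.0, 1.0, 200.0, 8000.0)),
    ("visor.touch", "synth.amplitude",        (0.0, 1.0, 0.0, 1.0)),
]

def route(signal_name, raw_value):
    """Return (destination, scaled value) pairs for one incoming sensor frame."""
    out = []
    for src, dst, (ilo, ihi, olo, ohi) in mappings:
        if src == signal_name:
            out.append((dst, scale(raw_value, ilo, ihi, olo, ohi)))
    return out

print(route("spine.bend", 0.5))  # mid-bend maps to a mid-range cutoff
```

In a real system of this kind, the mapping table would be editable while performers experiment, which is presumably why the team emphasizes designing the connections between sensor signals and synthesis parameters as its own activity.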

For those who prefer to listen and watch, the researchers have created a video documentary,

I usually don’t include videos that run past 5 mins. but I’ve made an exception for this almost 15-min. documentary.

I was trying to find mention of a dancer and/or choreographer associated with this project and found two early-stage participants in Ridden’s article: choreographer Isabelle Van Grimde and composer Sean Ferguson.

Nano-Bio Manufacturing Consortium’s request for proposals (RFP) on human performance monitoring platforms

The RFP for human performance monitoring platforms is for a US Air Force Research Laboratory (AFRL) project being managed by the Nano-Bio Manufacturing Consortium (NBMC), according to a July 17, 2013 news item on Nanowerk,

The Nano-Bio Manufacturing Consortium (NBMC) has released its first Request for Proposals (RFP) focused on developing a technology platform for Human Performance Monitors for military and civilian personnel in high stress situations such as pilots, special operations personnel, firefighters, and trauma care providers. Organized by FlexTech Alliance under a grant from the U.S. Air Force Research Laboratory (AFRL) the RFP comes only 3 month since the group officially formed its technical and leadership teams. The consortium members, working with AFRL, issued this RFP to focus on component development and integration for a lightweight, low-cost, conformal and wearable patch.

The July 17, 2013 NBMC news release, which originated the news item, offers more about this patch/monitor,

The heart of this new patch will be a biosensor device to measure chemicals, called biomarkers, in human sweat.  These biomarkers can provide early warnings of performance issues such as stress, fatigue, vigilance or organ damage.  The platform will contain the sensor, a microfluidic system that delivers sweat to the sensor, printed and hybrid control electronics, interconnects, a power supply, wireless communication, and software – all on a flexible substrate that is comfortable to wear.

“An aircraft has numerous sensors which take over 1500 measurements per second to monitor its condition in flight, whereas the most critical part – the pilot – has no monitors,” Malcolm Thompson, chief executive officer of NBMC stated.  “We are working quickly and efficiently to coordinate the expertise being generated at an array of companies, government labs and academic centers.  NBMC’s goal is to establish this technology chain to more rapidly develop products and manufacturing approaches for the Air Force and commercial markets.”

I gather the reasoning is that we should be able to monitor human beings just as we do equipment and machines.

The news release also offers information about the consortium partners,

Initial consortium membership includes a wide range of organizations.  Fortune 500 technology leaders include General Electric, Lockheed Martin, and DuPont Teijin Films.  More entrepreneurial organizations include PARC (a Xerox Company), MC 10, Soligie, American Semiconductor, Brewer Science and UES.  They are joined by the Air Force Research Laboratory and university leaders such as Cornell University, University of Massachusetts Amherst Center for Hierarchical Manufacturing, University of Arizona Center for Integrative Medicine, UC San Diego, University of Cincinnati, Binghamton University, Johns Hopkins University, Northeastern University NSF Nanoscale Science and Engineering Center for High-rate Nano-manufacturing, and Arizona State University.

The NBMC solicitation was posted July 10, 2013 on this page,

2013 SOLICITATION ON HUMAN PERFORMANCE MONITORING & BIOMARKER DETECTION

Request for Proposals Issued: July 10th, 2013

Proposals Due Date: August 9th, 2013 – 5:00 PM PDT

You can find the nine-page RFP here.

I’ve decided to include this description of the thinking that underlies the consortium, from the NBMC Nano-Bio Manufacturing webpage,

The field of nano-biotechnology is advancing rapidly, with many important discoveries and potential applications being identified.  Much of this work is taking place in academia and advanced research labs around the globe.  Once an application is identified, however, the road is still long to making it available to the markets in need.  One of the final steps on that road is understanding how to manufacture in high volume and the lowest cost.  Often this is the defining decision on whether the product even gets to that market.

With new nano-bio technology solutions, the challenges to produce in volume at low-cost are entirely new to many in the field.  New materials, new substrates, new equipment, and unknown properties are just a few of the hurdles that no one organization has been able to overcome.

To address these challenges, FlexTech Alliance, in collaboration with a nationwide group of partners, has formed a Nano-Bio Manufacturing Consortium (NBMC) for the U.S. Air Force Research Laboratory (AFRL). The mission of this partnership is to bring together leading scientists, engineers, and business development professionals from industry and universities in order to work collaboratively in a consortium, and to mature an integrated suite of nano-bio manufacturing technologies to transition to industrial manufacturing.

Initial activities focus on AFRL/ DoD priorities, e.g., physiological readiness and human performance monitoring. Specifically, NBMC matures nano-bio manufacturing technologies to create an integrated suite of reconfigurable and digitized fabrication methods that are compatible with biological and nanoparticle materials and to transition thin film, mechanically compliant device concepts through a foundry-like manufacturing flow.

The long-term vision is that NBMC operates at the confluence of four core emerging disciplines: nanotechnology, biotechnology, advanced (additive) manufacturing, and flexible electronics. The convergence of these disparate fields enables advanced sensor architectures for real-time, remote physiological and health/medical monitoring.

[downloaded from http://www.nbmc.org/nanobiomanufacturing/nbm_intro/]


It seems to me that human beings are increasingly being viewed as just another piece of equipment.

The gold beneath your skin (artificial skin, that is)

Artificial skin that can sense as if it were real skin isn’t here yet but scientists at Technion-Israel Institute of Technology have created a flexible sensor that could fulfill that promise. From a July 9, 2013 news item on Azonano,

Using tiny gold particles and a kind of resin, a team of scientists at the Technion-Israel Institute of Technology has discovered how to make a new kind of flexible sensor that one day could be integrated into electronic skin, or e-skin.

If scientists learn how to attach e-skin to prosthetic limbs, people with amputations might once again be able to feel changes in their environments. The findings appear in the June issue of ACS Applied Materials & Interfaces.

The July 8, 2013 American Technion Society news release by Kevin Hattori, which originated the news item, describes the problems with developing flexible sensors that can mimic natural skin,

Researchers have long been interested in flexible sensors, but have had trouble adapting them for real-world use. To make its way into mainstream society, a flexible sensor would have to run on low voltage (so it would be compatible with the batteries in today’s portable devices), measure a wide range of pressures, and make more than one measurement at a time, including humidity, temperature, pressure, and the presence of chemicals. In addition, these sensors would also have to be able to be made quickly, easily, and cheaply.

Here are more details about the sensor and about how the researchers created it,

The Technion team’s sensor has all of these qualities. The secret is the use of monolayer-capped nanoparticles that are only 5-8 nanometers in diameter. They are made of gold and surrounded by connector molecules called ligands. In fact, “monolayer-capped nanoparticles can be thought of as flowers, where the center of the flower is the gold or metal nanoparticle and the petals are the monolayer of organic ligands that generally protect it,” says Haick.

The team discovered that when these nanoparticles are laid on top of a substrate – in this case, made of PET (flexible polyethylene terephthalate), the same plastic found in soda bottles – the resulting compound conducted electricity differently depending on how the substrate was bent. (The bending motion brings some particles closer to others, increasing how quickly electrons can pass between them.) This electrical property means that the sensor can detect a large range of pressures, from tens of milligrams to tens of grams. “The sensor is very stable and can be attached to any surface shape while keeping the function stable,” says Dr. Nir Peled, Head of the Thoracic Cancer Research and Detection Center at Israel’s Sheba Medical Center, who was not involved in the research.

And by varying how thick the substrate is, as well as what it is made of, scientists can modify how sensitive the sensor is. Because these sensors can be customized, they could in the future perform a variety of other tasks, including monitoring strain on bridges and detecting cracks in engines.
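The pressure response described above (conduction rising as bending pushes nanoparticles closer together) is commonly modeled as electron tunneling between neighbouring particles, with junction resistance growing exponentially with the inter-particle gap. Here is a toy illustration of that relationship; the constants are invented for the sketch, not taken from the paper:

```python
import math

# Toy tunneling model: junction resistance R = R0 * exp(beta * gap).
# R0 and BETA are hypothetical constants, not values from the Technion paper.
R0 = 1.0     # baseline resistance (arbitrary units)
BETA = 10.0  # tunneling decay constant (per nm, hypothetical)

def resistance(gap_nm):
    """Resistance of one nanoparticle junction for a given gap in nanometres."""
    return R0 * math.exp(BETA * gap_nm)

rest = resistance(1.0)  # unbent substrate: ~1 nm gap between particles
bent = resistance(0.9)  # bending narrows the gap slightly

# A small change in gap produces a large, easily measured change in resistance,
# which is what makes the compound useful as a pressure/strain sensor.
print(f"relative resistance drop: {(rest - bent) / rest:.1%}")
```

The exponential dependence is the key point: tiny mechanical deflections translate into large electrical signals, which is consistent with the wide pressure range (tens of milligrams to tens of grams) the researchers report.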

According to research team leader Professor Hossam Haick, the new sensor is at least ten times more sensitive to touch than existing touch-based e-skin.

Here’s a link to and a citation for the published paper,

Tunable Touch Sensor and Combined Sensing Platform: Toward Nanoparticle-based Electronic Skin by Meital Segev-Bar, Avigail Landman, Maayan Nir-Shapira, Gregory Shuster, and Hossam Haick. ACS Appl. Mater. Interfaces, 2013, 5 (12), pp. 5531–5541. DOI: 10.1021/am400757q. Publication Date (Web): June 4, 2013.

Copyright © 2013 American Chemical Society

The paper is behind a paywall.

Human Bionic Project: amputations, prosthetics, and disabilities

Sydney Brownstone’s June 26, 2013 article about The Human Bionic Project for Fast Company touches on human tragedy and the ways in which we attempt to cope, focusing on researcher David Sengeh’s work (Note: Links have been removed),

In the Iraq and Afghanistan wars alone, nearly 1,600 American soldiers have woken up without a limb. Fifteen survivors of the Boston marathon bombings are new amputees. And in Sierra Leone, where MIT graduate student David Sengeh is from, brutal tactics during the country’s 11-year civil war resulted in somewhere between 4,000 and 10,000 amputations in a country of less than 6 million people.

Many amputees go through the costly, lengthy process of transitioning to prosthetics, but it’s difficult even for prosthetic research specialists to gather information about the replacement parts outside their narrow fields. That’s part of the reason why, in December of last year, Sengeh and a research team began developing an interactive Inspector Gadget–a repository of all the FDA-approved [US Food and Drug Administration] replacement parts they could find.

So far, the Human Bionic Project has between 40 and 50 points of reference on its corporeal map–everything from artificial hearts to bionic jaws. In addition to photos and descriptions, the team will soon be looking to source videos of prosthetics in action from the public. Sengeh also hopes to integrate a timeline, tracking bionic parts throughout history, from the bionic toes of Ancient Egypt to the 3-D printed fingers of modern times.

“In [Haitian and Sierra Leonian] Creole, the word for disabled, like an amputee, is ‘scrap,’” Sengeh said. “I wanted to change that, because I know that we can get full functionality and become able-bodied.”

Do read Brownstone’s article as I haven’t, by any means, excerpted all the interesting bits.

There’s also more at The Human Bionic Project. Here’s a description (or manifesto) from the home page,

The Human Bionic Project begs for the fundamental redefinition of disability, illness, and disease as we have known it throughout history. It dares us to imagine the seamless interaction between the human being and machines. This interactive learning platform enables the user to visualize and learn about the comprehensive advances in human repair and enhancement that can be achieved with current technology. We can also wonder about what the human being will look like by the 22nd Century (year 2100) based on cutting edge advances in science and technology — more specifically in the fields of biomechanics, and electronics.

The Human Bionic Project serves as a call to action for technologists all around the world to think about the design of bionics in a fundamentally new way; how can we engineer all bionic elements for the human body using a similar protocol and architecture? Could we have the behaviour of the bionic knee be in sync with that of the bionic ankle of an above-knee amputee? How can we design a bionic eye that sees beyond what the biological eye can observe and use that information to help humans in critical situations? We have to imagine bionics not as singular units developed to replace or augment human parts but rather as part of a human-bionic system aimed at redefining what it means to be human.

Some of the ideas presented are already products used today, while others are prototypes explored by various research laboratories and inquisitive humans around the world. The works presented here are not ours and are publicly available. We have credited all the authors who are leading these extraordinary research initiatives.

You can find more about prosthetics, etc. on the ‘Inspector Gadget‘ page (it features an outline of a human body highlighted with red dots; click on a red dot to get details about prosthetics and other forms of augmentation). I don’t find this to be an especially friendly or intuitive interface. I think this is an MIT (Massachusetts Institute of Technology) student project, and I find MIT tends to favour minimalism on its institutional and student websites. Still, there’s some fascinating information if you care to persist.

Here are more details about the folks and the funding supporting The Human Bionic Project (from the bottom of the home page),

A project by David Moinina Sengeh. Collaborator: Reza Naeeni. Web development: Yannik Messerli. Undergraduate research assistant: Nicholas Fine. Funded by The Other Festival at MIT Media Lab (2013). Follow us on twitter: @humanbionicproj. …

I last mentioned human enhancement/augmentation in my June 17, 2013 commentary on You Are Very Star, a transmedia theatre experience taking place in Vancouver until June 29, 2013. I have written many times on the topic of human enhancement including a May 2, 2013 posting about a bionic ear; a Feb. 15, 2013 posting about a bionic eye; and a Jan. 30, 2013 posting about a BBC documentary on building a bionic man, amongst others.

Review of ‘You Are Very Star’ transmedia show (in Vancouver, Canada)

Blasting backwards (1968) and forwards (2048) in time, the You Are Very Star immersive, transmedia experience offered by Vancouver’s Electric Company Theatre is an exciting experiment as I discovered on opening night, June 15, 2013.

Don’t expect to sit passively in a theatre seat. We trotted around the building to visit, or remember, 1968 in one theatre, then zipped out to a 20-minute 2013 intermission where we played a game (they gave us maps with our programmes, and you are invited to return the maps at the end of the intermission), and, finally, were escorted to the planetarium theatre to encounter 2048.

I’m not sure about the artistic intention for the 1968 portion of the show. It was one of those situations where my tiny bit of knowledge and considerable fund of ignorance combined to create confusion. For example, one of the characters, Earle Birney, a poet, writer, and titan of Canadian literature, did found the creative writing programme at the University of British Columbia as they note in the show but by 1968 he’d left Vancouver for Toronto. One of the other characters in this segment is called Esther, a black feminist and more, with whom Birney’s character appears to establish a liaison. Birney was married to an Esther whom I met some years ago. She was a white Englishwoman and a feminist but of a somewhat different character than the Esther of the play.

In addition, the clothing wasn’t quite right. No tie dye, no macrame, no beads, no granny dresses, and not enough fringe. Plus, I can’t recall seeing any bell bottom pants, mini dresses and skirts, and/or go go boots.

There were some interesting tonal changes in this section ranging from humour, to political angst and anger, to pathos. The depiction of the professor who’s decided to let people grade themselves and who takes a hallucinogenic drug in front of his class seemed pretty typical of a few of the crackpot professors of the time.

Unexpectedly, the professor decides to get high on ayahuasca. LSD, magic mushrooms, marijuana and hashish would have been more typical. I can understand clothing and some of the dialogue not being typical of the period but getting the preferred drugs wrong seems odd, which is why I questioned *whether the artists introduced these incongruencies intentionally.

The actors all shone at one time or another as they delivered some pretty snappy dialogue. I’m hoping they tighten this section up so there’s less waiting for the snappy stuff, and perhaps they could find some device other than ‘xx hours/days earlier’ to signify a change in the timeframe. I lost count of how often they flashed a slide onscreen notifying us that the next scene had taken place at an earlier time. Finally, I loved the shadow puppets, but they were on for a brief time only, never to return.

Our intermission was pretty active. There were lots of places on the map, given with the programme, where one was meant to discover things. I never did figure out what was happening with the stuffed toys that were being given out but I’m ok with those kinds of mysteries.

The last stop was the planetarium theatre for 2048. Very interesting costuming, especially the head gear. Still, I have to ask why do people in the future, in the more ‘optimistic’ versions of it, tend to wear white?

I found 2048 the most interesting part but that may be due to the references to human enhancement (a topic I’ve covered here a number of times). The playwrights also seem to have spent some time studying Ray Kurzweil and the singularity he’s predicting. From the Technological singularity essay on Wikipedia (Note: Links and footnotes have been removed),

The technological singularity is the theoretical emergence of superintelligence through technological means. Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an occurrence beyond which events cannot be predicted.

The first use of the term “singularity” in this context was by mathematician John von Neumann, who in the mid-1950s spoke of “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

Proponents of the singularity typically postulate an “intelligence explosion”, where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass that of any human.

Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030. At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial generalized intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there’s an 80% probability that the singularity will occur in a range of 5 to 100 years. An alternative view, the “mitochondrial singularity,” proposed by microbiologist Joan Slonczewski, holds that the singularity is a gradual process that began centuries ago, as humans have outsourced our intelligence to machines, and that we may become reduced to vestigial power providers analogous to the mitochondria of living cells.

I thank the playwrights for introducing into this piece some of the more difficult aspects of the science and technology discussions taking place. For example, both those who are enhanced and moving towards the singularity and those who are not enhanced are represented here, so the playwrights have introduced some ideas about the social implications of employing new and emerging technologies.

You Are Very Star is not a perfect production but it is, as I noted earlier, very exciting, both for the ways the company is trying to immerse audiences in an experience and for the ideas and dialogue they are attempting to stimulate.

The show runs until June 29, 2013, and tickets are $30.

TO ORDER

youareverystar.brownpapertickets.com

1-800-838-3006

This production is being held at,

H.R. MacMillan Space Centre
1100 Chestnut Street, in Vanier Park
8:00pm Tues – Sun
2:00pm Sun
12:00pm Thurs June 20

Do enjoy!

* Correction June 19, 2013: ‘where’ changed to ‘whether’

ETA June 24, 2013: I noticed that where I use the word ‘enhancement’, other reviewers, such as Colin Thomas in his June 17, 2013 review for the Georgia Straight, are using ‘augment’.

Reloading the 21st Century body at University College London

I got an interesting announcement yesterday morning (June 3, 2013): a Call for Papers for the 21st Century Body Reloaded Symposium to be held Nov. 7 – 8, 2013 at University College London (UK). Here are the details,

CALL FOR PAPERS

 ‘The 21st Century Body Reloaded’

Symposium

7-8th November 2013, London

Additional details to be confirmed shortly

Exciting developments in the life sciences and their application in biotechnology are helping to provide pioneering cures and therapies for inherited and degenerative diseases. Consider genomics and genetic based therapies, neuroscience and neuropharmacology, ICT implants and prosthetics, nanomedicine and the required socio-cultural accommodations to ageing and you will see how the way in which we perceive ourselves and those around us is slowly being recast.  As our knowledge and its application continues to grow and expand, the range, scope and magnitude of what we are able to achieve seems to be limitless.

Building on the success of last year’s event and the many positive and encouraging comments from participants, this year’s interdisciplinary symposium is convened in order to further build capacity as well as consolidate existing scholarship on perspectives on the human body and identity in the face of new advances in emerging technologies.

FURTHER DETAILS

Technology forecasters point to advances in nanoscience and nanotechnology as an ‘enabling technology’ which opens up further opportunities when combined with other technologies.  This “convergence” of new emerging technologies therefore becomes a matter of great debate. This is seen, for example, when advances in nanoscience converge with developments in biotechnology, which also utilise developments in information technology to capture and simulate human abilities using artificial intelligence systems and, more controversially, cognitive science.  As the animal-human distinction becomes increasingly blurred, it is plain to see the increasing growth of human power over nature in all of its forms including traditional and contemporary understanding about human nature itself. More than just speculative science fiction, talk of brain implants and neural imaging, cyborg enhancement and virtual reality simulation is suddenly becoming a pressing reality.

At this time we are faced with a key question: what does it mean to be human in the 21st Century? A series of identity crises emerge. Against the backdrop of developments in ICT, and especially in virtual contexts we are keen to ensure that our identities are protected and can be authenticated appropriately, without fear of them being reconstructed by others. Likewise, concern is expressed over the question of privacy and surveillance when we encounter new forms of identifying technologies such as biometrics which could challenge our freedom and dignity. As genetic and neuroscience technologies evolve, they provoke and unsettle some of our traditional perceptions of who and what we are.

It is envisaged that this symposium will contribute to the conversation on this theme and by drawing from insights and ideas from across the disciplines, the aim will be to chart challenges to, and changes in perceptions of identity and the human body in the 21st century.

Some key questions this symposium will aim to address include the following:

●      Is human identity being transformed, redefined or superseded through new developments in medicine and technology?

●      Do these new emerging technologies present as radical and revolutionary changes to how we see ourselves (as is sometimes claimed)? Or, are they in fact no different to their predecessors?

●      How are we to evaluate or assess the moral significance of these new technologies to our identity as humans?

●      What does it mean to have identity and to be identifiable in the 21st Century?

●      Are new technologies helping to redefine what we recognise as the human body? Are they in some ways helping to make the human body redundant? If so, in what ways?

●      What are the social, ethical and policy implications of these changes, both locally and globally, as we increasingly encounter the rapid expansion of biotechnologies worldwide?

●      Is altering the shape and appearance of the body contributing to our loss of contact with the body? How does this affect traditional ideas about the mind/body distinction?

Suggested topics:

  • Ageing and immortality;
  • Artificial intelligence; the Turing test; machine understanding;
  • Artificial life; computational biology;
  • Biometrics;
  • Cognitive science;
  • Converging technologies (nano-bio-info-cogno);
  • Ethical and social implications of advances in emerging technologies;
  • Genetics;
  • Human enhancement;
  • Implant technology;
  • Medical anthropology;
  • Neuroscience.

Organising committee:

Dr Yasemin J. Erden, Lecturer in Philosophy, CBET, St Mary’s University College, Twickenham   [email protected]

Deborah Gale, MA, King’s College London    [email protected]

Matt James MA, Director, BioCentre    [email protected]

Aaron Parkhurst, PhD research candidate, Medical Anthropology, University College London   [email protected]

Dr. Stephen Rainey, Visiting Lecturer in philosophy, St Mary’s University College, Twickenham     [email protected]

SUBMISSION DETAILS

We invite submission of abstracts in the first instance, with a word limit of around 500-750 words (maximum), and not including references. The abstract should clearly outline main arguments and conclusions of the paper.  On the basis of these abstracts, the academic organising committee will compose a short list of speakers to be invited to submit full-length papers for presentation at the symposium, which will be held in London in November 2013.

All abstracts must be submitted through EasyChair (in a Word attachment; without inclusion of personal details to allow for blind reviewing).

https://www.easychair.org/conferences/?conf=c21stbody-reloaded

A selection of successful papers from last year’s symposium were published in a special issue of The New Bioethics: A multidisciplinary journal on biotechnology and the body.

This year a selection of papers which are included in the symposium will also be invited to submit copies for consideration to a special publication on the same theme.

WEB LINKS

http://www.bioethics.ac.uk/news/Call-for-Papers-The-21st-Century-Body-Reloaded-.php

IMPORTANT DATES

Tuesday 9th July 2013 Deadline for submission of abstracts to Easychair (500-750 word limit).
Monday 28th October 2013 Final version of papers to be submitted to Easychair
w.c. 4th November 2013 Symposium, University College London

CONTACT

 For more information on submissions, please contact the organising committee directly.

THANKS

 The organising committee is grateful for the support provided by BioCentre and the Department of Anthropology, University College London.

I cannot find anything more about this conference. There doesn’t seem to be a website or webpage for it, nor is it mentioned on the University College London website. (In fact, I couldn’t find anything for last year’s event either.) I hope there’s something more once they’ve received the proposals/abstracts and organized the schedule.