Tag Archives: singularity

AI-led corporate entities as a new species of legal subject

An AI (artificial intelligence) agent running a business? Not to worry; lawyers are busy figuring out the implications, according to this October 26, 2023 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

For the first time in human history, say Daniel Gervais and John Nay in a Policy Forum, nonhuman entities that are not directed by humans – such as artificial intelligence (AI)-operated corporations – should enter the legal system as a new “species” of legal subject. AI has evolved to the point where it could function as a legal subject with rights and obligations, say the authors. As such, before the issue becomes too complex and difficult to disentangle, “interspecific” legal frameworks need to be developed by which AI can be treated as legal subjects, they write.

Until now, the legal system has been univocal – it allows only humans to speak to its design and use. Nonhuman legal subjects like animals have necessarily instantiated their rights through human proxies. However, their inclusion is less about defining and protecting the rights and responsibilities of these nonhuman subjects and more a vehicle for addressing human interests and obligations as it relates to them.

In the United States, corporations are recognized as “artificial persons” within the legal system. However, the laws of some jurisdictions do not always explicitly require corporate entities to have human owners or managers at their helm. Thus, by law, nothing generally prevents an AI from operating a corporate entity. Here, Gervais and Nay highlight the rapidly realizing concept of AI-operated “zero-member LLCs” – or a corporate entity operating autonomously without any direct human involvement in the process.

The authors discuss several pathways in which such AI-operated LLCs and their actions could be handled within the legal system. As the idea of ceasing AI development and use is highly unrealistic, Gervais and Nay discuss other options, including regulating AI by treating the machines as legally inferior to humans or engineering AI systems to be law-abiding and bringing them into the legal fold now before it becomes too complicated to do so.

Gervais and Nay have written an October 26, 2023 essay “AIs could soon run businesses – it’s an opportunity to ensure these ‘artificial persons’ follow the law” for The Conversation, which helps clarify matters, Note: Links have been removed,

Only “persons” can engage with the legal system – for example, by signing contracts or filing lawsuits. There are two main categories of persons: humans, termed “natural persons,” and creations of the law, termed “artificial persons.” These include corporations, nonprofit organizations and limited liability companies (LLCs).

Up to now, artificial persons have served the purpose of helping humans achieve certain goals. For example, people can pool assets in a corporation and limit their liability vis-à-vis customers or other persons who interact with the corporation. But a new type of artificial person is poised to enter the scene – artificial intelligence systems, and they won’t necessarily serve human interests.

As scholars who study AI and law we believe that this moment presents a significant challenge to the legal system: how to regulate AI within existing legal frameworks to reduce undesirable behaviors, and how to assign legal responsibility for autonomous actions of AIs.

One solution is teaching AIs to be law-abiding entities.

This is far from a philosophical question. The laws governing LLCs in several U.S. states do not require that humans oversee the operations of an LLC. In fact, in some states it is possible to have an LLC with no human owner, or “member” [emphasis mine] – for example, in cases where all of the partners have died. Though legislators probably weren’t thinking of AI when they crafted the LLC laws, the possibility for zero-member LLCs opens the door to creating LLCs operated by AIs.

Many functions inside small and large companies have already been delegated to AI in part, including financial operations, human resources and network management, to name just three. AIs can now perform many tasks as well as humans do. For example, AIs can read medical X-rays and do other medical tasks, and carry out tasks that require legal reasoning. This process is likely to accelerate due to innovation and economic interests.

I found the essay illuminating and the abstract for the paper (link and citation at the end of this post) a little surprising,

Several experts have warned about artificial intelligence (AI) exceeding human capabilities, a “singularity” [emphasis mine] at which it might evolve beyond human control. Whether this will ever happen is a matter of conjecture. A legal singularity is afoot, however: For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new “species” of legal subjects. This possibility of an “interspecific” legal system provides an opportunity to consider how AI might be built and governed. We argue that the legal system may be more ready for AI agents than many believe. Rather than attempt to ban development of powerful AI, wrapping of AI in legal form could reduce undesired AI behavior by defining targets for legal action and by providing a research agenda to improve AI governance, by embedding law into AI agents, and by training AI compliance agents.

It was a little unexpected to see the ‘singularity’ mentioned; it’s a term I associate with the tech and sci-fi communities. For anyone unfamiliar with the term, here’s a description from the ‘Technological singularity’ Wikipedia entry, Note: Links have been removed,

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reports in 1958 an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term “singularity” were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to “the knotted space-time at the center of a black hole”,[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil’s 2005 book The Singularity Is Near, predicting singularity by 2045.[7]

Finally, here’s a link to and a citation for the paper,

Law could recognize nonhuman AI-led corporate entities by Daniel J. Gervais and John J. Nay. Science 26 Oct 2023 Vol 382, Issue 6669 pp. 376-378 DOI: 10.1126/science.adi8678

This paper is behind a paywall.

Revival of dead pig brains raises moral questions about life and death

The line between life and death may not be what we thought it was, according to some research reported in April 2019. Ed Yong’s April 17, 2019 article (behind a paywall) for The Atlantic was my first inkling of the life-death questions raised by some research performed at Yale University (Note: Links have been removed),

The brain, supposedly, cannot long survive without blood. Within seconds, oxygen supplies deplete, electrical activity fades, and unconsciousness sets in. If blood flow is not restored, within minutes, neurons start to die in a rapid, irreversible, and ultimately fatal wave.

But maybe not? According to a team of scientists led by Nenad Sestan at Yale School of Medicine, this process might play out over a much longer time frame, and perhaps isn’t as inevitable or irreparable as commonly believed. Sestan and his colleagues showed this in dramatic fashion—by preserving and restoring signs of activity in the isolated brains of pigs that had been decapitated four hours earlier.

The team sourced 32 pig brains from a slaughterhouse, placed them in spherical chambers, and infused them with nutrients and protective chemicals, using pumps that mimicked the beats of a heart. This system, dubbed BrainEx, preserved the overall architecture of the brains, preventing them from degrading. It restored flow in their blood vessels, which once again became sensitive to dilating drugs. It stopped many neurons and other cells from dying, and reinstated their ability to consume sugar and oxygen. Some of these rescued neurons even started to fire. “Everything was surprising,” says Zvonimir Vrselja, who performed most of the experiments along with Stefano Daniele.

… “I don’t see anything in this report that should undermine confidence in brain death as a criterion of death,” says Winston Chiong, a neurologist at the University of California at San Francisco. The matter of when to declare someone dead has become more controversial since doctors began relying more heavily on neurological signs, starting around 1968, when the criteria for “brain death” were defined. But that diagnosis typically hinges on the loss of brainwide activity—a line that, at least for now, is still final and irreversible. After MIT Technology Review broke the news of Sestan’s work a year ago, he started receiving emails from people asking whether he could restore brain function to their loved ones. He very much cannot. BrainEx isn’t a resurrection chamber.

“It’s not going to result in human brain transplants,” adds Karen Rommelfanger, who directs Emory University’s neuroethics program. “And I don’t think this means that the singularity is coming, or that radical life extension is more possible than before.”

So why do the study? “There’s potential for using this method to develop innovative treatments for patients with strokes or other types of brain injuries, and there’s a real need for those kinds of treatments,” says L. Syd M Johnson, a neuroethicist at Michigan Technological University. The BrainEx method might not be able to fully revive hours-dead brains, but Yama Akbari, a critical-care neurologist at the University of California at Irvine, wonders whether it would be more successful if applied minutes after death. Alternatively, it could help to keep oxygen-starved brains alive and intact while patients wait to be treated. “It’s an important landmark study,” Akbari says.

In his article, Yong notes that the study still needs to be replicated; he also probes some of the ethical issues associated with the latest neuroscience research.

Nature published the Yale study,

Restoration of brain circulation and cellular functions hours post-mortem by Zvonimir Vrselja, Stefano G. Daniele, John Silbereis, Francesca Talpo, Yury M. Morozov, André M. M. Sousa, Brian S. Tanaka, Mario Skarica, Mihovil Pletikos, Navjot Kaur, Zhen W. Zhuang, Zhao Liu, Rafeed Alkawadri, Albert J. Sinusas, Stephen R. Latham, Stephen G. Waxman & Nenad Sestan. Nature 568, 336–343 (2019) DOI: https://doi.org/10.1038/s41586-019-1099-1 Published 17 April 2019 Issue Date 18 April 2019

This paper is behind a paywall.

Two neuroethicists had this to say (link to their commentary in Nature follows) as per an April 17, 2019 news release from Case Western Reserve University (also on EurekAlert), Note: Links have been removed,

The brain is more resilient than previously thought. In a groundbreaking experiment published in this week’s issue of Nature, neuroscientists created an artificial circulation system that successfully restored some functions and structures in donated pig brains–up to four hours after the pigs were butchered at a USDA food processing facility. Though there was no evidence of restored consciousness, brains from the pigs were without oxygen for hours, yet could still support key functions provided by the artificial system. The result challenges the notion that mammalian brains are fully and irreversibly damaged by a lack of oxygen.

“The assumptions have always been that after a couple minutes of anoxia, or no oxygen, the brain is ‘dead,'” says Stuart Youngner, MD, who co-authored a commentary accompanying the study with Insoo Hyun, PhD, both professors in the Department of Bioethics at Case Western Reserve University School of Medicine. “The system used by the researchers begs the question: How long should we try to save people?”

In the pig experiment, researchers used an artificial perfusate (a type of cell-free “artificial blood”), which helped brain cells maintain their structure and some functions. Resuscitative efforts in humans, like CPR, are also designed to get oxygen to the brain and stave off brain damage. After a period of time, if a person doesn’t respond to resuscitative efforts, emergency medical teams declare them dead.

The acceptable duration of resuscitative efforts is somewhat uncertain. “It varies by country, emergency medical team, and hospital,” Youngner said. Promising results from the pig experiment further muddy the waters about when to stop life-saving efforts.

At some point, emergency teams must make a critical switch from trying to save a patient, to trying to save organs, said Youngner. “In Europe, when emergency teams stop resuscitation efforts, they declare a patient dead, and then restart the resuscitation effort to circulate blood to the organs so they can preserve them for transplantation.”

The switch can involve extreme means. In the commentary, Youngner and Hyun describe how some organ recovery teams use a balloon to physically cut off blood circulation to the brain after declaring a person dead, to prepare the organs for transplantation.

The pig experiment implies that sophisticated efforts to perfuse the brain might maintain brain cells. If technologies like those used in the pig experiment could be adapted for humans (a long way off, caution Youngner and Hyun), some people who, today, are typically declared legally dead after a catastrophic loss of oxygen could, tomorrow, become candidates for brain resuscitation, instead of organ donation.

Said Youngner, “As we get better at resuscitating the brain, we need to decide when are we going to save a patient, and when are we going to declare them dead–and save five or more who might benefit from an organ.”

Because brain resuscitation strategies are in their infancy and will surely trigger additional efforts, the scientific and ethics community needs to begin discussions now, says Hyun. “This study is likely to raise a lot of public concerns. We hoped to get ahead of the hype and offer an early, reasoned response to this scientific advance.”

Both Youngner and Hyun praise the experiment as a “major scientific advancement” that is overwhelmingly positive. It raises the tantalizing possibility that the grave risks of brain damage caused by a lack of oxygen could, in some cases, be reversible.

“Pig brains are similar in many ways to human brains, which makes this study so compelling,” Hyun said. “We urge policymakers to think proactively about what this line of research might mean for ongoing debates around organ donation and end of life care.”

Here’s a link to and a citation to the Nature commentary,

Pig experiment challenges assumptions around brain damage in people by Stuart Youngner and Insoo Hyun. Nature 568, 302-304 (2019) DOI: 10.1038/d41586-019-01169-8 April 17, 2019

This paper is open access.

I was hoping to find out more about BrainEx, but this April 17, 2019 US National Institute of Mental Health news release is all I’ve been able to find in my admittedly brief online search. The news release offers more celebration than technical detail.

Quick comment

Interestingly, there hasn’t been much of a furor over this work. Not yet.

Nanotechnology at the movies: Transcendence opens April 18, 2014 in the US & Canada

Screenwriter Jack Paglen has an intriguing interpretation of nanotechnology, one he (along with the director) shares in an April 13, 2014 article by Larry Getlen for the NY Post and in his movie, Transcendence, which is opening in the US and Canada on April 18, 2014. First, here are a few of the more general ideas underlying his screenplay,

In “Transcendence” — out Friday [April 18, 2014] and directed by Oscar-winning cinematographer Wally Pfister (“Inception,” “The Dark Knight”) — Johnny Depp plays Dr. Will Caster, an artificial-intelligence researcher who has spent his career trying to design a sentient computer that can hold, and even exceed, the world’s collective intelligence.

After he’s shot by antitechnology activists, his consciousness is uploaded to a computer network just before his body dies.

“The theories associated with the film say that when a strong artificial intelligence wakes up, it will quickly become more intelligent than a human being,” screenwriter Jack Paglen says, referring to a concept known as “the singularity.”

It should be noted that anti-technology terrorists do exist. I don’t think I’ve covered that topic in a while, so an Aug. 31, 2012 posting is the most recent; despite the title, “In depth and one year later—the nanotechnology bombings in Mexico,” it provides an overview of sorts. For a more up-to-date view, you can read Eric Markowitz’s April 9, 2014 article for Vocativ.com. I do have one observation about the article, where Markowitz has linked some recent protests in San Francisco to the bombings in Mexico. Those protests in San Francisco seem more like a ‘poor vs. the rich’ situation where the rich happen to come from the technology sector.

Getting back to “Transcendence” and singularity, there’s a good Wikipedia entry describing the ideas and some of the thinkers behind the notion of a singularity or technological singularity, as it’s sometimes called (Note: Links have been removed),

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term “singularity” in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[2] The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[3] Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

Proponents of the singularity typically postulate an “intelligence explosion”,[4][5] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass that of any human.

Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial generalized intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there is an 80% probability that the singularity will occur between 2017 and 2112.[8]

The ‘technological singularity’ is controversial and contested (from the Wikipedia entry).

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[104] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[105]

By the way, this movie is mentioned briefly in the pop culture portion of the Wikipedia entry.

Getting back to Paglen and his screenplay, here’s more from Getlen’s article,

… as Will’s powers grow, he begins to pull off fantastic achievements, including giving a blind man sight, regenerating his own body and spreading his power to the water and the air.

This conjecture was influenced by nanotechnology, the field of manipulating matter at the scale of a nanometer, or one-billionth of a meter. (By comparison, a human hair is around 70,000-100,000 nanometers wide.)

“In some circles, nanotechnology is the holy grail,” says Paglen, “where we could have microscopic, networked machines [emphasis mine] that would be capable of miracles.”

The potential uses of, and implications for, nanotechnology are vast and widely debated, but many believe the effects could be life-changing.

“When I visited MIT,” says Pfister, “I visited a cancer research institute. They’re talking about the ability of nanotechnology to be injected inside a human body, travel immediately to a cancer cell, and deliver a payload of medicine directly to that cell, eliminating [the need to] poison the whole body with chemo.”

“Nanotechnology could help us live longer, move faster and be stronger. It can possibly cure cancer, and help with all human ailments.”
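The scale figures in the excerpt above are easy to sanity-check. Here’s a minimal back-of-the-envelope sketch; the hair-width range comes from the article, while the 50 nm particle size is my own illustrative assumption,

```python
# Back-of-the-envelope check on the nanometer figures quoted above.
NM_PER_METER = 1e9  # a nanometer is one-billionth of a meter

hair_width_nm = 85_000  # midpoint of the article's 70,000-100,000 nm range
particle_nm = 50        # hypothetical nanoparticle size, for illustration only

# Confirm the definition: 1 nm = 1e-9 m.
assert 1 / NM_PER_METER == 1e-9

# How many 50 nm particles would span a single human hair?
ratio = hair_width_nm / particle_nm
print(f"About {ratio:,.0f} particles of 50 nm would span one hair.")
```

Even a “large” nanoparticle is more than a thousand times narrower than a hair, which is the point the article’s comparison is making.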

I find the ‘golly gee whizness’ of Paglen’s and Pfister’s take on nanotechnology disconcerting, but they can’t be dismissed. There are projects where people are testing retinal implants that allow them to see again. There is also a lot of work in the field of medicine designed to make therapeutic procedures gentler on the body by having them act on diseased tissue while ignoring healthy tissue (sadly, this is still not possible). As for human enhancement, I have so many pieces that it has its own category on this blog. I first wrote about it in a four-part series starting with this one: Nanotechnology enables robots and human enhancement: part 1. (You can read the series by scrolling past the end of the posting and clicking on the next part, or search the category and pick through the more recent pieces.)

I’m not sure if this error is Paglen’s or Getlen’s, but nanotechnology is not “microscopic, networked machines,” as Paglen’s quote strongly suggests. Some nanoscale devices could be described as machines (often called nanobots), but there are also nanoparticles, nanotubes, nanowires, and more that cannot be described as machines or even as devices. More importantly, it seems Paglen’s main concern is this,

“One of [science-fiction author] Arthur C. Clarke’s laws is that any sufficiently advanced technology is indistinguishable from magic. That very quickly would become the case if this happened, because this artificial intelligence would be evolving technologies that we do not understand, and it would be capable of miracles by that definition,” says Paglen. [emphasis mine]

This notion of “evolving technologies that we do not understand” brings to mind a project that was announced at the University of Cambridge (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence.’ The question it raises, ‘What if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

While I do have some reservations about how Paglen and Pfister describe the science, I appreciate their interest in communicating the scientific ideas, particularly those underlying Paglen’s screenplay.

For anyone who may be concerned about the likelihood of emulating a human brain and uploading it to a computer, there’s an April 13, 2014 article by Luke Muehlhauser and Stuart Armstrong for Slate discussing that very possibility (Note 1: Links have been removed; Note 2: Armstrong is mentioned in this posting’s excerpt from the Wikipedia entry on the technological singularity),

Today scientists can’t even emulate the brain of a tiny worm called C. elegans, which has 302 neurons, compared with the human brain’s 86 billion neurons. Using models of expected technological progress on the three key problems, we’d estimate that we wouldn’t be able to emulate human brains until at least 2070 (though this estimate is very uncertain).

But would an emulation of your brain be you, and would it be conscious? Such questions quickly get us into thorny philosophical territory, so we’ll sidestep them for now. For many purposes—estimating the economic impact of brain emulations, for instance—it suffices to know that the brain emulations would have humanlike functionality, regardless of whether the brain emulation would also be conscious.
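The gap Muehlhauser and Armstrong describe can be quantified directly from the two neuron counts they cite. A rough sketch (the counts are the article’s; the arithmetic is mine),

```python
# Neuron counts quoted in the Slate article.
c_elegans_neurons = 302            # the worm science can't yet emulate
human_neurons = 86_000_000_000     # the human brain, roughly

# How many times larger is the human brain, by neuron count alone?
factor = human_neurons // c_elegans_neurons
print(f"The human brain has roughly {factor:,} times as many neurons as C. elegans.")
```

That factor, close to 300 million, is one way of seeing why the authors push their human-emulation estimate out to at least 2070, though neuron count is of course only a crude proxy for the difficulty of emulation.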

Paglen/Pfister seem to be equating intelligence (brain power) with consciousness while Muehlhauser/Armstrong simply sidestep the issue. As they (Muehlhauser/Armstrong) note, it’s “thorny.”

If you consider thinkers like David Chalmers who suggest everything has consciousness, then it follows that computers/robots/etc. may not appreciate having a human brain emulation which takes us back into Battlestar Galactica territory. From my March 19, 2014 posting (one of the postings where I recounted various TED 2014 talks in Vancouver), here’s more about David Chalmers,

Finally, I wasn’t expecting to write about David Chalmers, so my notes aren’t very good. Chalmers is a philosopher; here’s an excerpt from his TED biography,

In his work, David Chalmers explores the “hard problem of consciousness” — the idea that science can’t ever explain our subjective experience.

David Chalmers is a philosopher at the Australian National University and New York University. He works in philosophy of mind and in related areas of philosophy and cognitive science. While he’s especially known for his theories on consciousness, he’s also interested (and has extensively published) in all sorts of other issues in the foundations of cognitive science, the philosophy of language, metaphysics and epistemology.

Chalmers provided an interesting bookend to a session started with a brain researcher (Nancy Kanwisher) who breaks the brain down into various processing regions (vastly oversimplified but the easiest way to summarize her work in this context). Chalmers reviewed the ‘science of consciousness’ and noted that current work in science tends to be reductionist, i.e., examining parts of things such as brains and that same reductionism has been brought to the question of consciousness.

Rather than trying to prove consciousness, Chalmers proposes that we consider it a fundamental in the same way that we consider time, space, and mass to be fundamental. He noted that there’s precedent for such additions and gave the example of James Clerk Maxwell and his proposal to consider electricity and magnetism as fundamental.

Chalmers’ next suggestion is a little more outré and is based on some thinking (sorry, I didn’t catch the theorist’s name) that suggests everything, including photons, has a type of consciousness (but not intelligence).

Have a great time at the movie!

Human, Soul & Machine: The Coming Singularity! exhibition Oct. 5, 2013 – August 31, 2014 at Baltimore’s Visionary Art Museum

Doug Rule’s Oct. 4, 2013 article for the Baltimore (Maryland, US) edition of the Metro Weekly highlights a rather unusual art/science exhibition (Note: Links have been removed),

Maybe the weirdest, wildest museum you’ll ever visit, Baltimore’s American Visionary Art Museum opens its 19th original thematic yearlong exhibition this weekend. Human, Soul & Machine: The Coming Singularity! is what the quirky museum, focused on presenting self-taught artists, bills as its most complex subject yet, a playful examination of the serious impact of technology — in all its forms, from artificial intelligence to nanotechnology to Big Data — on our lives, as seen through the eyes of more than 40 artists, futurists and inventors in a hot-wired blend of art, science, humor and imagination.

The show opened Oct. 5, 2013 and runs until August 31, 2014. The exhibition webpage offers a description of the show and curation,

Curated by AVAM founder and director Rebecca Alban Hoffberger, this stirring show harnesses the enchanting visual delights of remarkable visionary artists and their masterworks. Among them: Kenny Irwin’s Robotmas—a special installation from his Palm Springs Robo-Lights display, glowing inside of a central black box theater at the heart of this exhibition; a selection of Alex Grey’s Sacred Mirrors; O.L. Samuels’ 7-ft tall Godzilla—a creation first imagined in response to the devastating use of the A-bomb on Hiroshima and Nagasaki; Rigo 23’s delicate anti-drone drawings; Allen Christian’s life-sized Piano Family—a love song to string theory; Fred Carter’s massive wooden carvings—created as a warning of destruction from industry’s manipulation of nature; and much more!

The exhibition media kit features a striking (imo) graphic image representing the show,

American Visionary Art Museum graphic for the Human, Soul & Machine exhibition [downloaded from http://www.avam.org/news-and-events/pdf/press-kits/Singularity/HSM-MediaKit-Web.pdf]


The list of artists includes one person familiar to anyone following the ‘singularity’ story even occasionally, Ray Kurzweil.

Global Futures (GF) 2045 International Congress and transhumanism at the June 2013 meeting

Stuart Mason Dambrot has written a special article (part 1 only, part 2 has yet to be published) about the recent Global Futures 2045 Congress held June 15-16, 2013 (program) in New York City. Dambrot’s piece draws together contemporary research and frames it within the context of transhumanism. From the Aug. 1, 2013 feature on phys.org (Note: Links have been removed),

Futurists, visionaries, scientists, technologists, philosophers, and others who take this view to heart convened on June 15-16, 2013 in New York City at Global Futures 2045 International Congress: Towards a New Strategy for Human Evolution. GF2045 was organized by the 2045 Strategic Social Initiative founded by Russian entrepreneur Dmitry Itskov in February 2011 with the main goals of creating and realizing a new strategy for the development of humanity – one based upon our unique emerging capability to effect self-directed evolution. The initiative’s two main science projects are focused largely on Transhumanism – a multidisciplinary approach to analyzing the dynamic interplay between humanity and the acceleration of technology. Specifically, the 2045 Initiative’s projects seek to (1) enable an individual’s personality to be transferred to a more advanced non-biological substrate, and (2) extend life to the point of immortality …

Attendees were given a very dire view of the future followed by glimpses of another possible future provided we put our faith in science and technology. From Dambrot’s article (Note: Link has been removed),

… the late Dr. James Martin, who tragically passed away on June 24, 2013, gave a sweeping, engaging talk on The Transformation of Humankind—Extreme Paradigm Shifts Are Ahead of Us. An incredibly prolific author of books on computing and related technology, Dr. Martin founded the Oxford Martin School at Oxford University – an interdisciplinary research community comprising over 30 institutes and projects addressing the most pressing global challenges and opportunities of the 21st century. Dr. Martin – in the highly engaging manner for which he was renowned – presented a remarkably accessible survey of the interdependent trends that will increasingly threaten humanity over the coming decades. Dr. Martin made it disturbingly clear that population growth, resource consumption, water depletion, desertification, deforestation, ocean pollution and fish depopulation, atmospheric carbon dioxide, what he termed gigafamine (the death of more than a billion people as a consequence of food shortage by mid-century), and other factors are ominously close to their tipping points – after which their effects will be irreversible. (For example, he points out that in 20 years we’ll be consuming an obviously unsustainable 200 percent of then-available resources.) Taken together, he cautioned, these developments will constitute a “perfect storm” that will cause a Darwinian survival of the fittest in which “the Earth could be like a lifeboat that’s too small to save everyone.”

However, Dr. Martin also emphasized that there are solutions, discussing the trends and technologies that, even as he acknowledged the resistance to implementing or even understanding them, could have a positive impact on our future:

The Singularity and an emerging technocracy

Genetic engineering and Transhumanism, in particular, a synthetic 24th human chromosome that would contain non-inheritable genetic modifications and synthetic DNA sequences

Artificial Intelligence and nanorobotics

Yottascale computers capable of executing 10^24 operations per second

Quantum computing

Graphene – a one-atom thick layer of graphite with an ever-expanding portfolio of electronic, optical, excitonic, thermal, mechanical, and quantum properties, and an even longer list of potential applications

Autonomous automobiles

Nuclear batteries in the form of small, ultra-safe and maintenance-free underground Tokamak nuclear fusion reactors

Photovoltaics that make electricity more cheaply than coal

Capturing rainwater and floodwater to increase water supply

Eco-influence – Dr. Martin’s term for a rich, enjoyable and sometimes complex way of life that does no ecological harm
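To put the yottascale figure in the list above in perspective, here is a quick back-of-the-envelope calculation. The comparison point, an exascale machine performing 10^18 operations per second (roughly the top of today's supercomputer range), is my own assumption, not Dr. Martin's:

```python
# Back-of-the-envelope: how big is "yottascale"?
# Assumption: the baseline is an exascale machine (10**18 ops/sec);
# the yotta- prefix denotes 10**24.
yotta_ops = 10**24   # operations per second, Dr. Martin's figure
exa_ops = 10**18     # operations per second, assumed baseline machine

machines_needed = yotta_ops // exa_ops
print(machines_needed)  # → 1000000
```

In other words, a single yottascale computer would be equivalent to a million exascale machines running in parallel.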

Dambrot goes on to cover day one (I think that’s how he has this organized) of the event at length and provides a number of video panels and discussions. I was hoping he’d have part two posted by now but given how much work he’s put into part 1 it’s understandable that part 2 might take a while. So, I’ll keep an eye open for it and add a link here when it’s posted.

I did check Dambrot’s website and found this on the ‘Critical Thought’ bio webpage,

Stuart Mason Dambrot is an interdisciplinary science synthesist and communicator. He analyzes deep-structure conceptual and neural connections between multiple areas of knowledge and creativity, and monitors and extrapolates convergent and emergent trends in a wide range of research activities. Stuart is also the creator and host of Critical Thought | TV, an online discussion channel examining convergent and emergent trends in the sciences, arts and humanities. As an invited speaker, he has given talks on Exocortical Cognition, Emergent Technologies, Synthetic Biology, Transhumanism, Philosophy of Mind, Sociopolitical Futures, and other topics at New York Academy of Sciences, Cooper-Union, Science House, New York Future Salon, and other venues.

Stuart has a diverse background in Physiological Psychology, integrating Neuroscience, Cognitive Psychology, Artificial Intelligence, Neural Networks, Complexity Theory, Epistemology, Ethics, and Philosophy of Science. His memberships and affiliations include American Association for the Advancement of Science, New York Academy of Sciences, Lifeboat Foundation Advisory Board, Center for Inquiry, New York Futurist Society, Linnaean Society, National Association of Science Writers, Science Writers in New York, and Foreign Correspondents Club of Japan.

I have yet to find any written material by Dambrot that challenges transhumanism in any way, despite the fact that his website is called Critical Thought. This reservation aside, his pieces cover an interesting range of topics and I will try to get back to read more.

As for the GF 2045 initiative, I found this on their About us webpage,

The main goals of the 2045 Initiative: the creation and realization of a new strategy for the development of humanity which meets global civilization challenges; the creation of optimal conditions promoting the spiritual enlightenment of humanity; and the realization of a new futuristic reality based on 5 principles: high spirituality, high culture, high ethics, high science and high technologies.

The main science mega-project of the 2045 Initiative aims to create technologies enabling the transfer of an individual's personality to a more advanced non-biological carrier, and extending life, including to the point of immortality. We devote particular attention to enabling the fullest possible dialogue between the world's major spiritual traditions, science and society.

A large-scale transformation of humanity, comparable to some of the major spiritual and sci-tech revolutions in history, will require a new strategy. We believe this to be necessary to overcome existing crises, which threaten our planetary habitat and the continued existence of humanity as a species. With the 2045 Initiative, we hope to realize a new strategy for humanity’s development, and in so doing, create a more productive, fulfilling, and satisfying future.

The “2045” team is working towards creating an international research center where leading scientists will be engaged in research and development in the fields of anthropomorphic robotics, living systems modeling and brain and consciousness modeling with the goal of transferring one’s individual consciousness to an artificial carrier and achieving cybernetic immortality.

An annual congress, "The Global Future 2045," is organized by the Initiative to give a platform for discussing mankind's evolutionary strategy based on technologies of cybernetic immortality as well as the possible impact of such technologies on global society, politics and economies of the future.

Future prospects of “2045” Initiative for society

2015-2020

The emergence and widespread use of affordable android "avatars" controlled by a "brain-computer" interface. Coupled with related technologies, "avatars" will give people a number of new features: the ability to work in dangerous environments, perform rescue operations, travel in extreme situations, etc.

Avatar components will be used in medicine for the rehabilitation of fully or partially disabled patients, giving them prosthetic limbs or recovering lost senses.

2020-2025

Creation of an autonomous life-support system for the human brain linked to a robot, "avatar," will save people whose body is completely worn out or irreversibly damaged. Any patient with an intact brain will be able to return to a fully functioning bodily life. Such technologies will greatly enlarge the possibility of hybrid bio-electronic devices, thus creating a new IT revolution and will make all kinds of superimpositions of electronic and biological systems possible.

2030-2035

Creation of a computer model of the brain and human consciousness with the subsequent development of means to transfer individual consciousness onto an artificial carrier. This development will profoundly change the world: it will not only give everyone the possibility of cybernetic immortality but will also create a friendly artificial intelligence, expand human capabilities and provide opportunities for ordinary people to restore or modify their own brain multiple times. The final result at this stage can be a real revolution in the understanding of human nature that will completely change the human and technical prospects for humanity.

2045

This is the time when substance-independent minds will receive new bodies with capacities far exceeding those of ordinary humans. A new era for humanity will arrive! Changes will occur in all spheres of human activity – energy generation, transportation, politics, medicine, psychology, sciences, and so on.

Today it is hard to imagine a future when bodies consisting of nanorobots will become affordable and capable of taking any form. It is also hard to imagine body holograms featuring controlled matter. One thing is clear, however: humanity, for the first time in its history, will make a fully managed evolutionary transition and eventually become a new species. Moreover, prerequisites for a large-scale expansion into outer space will be created as well.

It all seems a bit grandiose to me and, frankly, I've never found the prospect of being downloaded onto a nonbiological substrate particularly appealing. As well, how are they going to tackle the incredibly complex process of downloading (or is it duplicating?) a brain? There's still a lot of debate as to how any brain works (a rat brain, a dog brain, etc.).

It all gets more complicated the more you think about it. Is a duplicate/downloaded brain exactly the same as the original? Digitized print materials are relatively simple compared to a brain and yet archivists are still trying to determine how one establishes authenticity with print materials that have been digitized and downloaded/uploaded.
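For what it's worth, one concrete piece of the archivists' puzzle, checking that a digital copy is bit-identical to its original, is routinely handled with cryptographic checksums ("fixity checking" in digital-preservation jargon). A minimal sketch in Python, with made-up sample byte strings standing in for real digitized files:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# A digitized page is just bytes; record its digest at ingest time...
original = b"Page 1 of a digitized manuscript"
recorded_digest = sha256_digest(original)

# ...and any later copy can be verified against that record.
copy = b"Page 1 of a digitized manuscript"
assert sha256_digest(copy) == recorded_digest  # bit-identical copy

# Any change, even one character, yields a completely different digest,
# so corruption or tampering is detectable.
altered = b"Page 2 of a digitized manuscript"
print(sha256_digest(altered) == recorded_digest)  # → False
```

Of course, bit-level identity is the easy part: it says nothing about the far harder question of whether a duplicated brain would be "the same" as its original.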

As well, I wonder if these grand dreamers have ever come across ‘the law of unintended consequences’. E.g. cane toads in Australia or DDT and other pesticides, which were intended as solutions and are now problems themselves.

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn't especially new in works of fiction. It's not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an 'explosion of intelligence'. The question it raises, 'what if our machines/creations become more intelligent than humans?', has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge is proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price along with Martin Rees, Emeritus Professor of Cosmology and Astrophysics, and Jaan Tallinn, Co-Founder of Skype, are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article for the Australia-based, The Conversation website, about their concerns,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone who's looking for more information about the 'intelligence explosion' or 'singularity' as it's also known, there's a Wikipedia essay on the topic. Also, you may want to stay tuned to this channel (blog) as I expect to have some news, later this week, about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university's Centre for Theoretical Neuroscience.

Waiting for Martha

Last April (2008), Canada’s National Institute of Nanotechnology (NINT) announced a new chairperson for their board, Martha Cook Piper. I was particularly interested in the news since she was the president of the University of British Columbia (UBC) for a number of years during which she maintained a pretty high profile locally and, I gather, nationally. She really turned things around at UBC and helped it gain more national prominence.

I contacted NINT and sent some interview questions in May or June last year. After some months (as I recall it was Sept. or Oct. 2008), I got an email address for Martha and redirected my queries to her. She was having a busy time during the fall and through Christmas into 2009 with the consequence that my questions have only recently been answered. At this point, someone at NINT is reviewing the answers and I’m hopeful that I will finally have the interview in the near future.

There is a documentary about Ray Kurzweil (‘Mr. Singularity’) making the rounds. You can see a trailer and a preview article here at Fast Company.

As you may have guessed, there’s not a lot of news today.