AI-led corporate entities as a new species of legal subject

An AI (artificial intelligence) agent running a business? Not to worry, lawyers are busy figuring out the implications, according to this October 26, 2023 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

For the first time in human history, say Daniel Gervais and John Nay in a Policy Forum, nonhuman entities that are not directed by humans – such as artificial intelligence (AI)-operated corporations – should enter the legal system as a new “species” of legal subject. AI has evolved to the point where it could function as a legal subject with rights and obligations, say the authors. As such, before the issue becomes too complex and difficult to disentangle, “interspecific” legal frameworks need to be developed by which AI can be treated as legal subjects, they write. Until now, the legal system has been univocal – it allows only humans to speak to its design and use. Nonhuman legal subjects like animals have necessarily instantiated their rights through human proxies. However, their inclusion is less about defining and protecting the rights and responsibilities of these nonhuman subjects and more a vehicle for addressing human interests and obligations as it relates to them.

In the United States, corporations are recognized as “artificial persons” within the legal system. However, the laws of some jurisdictions do not always explicitly require corporate entities to have human owners or managers at their helm. Thus, by law, nothing generally prevents an AI from operating a corporate entity. Here, Gervais and Nay highlight the rapidly realizing concept of AI-operated “zero-member LLCs” – or a corporate entity operating autonomously without any direct human involvement in the process. The authors discuss several pathways in which such AI-operated LLCs and their actions could be handled within the legal system.

As the idea of ceasing AI development and use is highly unrealistic, Gervais and Nay discuss other options, including regulating AI by treating the machines as legally inferior to humans or engineering AI systems to be law-abiding and bringing them into the legal fold now before it becomes too complicated to do so.

Gervais and Nay have written an October 26, 2023 essay “AIs could soon run businesses – it’s an opportunity to ensure these ‘artificial persons’ follow the law” for The Conversation, which helps clarify matters, Note: Links have been removed,

Only “persons” can engage with the legal system – for example, by signing contracts or filing lawsuits. There are two main categories of persons: humans, termed “natural persons,” and creations of the law, termed “artificial persons.” These include corporations, nonprofit organizations and limited liability companies (LLCs).

Up to now, artificial persons have served the purpose of helping humans achieve certain goals. For example, people can pool assets in a corporation and limit their liability vis-à-vis customers or other persons who interact with the corporation. But a new type of artificial person is poised to enter the scene – artificial intelligence systems, and they won’t necessarily serve human interests.

As scholars who study AI and law, we believe that this moment presents a significant challenge to the legal system: how to regulate AI within existing legal frameworks to reduce undesirable behaviors, and how to assign legal responsibility for autonomous actions of AIs.

One solution is teaching AIs to be law-abiding entities.

This is far from a philosophical question. The laws governing LLCs in several U.S. states do not require that humans oversee the operations of an LLC. In fact, in some states it is possible to have an LLC with no human owner, or “member” [emphasis mine] – for example, in cases where all of the partners have died. Though legislators probably weren’t thinking of AI when they crafted the LLC laws, the possibility for zero-member LLCs opens the door to creating LLCs operated by AIs.

Many functions inside small and large companies have already been delegated to AI in part, including financial operations, human resources and network management, to name just three. AIs can now perform many tasks as well as humans do. For example, AIs can read medical X-rays and do other medical tasks, and carry out tasks that require legal reasoning. This process is likely to accelerate due to innovation and economic interests.

I found the essay illuminating and the abstract for the paper (link to and citation for the paper at the end of this post) a little surprising,

Several experts have warned about artificial intelligence (AI) exceeding human capabilities, a “singularity” [emphasis mine] at which it might evolve beyond human control. Whether this will ever happen is a matter of conjecture. A legal singularity is afoot, however: For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new “species” of legal subjects. This possibility of an “interspecific” legal system provides an opportunity to consider how AI might be built and governed. We argue that the legal system may be more ready for AI agents than many believe. Rather than attempt to ban development of powerful AI, wrapping of AI in legal form could reduce undesired AI behavior by defining targets for legal action and by providing a research agenda to improve AI governance, by embedding law into AI agents, and by training AI compliance agents.

It was a little unexpected to see the ‘singularity’ mentioned; it’s a term I associate with the tech and sci-fi communities. For anyone unfamiliar with the term, here’s a description from the ‘Technological singularity’ Wikipedia entry, Note: Links have been removed,

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reports in 1958 an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term “singularity” were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to “the knotted space-time at the center of a black hole”,[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil’s 2005 book The Singularity Is Near, predicting singularity by 2045.[7]
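As an aside, the “runaway reaction” in Good’s intelligence explosion model is easy to sketch numerically. The toy loop below is my own illustration, not something from the sources quoted here; the starting value and improvement rate are arbitrary assumptions chosen only to show the shape of the feedback, where each generation improves itself by an amount proportional to how capable it already is.

```python
# Toy sketch of an "intelligence explosion" in the spirit of I. J. Good's model
# (my own illustration; the numbers below are arbitrary assumptions).
# Each cycle, the system improves itself by an amount proportional to its
# current capability, so the improvements themselves keep getting bigger.

capability = 1.0   # hypothetical starting capability (1.0 = baseline, human-level)
rate = 0.1         # assumed improvement gained per unit of capability, per cycle

for generation in range(1, 31):
    capability += rate * capability ** 2   # more capable systems improve themselves faster
    print(f"generation {generation:2d}: capability = {capability:,.1f}")
    if capability > 1_000_000:
        print("runaway growth: past a million times the starting capability")
        break
```

Run as written, the growth looks gradual for the first dozen cycles and then blows up within a few more, which is the qualitative point Vinge and Good are making, however one feels about the realism of the premise.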

Finally, here’s a link to and a citation for the paper,

Law could recognize nonhuman AI-led corporate entities by Daniel J. Gervais and John J. Nay. Science, 26 Oct 2023, Vol. 382, Issue 6669, pp. 376-378. DOI: 10.1126/science.adi8678

This paper is behind a paywall.

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘What if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge is proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price, Martin Rees (Emeritus Professor of Cosmology and Astrophysics), and Jaan Tallinn (co-founder of Skype) are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article about their concerns for the Australia-based website The Conversation,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone who’s looking for more information about the ‘intelligence explosion’ or ‘singularity,’ as it’s also known, there’s a Wikipedia essay on the topic. Also, you may want to stay tuned to this channel (blog), as I expect to have some news later this week about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university’s Centre for Theoretical Neuroscience.