AI-led corporate entities as a new species of legal subject

An AI (artificial intelligence) agent running a business? Not to worry: lawyers are already working through the implications, according to this October 26, 2023 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

For the first time in human history, say Daniel Gervais and John Nay in a Policy Forum, nonhuman entities that are not directed by humans – such as artificial intelligence (AI)-operated corporations – should enter the legal system as a new “species” of legal subject. AI has evolved to the point where it could function as a legal subject with rights and obligations, say the authors. As such, before the issue becomes too complex and difficult to disentangle, “interspecific” legal frameworks need to be developed by which AI can be treated as legal subjects, they write.

Until now, the legal system has been univocal – it allows only humans to speak to its design and use. Nonhuman legal subjects like animals have necessarily instantiated their rights through human proxies. However, their inclusion is less about defining and protecting the rights and responsibilities of these nonhuman subjects and more a vehicle for addressing human interests and obligations as it relates to them.

In the United States, corporations are recognized as “artificial persons” within the legal system. However, the laws of some jurisdictions do not always explicitly require corporate entities to have human owners or managers at their helm. Thus, by law, nothing generally prevents an AI from operating a corporate entity. Here, Gervais and Nay highlight the rapidly realizing concept of AI-operated “zero-member LLCs” – or a corporate entity operating autonomously without any direct human involvement in the process.

The authors discuss several pathways in which such AI-operated LLCs and their actions could be handled within the legal system. As the idea of ceasing AI development and use is highly unrealistic, Gervais and Nay discuss other options, including regulating AI by treating the machines as legally inferior to humans or engineering AI systems to be law-abiding and bringing them into the legal fold now before it becomes too complicated to do so.

Gervais and Nay have written an October 26, 2023 essay, “AIs could soon run businesses – it’s an opportunity to ensure these ‘artificial persons’ follow the law,” for The Conversation, which helps clarify matters. Note: Links have been removed,

Only “persons” can engage with the legal system – for example, by signing contracts or filing lawsuits. There are two main categories of persons: humans, termed “natural persons,” and creations of the law, termed “artificial persons.” These include corporations, nonprofit organizations and limited liability companies (LLCs).

Up to now, artificial persons have served the purpose of helping humans achieve certain goals. For example, people can pool assets in a corporation and limit their liability vis-à-vis customers or other persons who interact with the corporation. But a new type of artificial person is poised to enter the scene – artificial intelligence systems, and they won’t necessarily serve human interests.

As scholars who study AI and law, we believe that this moment presents a significant challenge to the legal system: how to regulate AI within existing legal frameworks to reduce undesirable behaviors, and how to assign legal responsibility for autonomous actions of AIs.

One solution is teaching AIs to be law-abiding entities.

This is far from a philosophical question. The laws governing LLCs in several U.S. states do not require that humans oversee the operations of an LLC. In fact, in some states it is possible to have an LLC with no human owner, or “member” [emphasis mine] – for example, in cases where all of the partners have died. Though legislators probably weren’t thinking of AI when they crafted the LLC laws, the possibility for zero-member LLCs opens the door to creating LLCs operated by AIs.

Many functions inside small and large companies have already been delegated to AI in part, including financial operations, human resources and network management, to name just three. AIs can now perform many tasks as well as humans do. For example, AIs can read medical X-rays and do other medical tasks, and carry out tasks that require legal reasoning. This process is likely to accelerate due to innovation and economic interests.
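Before getting to the abstract, it may help to picture what “teaching AIs to be law-abiding” could look like in practice. Here’s a minimal sketch in Python of a compliance layer that vets an autonomous agent’s proposed actions before they execute. To be clear, all of the names here (LawAbidingAgent, ComplianceRule, the sample rules) are my own hypothetical illustrations; the authors don’t prescribe any particular implementation.

```python
# Toy sketch of "embedding law into an AI agent": every action the agent
# proposes must pass a set of compliance rules before it executes.
# Hypothetical illustration only -- not from the Gervais and Nay paper.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    amount_usd: float      # e.g., a payment the agent wants to make
    counterparty: str

# A compliance rule returns None if the action passes,
# or a human-readable reason string if it violates the rule.
ComplianceRule = Callable[[Action], str | None]

def no_sanctioned_counterparties(action: Action) -> str | None:
    SANCTIONED = {"blocked-entity-llc"}  # stand-in watchlist
    if action.counterparty.lower() in SANCTIONED:
        return "counterparty appears on sanctions watchlist"
    return None

def reporting_threshold(action: Action) -> str | None:
    if action.amount_usd > 10_000:
        return "transaction exceeds reporting threshold; requires filing"
    return None

class LawAbidingAgent:
    """Wraps an autonomous decision policy in a legal-compliance gate."""

    def __init__(self, rules: list[ComplianceRule]):
        self.rules = rules
        self.audit_log: list[str] = []  # reviewable record of decisions

    def execute(self, action: Action) -> bool:
        violations = [msg for rule in self.rules
                      if (msg := rule(action)) is not None]
        if violations:
            self.audit_log.append(f"BLOCKED {action.description}: {violations}")
            return False
        self.audit_log.append(f"EXECUTED {action.description}")
        return True

agent = LawAbidingAgent([no_sanctioned_counterparties, reporting_threshold])
agent.execute(Action("pay invoice #42", 2_500.0, "Acme Supplies"))   # passes
agent.execute(Action("wire funds", 50_000.0, "blocked-entity-llc"))  # blocked
print("\n".join(agent.audit_log))
```

The design point is simply that legal constraints sit between the agent’s decision policy and the outside world, and that an audit log gives regulators something concrete to inspect – which echoes the paper’s framing of “defining targets for legal action.”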

I found the essay illuminating and the abstract for the paper (link and citation at the end of this post) a little surprising,

Several experts have warned about artificial intelligence (AI) exceeding human capabilities, a “singularity” [emphasis mine] at which it might evolve beyond human control. Whether this will ever happen is a matter of conjecture. A legal singularity is afoot, however: For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new “species” of legal subjects. This possibility of an “interspecific” legal system provides an opportunity to consider how AI might be built and governed. We argue that the legal system may be more ready for AI agents than many believe. Rather than attempt to ban development of powerful AI, wrapping of AI in legal form could reduce undesired AI behavior by defining targets for legal action and by providing a research agenda to improve AI governance, by embedding law into AI agents, and by training AI compliance agents.

It was a little unexpected to see the ‘singularity’ mentioned; it’s a term I associate with the tech and sci-fi communities. For anyone unfamiliar with the term, here’s a description from the ‘Technological singularity’ Wikipedia entry. Note: Links have been removed,

The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a “singularity” in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[5] Stanislaw Ulam reports in 1958 an earlier discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term “singularity” were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to “the knotted space-time at the center of a black hole”,[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil’s 2005 book The Singularity Is Near, predicting singularity by 2045.[7]
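As an aside, the ‘singularity’ label has a literal mathematical reading that the Wikipedia entry only hints at. In a toy continuous version of Good’s intelligence explosion (my own illustration; the parameters α and β are assumptions, not anything from the sources quoted here), a capability that improves itself faster than linearly doesn’t just grow quickly – it diverges at a finite time:

\[
\frac{dI}{dt} = \alpha I^{\beta}, \qquad \alpha > 0,\ \beta > 1,
\]

which, solved by separation of variables, gives

\[
I(t) = \left[ I_0^{\,1-\beta} - \alpha(\beta - 1)\,t \right]^{\frac{1}{1-\beta}},
\]

a quantity that blows up as \(t\) approaches the finite time

\[
t^{*} = \frac{I_0^{\,1-\beta}}{\alpha(\beta - 1)}.
\]

With β ≤ 1 (self-improvement with merely proportional or diminishing returns), growth is at most exponential and there is no finite-time blow-up – which is one way of framing the disagreement over whether a singularity is plausible at all.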

Finally, here’s a link to and a citation for the paper,

Law could recognize nonhuman AI-led corporate entities by Daniel J. Gervais and John J. Nay. Science, 26 October 2023, Vol. 382, Issue 6669, pp. 376-378. DOI: 10.1126/science.adi8678

This paper is behind a paywall.
