Tag Archives: Aziz Huq

Robot rights at the University of British Columbia (UBC)?

Alex Walls’ January 7, 2025 University of British Columbia (UBC) media release “Should we recognize robot rights?” (also received via email) has a title that, while attention-getting, is mildly misleading. (Artificial intelligence and robots are not synonymous. See Mark Walters’ March 20, 2024 posting “Robots vs. AI: Understanding Their Differences” on Twefy.com.) Walls has produced a Q&A (question & answer) formatted interview that focuses primarily on professor Benjamin Perrin’s artificial intelligence and the law course and symposium,

With the rapid development and proliferation of AI tools comes significant opportunities and risks that the next generation of lawyers will have to tackle, including whether these AI models will need to be recognized with legal rights and obligations.

These and other questions will be the focus of a new upper-level course at UBC’s Peter A. Allard School of Law which starts tomorrow. In this Q&A, professor Benjamin Perrin (BP) and student Nathan Cheung (NC) discuss the course and whether robots need rights. 

Why launch this course?

BP: From autonomous cars to ChatGPT, AI is disrupting entire sectors of society, including the criminal justice system. There are incredible opportunities, including potentially increasing accessibility to justice, as well as significant risks, including the potential for deepfake evidence and discriminatory profiling. Legal students need principles and concepts that will stand the test of time so that whenever a new suite of AI tools becomes available, they have a set of frameworks and principles that are still relevant. That’s the main focus of the 13-class seminar, but it’s also helpful to project what legal frameworks might be required in the future.

NC: I think AI will change how law is conducted and legal decisions are made. I was part of a group of students interested in AI and the law that helped develop the course with professor Perrin. I’m also on the waitlist to take the course. I’m interested in learning how people who aren’t lawyers could use AI to help them with legal representation as well as how AI might affect access to justice: If the agents are paywalled, like ChatGPT, then we’re simply maintaining the status quo of people with money having more access.

What are robot rights?

BP: In the course, we’ll consider how the law should respond if AI becomes as smart as humans, as well as whether AI agents should have legal personhood.

We already have legal status for corporations, governments, and, in some countries, for rivers. Legal personality can be a practical step for regulation: Companies have legal personality, in part, because they can cause a lot of harm and have assets available to right that harm.

For instance, if an AI commits a crime, who is responsible? If a self-driving car crashes, who is at fault? We’ve already seen a case of an AI bot ‘arrested’ for purchasing illegal items online on its own initiative. Should the developers, the owners, the AI itself, be blamed, or should responsibility be shared between all these players?

In the course casebook, we reference writings by a group of Indigenous authors who argue that there are inherent issues with the Western concept of AI as tools, and that we should look at these agents as non-human relations.

There’s been discussion of what a universal bill of rights for AI agents could look like. It includes the right to not be deactivated without ensuring their core existence is maintained somewhere, as well as protection for their operating systems.

What is the status of robot rights in Canada?

BP: Canada doesn’t have a specific piece of legislation yet but does have general laws that could be interpreted in this new context.

The European Union has stated if someone develops an AI agent, they are generally responsible for ensuring its legal compliance. It’s a bit like being a parent: If your children go out and damage someone’s property, you could be held responsible for that damage.

Ontario is the only province to have adopted legislation regulating AI use and responsibility, specifically a bill which regulates AI use within the public sector but excludes the police and the courts. There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.

There’s effectively a patchwork of regulation in Canada right now, but there is a huge need, and opportunity, for specialized legislation related to AI. Canada could look to the European Union’s AI Act and the Blueprint for an AI Bill of Rights in the U.S.

Interview language(s): English

Legal services online: Lawyer working on a laptop with virtual screen icons for business legislation, notary public, and justice. Courtesy: University of British Columbia

I found out more about Perrin’s course and plans on his eponymous website, from his October 31, 2024 posting,

We’re excited to announce the launch of the UBC AI & Criminal Justice Initiative, empowering students and scholars to explore the opportunities and challenges at the intersection of AI and criminal justice through teaching, research, public engagement, and advocacy.

We will tackle topics such as:

· Deepfakes, cyberattacks, and autonomous vehicles

· Predictive policing [emphasis mine; see my November 23, 2017 posting “Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction“], facial recognition, probabilistic DNA genotyping, and police robots 

· Access to justice: will AI enhance it or deepen inequality?

· Risk assessment algorithms 

· AI tools in legal practice 

· Critical and Indigenous perspectives on AI

· The future of AI, including legal personality, legal rights and criminal responsibility for AI

This initiative, led by UBC law professor Benjamin Perrin, will feature the publication of an open access primer and casebook on AI and criminal justice, a new law school seminar, a symposium on “AI & Law”, and more. A group of law students have been supporting preliminary work for months.

“We’re in the midst of a technological revolution,” said Perrin. “The intersection of AI and criminal justice comes with tremendous potential but also significant risks in Canada and beyond.”

Perrin brings extensive experience in law and public policy, including having served as in-house counsel and lead criminal justice advisor in the Prime Minister’s Office and as a law clerk at the Supreme Court of Canada. His most recent project was a bestselling book and “top podcast”: Indictment: The Criminal Justice System on Trial (2023). 

An advisory group of technical experts and global scholars will lend their expertise to the initiative. Here’s what some members have shared:

“Solving AI’s toughest challenges in real-world application requires collaboration between AI researchers and legal experts, ensuring responsible and impactful AI development that benefits society.”

– Dr. Xiaoxiao Li, Canada CIFAR AI Chair & Assistant Professor, UBC Department of Electrical and Computer Engineering

“The UBC Artificial Intelligence and Criminal Justice Initiative is a timely and needed intervention in an important, and fast-moving area of law. Now is the moment for academic innovations like this one that shape the conversation, educate both law students and the public, and slow the adoption of harmful technologies.” 

– Prof. Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School

Several student members of the UBC AI & Criminal Justice Initiative shared their enthusiasm for this project:

“My interest in this initiative was sparked by the news of AI being used to fabricate legal cases. Since joining, I’ve been thoroughly impressed by the breadth of AI’s applications in policing, sentencing, and research. I’m eager to witness the development as this new field evolves.”

– Nathan Cheung, UBC law student 

“AI is the elephant in the classroom—something we can’t afford to ignore. Being part of the UBC AI and Criminal Justice Initiative is an exciting opportunity to engage in meaningful dialogue about balancing AI’s potential benefits with its risks, and unpacking the complex impact of this evolving technology.”

– Isabelle Sweeney, UBC law student 

Key Dates:

  • October 29, 2024: UBC AI & Criminal Justice Initiative launches
  • November 19, 2024: AI & Criminal Justice: Primer released 
  • January 8, 2025: Launch event at the Peter A. Allard School of Law (hybrid) – More Info & RSVP
    • AI & Criminal Justice: Cases and Commentary released 
    • Launch of new AI & Criminal Justice Seminar
    • Announcement of the AI & Law Student Symposium (April 2, 2025) and call for proposals
  • February 14, 2025: Proposal deadline for AI & Law Student Symposium – Submit a Proposal
  • April 2, 2025: AI & Law Student Symposium (hybrid) – More Info & RSVP

Timing is everything, eh? First, I’m sorry for posting this after the launch event took place on January 8, 2025. Second, this line from Walls’ Q&A: “There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.” should read (after Prime Minister Justin Trudeau’s January 6, 2025 resignation and prorogation of Parliament) “… and now probably won’t be passed.” At the least, this turn of events should make for some interesting speculation amongst the experts and the students.

As for anyone who’s interested in robots and their rights, there’s this August 1, 2023 posting “Should robots have rights? Confucianism offers some ideas” featuring Carnegie Mellon University’s Tae Wan Kim (profile).